Oct  2 06:49:06 np0005466031 kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct  2 06:49:06 np0005466031 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  2 06:49:06 np0005466031 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  2 06:49:06 np0005466031 kernel: BIOS-provided physical RAM map:
Oct  2 06:49:06 np0005466031 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  2 06:49:06 np0005466031 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  2 06:49:06 np0005466031 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  2 06:49:06 np0005466031 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct  2 06:49:06 np0005466031 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct  2 06:49:06 np0005466031 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  2 06:49:06 np0005466031 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  2 06:49:06 np0005466031 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct  2 06:49:06 np0005466031 kernel: NX (Execute Disable) protection: active
Oct  2 06:49:06 np0005466031 kernel: APIC: Static calls initialized
Oct  2 06:49:06 np0005466031 kernel: SMBIOS 2.8 present.
Oct  2 06:49:06 np0005466031 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct  2 06:49:06 np0005466031 kernel: Hypervisor detected: KVM
Oct  2 06:49:06 np0005466031 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  2 06:49:06 np0005466031 kernel: kvm-clock: using sched offset of 5716611689 cycles
Oct  2 06:49:06 np0005466031 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  2 06:49:06 np0005466031 kernel: tsc: Detected 2800.000 MHz processor
Oct  2 06:49:06 np0005466031 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct  2 06:49:06 np0005466031 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  2 06:49:06 np0005466031 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  2 06:49:06 np0005466031 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct  2 06:49:06 np0005466031 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct  2 06:49:06 np0005466031 kernel: Using GB pages for direct mapping
Oct  2 06:49:06 np0005466031 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  2 06:49:06 np0005466031 kernel: ACPI: Early table checksum verification disabled
Oct  2 06:49:06 np0005466031 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct  2 06:49:06 np0005466031 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:49:06 np0005466031 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:49:06 np0005466031 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:49:06 np0005466031 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct  2 06:49:06 np0005466031 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:49:06 np0005466031 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:49:06 np0005466031 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct  2 06:49:06 np0005466031 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct  2 06:49:06 np0005466031 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct  2 06:49:06 np0005466031 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct  2 06:49:06 np0005466031 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct  2 06:49:06 np0005466031 kernel: No NUMA configuration found
Oct  2 06:49:06 np0005466031 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct  2 06:49:06 np0005466031 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct  2 06:49:06 np0005466031 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct  2 06:49:06 np0005466031 kernel: Zone ranges:
Oct  2 06:49:06 np0005466031 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  2 06:49:06 np0005466031 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  2 06:49:06 np0005466031 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct  2 06:49:06 np0005466031 kernel:  Device   empty
Oct  2 06:49:06 np0005466031 kernel: Movable zone start for each node
Oct  2 06:49:06 np0005466031 kernel: Early memory node ranges
Oct  2 06:49:06 np0005466031 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  2 06:49:06 np0005466031 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct  2 06:49:06 np0005466031 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct  2 06:49:06 np0005466031 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct  2 06:49:06 np0005466031 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  2 06:49:06 np0005466031 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  2 06:49:06 np0005466031 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  2 06:49:06 np0005466031 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  2 06:49:06 np0005466031 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  2 06:49:06 np0005466031 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  2 06:49:06 np0005466031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  2 06:49:06 np0005466031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  2 06:49:06 np0005466031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  2 06:49:06 np0005466031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  2 06:49:06 np0005466031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  2 06:49:06 np0005466031 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  2 06:49:06 np0005466031 kernel: TSC deadline timer available
Oct  2 06:49:06 np0005466031 kernel: CPU topo: Max. logical packages:   8
Oct  2 06:49:06 np0005466031 kernel: CPU topo: Max. logical dies:       8
Oct  2 06:49:06 np0005466031 kernel: CPU topo: Max. dies per package:   1
Oct  2 06:49:06 np0005466031 kernel: CPU topo: Max. threads per core:   1
Oct  2 06:49:06 np0005466031 kernel: CPU topo: Num. cores per package:     1
Oct  2 06:49:06 np0005466031 kernel: CPU topo: Num. threads per package:   1
Oct  2 06:49:06 np0005466031 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct  2 06:49:06 np0005466031 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  2 06:49:06 np0005466031 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  2 06:49:06 np0005466031 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  2 06:49:06 np0005466031 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  2 06:49:06 np0005466031 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  2 06:49:06 np0005466031 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct  2 06:49:06 np0005466031 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct  2 06:49:06 np0005466031 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  2 06:49:06 np0005466031 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  2 06:49:06 np0005466031 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  2 06:49:06 np0005466031 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct  2 06:49:06 np0005466031 kernel: Booting paravirtualized kernel on KVM
Oct  2 06:49:06 np0005466031 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  2 06:49:06 np0005466031 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct  2 06:49:06 np0005466031 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct  2 06:49:06 np0005466031 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct  2 06:49:06 np0005466031 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  2 06:49:06 np0005466031 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  2 06:49:06 np0005466031 kernel: random: crng init done
Oct  2 06:49:06 np0005466031 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  2 06:49:06 np0005466031 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  2 06:49:06 np0005466031 kernel: Fallback order for Node 0: 0 
Oct  2 06:49:06 np0005466031 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  2 06:49:06 np0005466031 kernel: Policy zone: Normal
Oct  2 06:49:06 np0005466031 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  2 06:49:06 np0005466031 kernel: software IO TLB: area num 8.
Oct  2 06:49:06 np0005466031 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct  2 06:49:06 np0005466031 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  2 06:49:06 np0005466031 kernel: ftrace: allocated 193 pages with 3 groups
Oct  2 06:49:06 np0005466031 kernel: Dynamic Preempt: voluntary
Oct  2 06:49:06 np0005466031 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  2 06:49:06 np0005466031 kernel: rcu: 	RCU event tracing is enabled.
Oct  2 06:49:06 np0005466031 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct  2 06:49:06 np0005466031 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct  2 06:49:06 np0005466031 kernel: 	Rude variant of Tasks RCU enabled.
Oct  2 06:49:06 np0005466031 kernel: 	Tracing variant of Tasks RCU enabled.
Oct  2 06:49:06 np0005466031 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  2 06:49:06 np0005466031 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct  2 06:49:06 np0005466031 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  2 06:49:06 np0005466031 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  2 06:49:06 np0005466031 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  2 06:49:06 np0005466031 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct  2 06:49:06 np0005466031 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  2 06:49:06 np0005466031 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  2 06:49:06 np0005466031 kernel: Console: colour VGA+ 80x25
Oct  2 06:49:06 np0005466031 kernel: printk: console [ttyS0] enabled
Oct  2 06:49:06 np0005466031 kernel: ACPI: Core revision 20230331
Oct  2 06:49:06 np0005466031 kernel: APIC: Switch to symmetric I/O mode setup
Oct  2 06:49:06 np0005466031 kernel: x2apic enabled
Oct  2 06:49:06 np0005466031 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  2 06:49:06 np0005466031 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  2 06:49:06 np0005466031 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct  2 06:49:06 np0005466031 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  2 06:49:06 np0005466031 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  2 06:49:06 np0005466031 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  2 06:49:06 np0005466031 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  2 06:49:06 np0005466031 kernel: Spectre V2 : Mitigation: Retpolines
Oct  2 06:49:06 np0005466031 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  2 06:49:06 np0005466031 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct  2 06:49:06 np0005466031 kernel: RETBleed: Mitigation: untrained return thunk
Oct  2 06:49:06 np0005466031 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  2 06:49:06 np0005466031 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  2 06:49:06 np0005466031 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  2 06:49:06 np0005466031 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  2 06:49:06 np0005466031 kernel: x86/bugs: return thunk changed
Oct  2 06:49:06 np0005466031 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  2 06:49:06 np0005466031 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  2 06:49:06 np0005466031 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  2 06:49:06 np0005466031 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  2 06:49:06 np0005466031 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  2 06:49:06 np0005466031 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct  2 06:49:06 np0005466031 kernel: Freeing SMP alternatives memory: 40K
Oct  2 06:49:06 np0005466031 kernel: pid_max: default: 32768 minimum: 301
Oct  2 06:49:06 np0005466031 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  2 06:49:06 np0005466031 kernel: landlock: Up and running.
Oct  2 06:49:06 np0005466031 kernel: Yama: becoming mindful.
Oct  2 06:49:06 np0005466031 kernel: SELinux:  Initializing.
Oct  2 06:49:06 np0005466031 kernel: LSM support for eBPF active
Oct  2 06:49:06 np0005466031 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  2 06:49:06 np0005466031 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  2 06:49:06 np0005466031 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct  2 06:49:06 np0005466031 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  2 06:49:06 np0005466031 kernel: ... version:                0
Oct  2 06:49:06 np0005466031 kernel: ... bit width:              48
Oct  2 06:49:06 np0005466031 kernel: ... generic registers:      6
Oct  2 06:49:06 np0005466031 kernel: ... value mask:             0000ffffffffffff
Oct  2 06:49:06 np0005466031 kernel: ... max period:             00007fffffffffff
Oct  2 06:49:06 np0005466031 kernel: ... fixed-purpose events:   0
Oct  2 06:49:06 np0005466031 kernel: ... event mask:             000000000000003f
Oct  2 06:49:06 np0005466031 kernel: signal: max sigframe size: 1776
Oct  2 06:49:06 np0005466031 kernel: rcu: Hierarchical SRCU implementation.
Oct  2 06:49:06 np0005466031 kernel: rcu: 	Max phase no-delay instances is 400.
Oct  2 06:49:06 np0005466031 kernel: smp: Bringing up secondary CPUs ...
Oct  2 06:49:06 np0005466031 kernel: smpboot: x86: Booting SMP configuration:
Oct  2 06:49:06 np0005466031 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct  2 06:49:06 np0005466031 kernel: smp: Brought up 1 node, 8 CPUs
Oct  2 06:49:06 np0005466031 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct  2 06:49:06 np0005466031 kernel: node 0 deferred pages initialised in 23ms
Oct  2 06:49:06 np0005466031 kernel: Memory: 7765608K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616504K reserved, 0K cma-reserved)
Oct  2 06:49:06 np0005466031 kernel: devtmpfs: initialized
Oct  2 06:49:06 np0005466031 kernel: x86/mm: Memory block size: 128MB
Oct  2 06:49:06 np0005466031 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  2 06:49:06 np0005466031 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct  2 06:49:06 np0005466031 kernel: pinctrl core: initialized pinctrl subsystem
Oct  2 06:49:06 np0005466031 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  2 06:49:06 np0005466031 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  2 06:49:06 np0005466031 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  2 06:49:06 np0005466031 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  2 06:49:06 np0005466031 kernel: audit: initializing netlink subsys (disabled)
Oct  2 06:49:06 np0005466031 kernel: audit: type=2000 audit(1759402144.750:1): state=initialized audit_enabled=0 res=1
Oct  2 06:49:06 np0005466031 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  2 06:49:06 np0005466031 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  2 06:49:06 np0005466031 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  2 06:49:06 np0005466031 kernel: cpuidle: using governor menu
Oct  2 06:49:06 np0005466031 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  2 06:49:06 np0005466031 kernel: PCI: Using configuration type 1 for base access
Oct  2 06:49:06 np0005466031 kernel: PCI: Using configuration type 1 for extended access
Oct  2 06:49:06 np0005466031 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  2 06:49:06 np0005466031 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  2 06:49:06 np0005466031 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  2 06:49:06 np0005466031 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  2 06:49:06 np0005466031 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  2 06:49:06 np0005466031 kernel: Demotion targets for Node 0: null
Oct  2 06:49:06 np0005466031 kernel: cryptd: max_cpu_qlen set to 1000
Oct  2 06:49:06 np0005466031 kernel: ACPI: Added _OSI(Module Device)
Oct  2 06:49:06 np0005466031 kernel: ACPI: Added _OSI(Processor Device)
Oct  2 06:49:06 np0005466031 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  2 06:49:06 np0005466031 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  2 06:49:06 np0005466031 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  2 06:49:06 np0005466031 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  2 06:49:06 np0005466031 kernel: ACPI: Interpreter enabled
Oct  2 06:49:06 np0005466031 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct  2 06:49:06 np0005466031 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  2 06:49:06 np0005466031 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  2 06:49:06 np0005466031 kernel: PCI: Using E820 reservations for host bridge windows
Oct  2 06:49:06 np0005466031 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct  2 06:49:06 np0005466031 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  2 06:49:06 np0005466031 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [3] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [4] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [5] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [6] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [7] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [8] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [9] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [10] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [11] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [12] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [13] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [14] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [15] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [16] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [17] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [18] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [19] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [20] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [21] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [22] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [23] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [24] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [25] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [26] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [27] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [28] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [29] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [30] registered
Oct  2 06:49:06 np0005466031 kernel: acpiphp: Slot [31] registered
Oct  2 06:49:06 np0005466031 kernel: PCI host bridge to bus 0000:00
Oct  2 06:49:06 np0005466031 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  2 06:49:06 np0005466031 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  2 06:49:06 np0005466031 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  2 06:49:06 np0005466031 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  2 06:49:06 np0005466031 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct  2 06:49:06 np0005466031 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct  2 06:49:06 np0005466031 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  2 06:49:06 np0005466031 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  2 06:49:06 np0005466031 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  2 06:49:06 np0005466031 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  2 06:49:06 np0005466031 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct  2 06:49:06 np0005466031 kernel: iommu: Default domain type: Translated
Oct  2 06:49:06 np0005466031 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  2 06:49:06 np0005466031 kernel: SCSI subsystem initialized
Oct  2 06:49:06 np0005466031 kernel: ACPI: bus type USB registered
Oct  2 06:49:06 np0005466031 kernel: usbcore: registered new interface driver usbfs
Oct  2 06:49:06 np0005466031 kernel: usbcore: registered new interface driver hub
Oct  2 06:49:06 np0005466031 kernel: usbcore: registered new device driver usb
Oct  2 06:49:06 np0005466031 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  2 06:49:06 np0005466031 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  2 06:49:06 np0005466031 kernel: PTP clock support registered
Oct  2 06:49:06 np0005466031 kernel: EDAC MC: Ver: 3.0.0
Oct  2 06:49:06 np0005466031 kernel: NetLabel: Initializing
Oct  2 06:49:06 np0005466031 kernel: NetLabel:  domain hash size = 128
Oct  2 06:49:06 np0005466031 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  2 06:49:06 np0005466031 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  2 06:49:06 np0005466031 kernel: PCI: Using ACPI for IRQ routing
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  2 06:49:06 np0005466031 kernel: vgaarb: loaded
Oct  2 06:49:06 np0005466031 kernel: clocksource: Switched to clocksource kvm-clock
Oct  2 06:49:06 np0005466031 kernel: VFS: Disk quotas dquot_6.6.0
Oct  2 06:49:06 np0005466031 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  2 06:49:06 np0005466031 kernel: pnp: PnP ACPI init
Oct  2 06:49:06 np0005466031 kernel: pnp: PnP ACPI: found 5 devices
Oct  2 06:49:06 np0005466031 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  2 06:49:06 np0005466031 kernel: NET: Registered PF_INET protocol family
Oct  2 06:49:06 np0005466031 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  2 06:49:06 np0005466031 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  2 06:49:06 np0005466031 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  2 06:49:06 np0005466031 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  2 06:49:06 np0005466031 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  2 06:49:06 np0005466031 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  2 06:49:06 np0005466031 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  2 06:49:06 np0005466031 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  2 06:49:06 np0005466031 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  2 06:49:06 np0005466031 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  2 06:49:06 np0005466031 kernel: NET: Registered PF_XDP protocol family
Oct  2 06:49:06 np0005466031 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  2 06:49:06 np0005466031 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  2 06:49:06 np0005466031 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  2 06:49:06 np0005466031 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct  2 06:49:06 np0005466031 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct  2 06:49:06 np0005466031 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct  2 06:49:06 np0005466031 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 98293 usecs
Oct  2 06:49:06 np0005466031 kernel: PCI: CLS 0 bytes, default 64
Oct  2 06:49:06 np0005466031 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  2 06:49:06 np0005466031 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct  2 06:49:06 np0005466031 kernel: ACPI: bus type thunderbolt registered
Oct  2 06:49:06 np0005466031 kernel: Trying to unpack rootfs image as initramfs...
Oct  2 06:49:06 np0005466031 kernel: Initialise system trusted keyrings
Oct  2 06:49:06 np0005466031 kernel: Key type blacklist registered
Oct  2 06:49:06 np0005466031 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  2 06:49:06 np0005466031 kernel: zbud: loaded
Oct  2 06:49:06 np0005466031 kernel: integrity: Platform Keyring initialized
Oct  2 06:49:06 np0005466031 kernel: integrity: Machine keyring initialized
Oct  2 06:49:06 np0005466031 kernel: Freeing initrd memory: 86104K
Oct  2 06:49:06 np0005466031 kernel: NET: Registered PF_ALG protocol family
Oct  2 06:49:06 np0005466031 kernel: xor: automatically using best checksumming function   avx       
Oct  2 06:49:06 np0005466031 kernel: Key type asymmetric registered
Oct  2 06:49:06 np0005466031 kernel: Asymmetric key parser 'x509' registered
Oct  2 06:49:06 np0005466031 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  2 06:49:06 np0005466031 kernel: io scheduler mq-deadline registered
Oct  2 06:49:06 np0005466031 kernel: io scheduler kyber registered
Oct  2 06:49:06 np0005466031 kernel: io scheduler bfq registered
Oct  2 06:49:06 np0005466031 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  2 06:49:06 np0005466031 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  2 06:49:06 np0005466031 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  2 06:49:06 np0005466031 kernel: ACPI: button: Power Button [PWRF]
Oct  2 06:49:06 np0005466031 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct  2 06:49:06 np0005466031 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct  2 06:49:06 np0005466031 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct  2 06:49:06 np0005466031 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  2 06:49:06 np0005466031 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  2 06:49:06 np0005466031 kernel: Non-volatile memory driver v1.3
Oct  2 06:49:06 np0005466031 kernel: rdac: device handler registered
Oct  2 06:49:06 np0005466031 kernel: hp_sw: device handler registered
Oct  2 06:49:06 np0005466031 kernel: emc: device handler registered
Oct  2 06:49:06 np0005466031 kernel: alua: device handler registered
Oct  2 06:49:06 np0005466031 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct  2 06:49:06 np0005466031 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct  2 06:49:06 np0005466031 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct  2 06:49:06 np0005466031 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct  2 06:49:06 np0005466031 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  2 06:49:06 np0005466031 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  2 06:49:06 np0005466031 kernel: usb usb1: Product: UHCI Host Controller
Oct  2 06:49:06 np0005466031 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  2 06:49:06 np0005466031 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct  2 06:49:06 np0005466031 kernel: hub 1-0:1.0: USB hub found
Oct  2 06:49:06 np0005466031 kernel: hub 1-0:1.0: 2 ports detected
Oct  2 06:49:06 np0005466031 kernel: usbcore: registered new interface driver usbserial_generic
Oct  2 06:49:06 np0005466031 kernel: usbserial: USB Serial support registered for generic
Oct  2 06:49:06 np0005466031 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  2 06:49:06 np0005466031 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  2 06:49:06 np0005466031 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  2 06:49:06 np0005466031 kernel: mousedev: PS/2 mouse device common for all mice
Oct  2 06:49:06 np0005466031 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct  2 06:49:06 np0005466031 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  2 06:49:06 np0005466031 kernel: rtc_cmos 00:04: registered as rtc0
Oct  2 06:49:06 np0005466031 kernel: rtc_cmos 00:04: setting system clock to 2025-10-02T10:49:05 UTC (1759402145)
Oct  2 06:49:06 np0005466031 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct  2 06:49:06 np0005466031 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  2 06:49:06 np0005466031 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  2 06:49:06 np0005466031 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  2 06:49:06 np0005466031 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  2 06:49:06 np0005466031 kernel: usbcore: registered new interface driver usbhid
Oct  2 06:49:06 np0005466031 kernel: usbhid: USB HID core driver
Oct  2 06:49:06 np0005466031 kernel: drop_monitor: Initializing network drop monitor service
Oct  2 06:49:06 np0005466031 kernel: Initializing XFRM netlink socket
Oct  2 06:49:06 np0005466031 kernel: NET: Registered PF_INET6 protocol family
Oct  2 06:49:06 np0005466031 kernel: Segment Routing with IPv6
Oct  2 06:49:06 np0005466031 kernel: NET: Registered PF_PACKET protocol family
Oct  2 06:49:06 np0005466031 kernel: mpls_gso: MPLS GSO support
Oct  2 06:49:06 np0005466031 kernel: IPI shorthand broadcast: enabled
Oct  2 06:49:06 np0005466031 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  2 06:49:06 np0005466031 kernel: AES CTR mode by8 optimization enabled
Oct  2 06:49:06 np0005466031 kernel: sched_clock: Marking stable (1199004200, 146351090)->(1468121190, -122765900)
Oct  2 06:49:06 np0005466031 kernel: registered taskstats version 1
Oct  2 06:49:06 np0005466031 kernel: Loading compiled-in X.509 certificates
Oct  2 06:49:06 np0005466031 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  2 06:49:06 np0005466031 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  2 06:49:06 np0005466031 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  2 06:49:06 np0005466031 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  2 06:49:06 np0005466031 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  2 06:49:06 np0005466031 kernel: Demotion targets for Node 0: null
Oct  2 06:49:06 np0005466031 kernel: page_owner is disabled
Oct  2 06:49:06 np0005466031 kernel: Key type .fscrypt registered
Oct  2 06:49:06 np0005466031 kernel: Key type fscrypt-provisioning registered
Oct  2 06:49:06 np0005466031 kernel: Key type big_key registered
Oct  2 06:49:06 np0005466031 kernel: Key type encrypted registered
Oct  2 06:49:06 np0005466031 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  2 06:49:06 np0005466031 kernel: Loading compiled-in module X.509 certificates
Oct  2 06:49:06 np0005466031 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  2 06:49:06 np0005466031 kernel: ima: Allocated hash algorithm: sha256
Oct  2 06:49:06 np0005466031 kernel: ima: No architecture policies found
Oct  2 06:49:06 np0005466031 kernel: evm: Initialising EVM extended attributes:
Oct  2 06:49:06 np0005466031 kernel: evm: security.selinux
Oct  2 06:49:06 np0005466031 kernel: evm: security.SMACK64 (disabled)
Oct  2 06:49:06 np0005466031 kernel: evm: security.SMACK64EXEC (disabled)
Oct  2 06:49:06 np0005466031 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  2 06:49:06 np0005466031 kernel: evm: security.SMACK64MMAP (disabled)
Oct  2 06:49:06 np0005466031 kernel: evm: security.apparmor (disabled)
Oct  2 06:49:06 np0005466031 kernel: evm: security.ima
Oct  2 06:49:06 np0005466031 kernel: evm: security.capability
Oct  2 06:49:06 np0005466031 kernel: evm: HMAC attrs: 0x1
Oct  2 06:49:06 np0005466031 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  2 06:49:06 np0005466031 kernel: Running certificate verification RSA selftest
Oct  2 06:49:06 np0005466031 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  2 06:49:06 np0005466031 kernel: Running certificate verification ECDSA selftest
Oct  2 06:49:06 np0005466031 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  2 06:49:06 np0005466031 kernel: clk: Disabling unused clocks
Oct  2 06:49:06 np0005466031 kernel: Freeing unused decrypted memory: 2028K
Oct  2 06:49:06 np0005466031 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  2 06:49:06 np0005466031 kernel: Write protecting the kernel read-only data: 30720k
Oct  2 06:49:06 np0005466031 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  2 06:49:06 np0005466031 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  2 06:49:06 np0005466031 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  2 06:49:06 np0005466031 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  2 06:49:06 np0005466031 kernel: usb 1-1: Manufacturer: QEMU
Oct  2 06:49:06 np0005466031 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct  2 06:49:06 np0005466031 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  2 06:49:06 np0005466031 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct  2 06:49:06 np0005466031 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  2 06:49:06 np0005466031 kernel: Run /init as init process
Oct  2 06:49:06 np0005466031 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  2 06:49:06 np0005466031 systemd: Detected virtualization kvm.
Oct  2 06:49:06 np0005466031 systemd: Detected architecture x86-64.
Oct  2 06:49:06 np0005466031 systemd: Running in initrd.
Oct  2 06:49:06 np0005466031 systemd: No hostname configured, using default hostname.
Oct  2 06:49:06 np0005466031 systemd: Hostname set to <localhost>.
Oct  2 06:49:06 np0005466031 systemd: Initializing machine ID from VM UUID.
Oct  2 06:49:06 np0005466031 systemd: Queued start job for default target Initrd Default Target.
Oct  2 06:49:06 np0005466031 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  2 06:49:06 np0005466031 systemd: Reached target Local Encrypted Volumes.
Oct  2 06:49:06 np0005466031 systemd: Reached target Initrd /usr File System.
Oct  2 06:49:06 np0005466031 systemd: Reached target Local File Systems.
Oct  2 06:49:06 np0005466031 systemd: Reached target Path Units.
Oct  2 06:49:06 np0005466031 systemd: Reached target Slice Units.
Oct  2 06:49:06 np0005466031 systemd: Reached target Swaps.
Oct  2 06:49:06 np0005466031 systemd: Reached target Timer Units.
Oct  2 06:49:06 np0005466031 systemd: Listening on D-Bus System Message Bus Socket.
Oct  2 06:49:06 np0005466031 systemd: Listening on Journal Socket (/dev/log).
Oct  2 06:49:06 np0005466031 systemd: Listening on Journal Socket.
Oct  2 06:49:06 np0005466031 systemd: Listening on udev Control Socket.
Oct  2 06:49:06 np0005466031 systemd: Listening on udev Kernel Socket.
Oct  2 06:49:06 np0005466031 systemd: Reached target Socket Units.
Oct  2 06:49:06 np0005466031 systemd: Starting Create List of Static Device Nodes...
Oct  2 06:49:06 np0005466031 systemd: Starting Journal Service...
Oct  2 06:49:06 np0005466031 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  2 06:49:06 np0005466031 systemd: Starting Apply Kernel Variables...
Oct  2 06:49:06 np0005466031 systemd: Starting Create System Users...
Oct  2 06:49:06 np0005466031 systemd: Starting Setup Virtual Console...
Oct  2 06:49:06 np0005466031 systemd: Finished Create List of Static Device Nodes.
Oct  2 06:49:06 np0005466031 systemd: Finished Apply Kernel Variables.
Oct  2 06:49:06 np0005466031 systemd-journald[310]: Journal started
Oct  2 06:49:06 np0005466031 systemd-journald[310]: Runtime Journal (/run/log/journal/91df6c8e6fe249d29991360b14608f11) is 8.0M, max 153.5M, 145.5M free.
Oct  2 06:49:06 np0005466031 systemd-sysusers[314]: Creating group 'users' with GID 100.
Oct  2 06:49:06 np0005466031 systemd: Started Journal Service.
Oct  2 06:49:06 np0005466031 systemd-sysusers[314]: Creating group 'dbus' with GID 81.
Oct  2 06:49:06 np0005466031 systemd-sysusers[314]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  2 06:49:06 np0005466031 systemd[1]: Finished Create System Users.
Oct  2 06:49:06 np0005466031 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  2 06:49:06 np0005466031 systemd[1]: Starting Create Volatile Files and Directories...
Oct  2 06:49:06 np0005466031 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  2 06:49:06 np0005466031 systemd[1]: Finished Create Volatile Files and Directories.
Oct  2 06:49:06 np0005466031 systemd[1]: Finished Setup Virtual Console.
Oct  2 06:49:06 np0005466031 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  2 06:49:06 np0005466031 systemd[1]: Starting dracut cmdline hook...
Oct  2 06:49:06 np0005466031 dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Oct  2 06:49:06 np0005466031 dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  2 06:49:06 np0005466031 systemd[1]: Finished dracut cmdline hook.
Oct  2 06:49:06 np0005466031 systemd[1]: Starting dracut pre-udev hook...
Oct  2 06:49:06 np0005466031 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  2 06:49:06 np0005466031 kernel: device-mapper: uevent: version 1.0.3
Oct  2 06:49:06 np0005466031 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  2 06:49:06 np0005466031 kernel: RPC: Registered named UNIX socket transport module.
Oct  2 06:49:06 np0005466031 kernel: RPC: Registered udp transport module.
Oct  2 06:49:06 np0005466031 kernel: RPC: Registered tcp transport module.
Oct  2 06:49:06 np0005466031 kernel: RPC: Registered tcp-with-tls transport module.
Oct  2 06:49:06 np0005466031 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  2 06:49:06 np0005466031 rpc.statd[445]: Version 2.5.4 starting
Oct  2 06:49:06 np0005466031 rpc.statd[445]: Initializing NSM state
Oct  2 06:49:06 np0005466031 rpc.idmapd[450]: Setting log level to 0
Oct  2 06:49:06 np0005466031 systemd[1]: Finished dracut pre-udev hook.
Oct  2 06:49:06 np0005466031 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  2 06:49:06 np0005466031 systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Oct  2 06:49:06 np0005466031 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  2 06:49:06 np0005466031 systemd[1]: Starting dracut pre-trigger hook...
Oct  2 06:49:06 np0005466031 systemd[1]: Finished dracut pre-trigger hook.
Oct  2 06:49:06 np0005466031 systemd[1]: Starting Coldplug All udev Devices...
Oct  2 06:49:07 np0005466031 systemd[1]: Created slice Slice /system/modprobe.
Oct  2 06:49:07 np0005466031 systemd[1]: Starting Load Kernel Module configfs...
Oct  2 06:49:07 np0005466031 systemd[1]: Finished Coldplug All udev Devices.
Oct  2 06:49:07 np0005466031 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  2 06:49:07 np0005466031 systemd[1]: Finished Load Kernel Module configfs.
Oct  2 06:49:07 np0005466031 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  2 06:49:07 np0005466031 systemd[1]: Reached target Network.
Oct  2 06:49:07 np0005466031 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  2 06:49:07 np0005466031 systemd[1]: Starting dracut initqueue hook...
Oct  2 06:49:07 np0005466031 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct  2 06:49:07 np0005466031 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  2 06:49:07 np0005466031 kernel: vda: vda1
Oct  2 06:49:07 np0005466031 kernel: scsi host0: ata_piix
Oct  2 06:49:07 np0005466031 kernel: scsi host1: ata_piix
Oct  2 06:49:07 np0005466031 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct  2 06:49:07 np0005466031 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct  2 06:49:07 np0005466031 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  2 06:49:07 np0005466031 systemd[1]: Reached target Initrd Root Device.
Oct  2 06:49:07 np0005466031 systemd[1]: Mounting Kernel Configuration File System...
Oct  2 06:49:07 np0005466031 systemd[1]: Mounted Kernel Configuration File System.
Oct  2 06:49:07 np0005466031 systemd[1]: Reached target System Initialization.
Oct  2 06:49:07 np0005466031 systemd[1]: Reached target Basic System.
Oct  2 06:49:07 np0005466031 kernel: ata1: found unknown device (class 0)
Oct  2 06:49:07 np0005466031 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  2 06:49:07 np0005466031 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  2 06:49:07 np0005466031 systemd-udevd[467]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 06:49:07 np0005466031 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  2 06:49:07 np0005466031 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  2 06:49:07 np0005466031 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  2 06:49:07 np0005466031 systemd[1]: Finished dracut initqueue hook.
Oct  2 06:49:07 np0005466031 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  2 06:49:07 np0005466031 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  2 06:49:07 np0005466031 systemd[1]: Reached target Remote File Systems.
Oct  2 06:49:07 np0005466031 systemd[1]: Starting dracut pre-mount hook...
Oct  2 06:49:07 np0005466031 systemd[1]: Finished dracut pre-mount hook.
Oct  2 06:49:07 np0005466031 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  2 06:49:07 np0005466031 systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Oct  2 06:49:07 np0005466031 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  2 06:49:07 np0005466031 systemd[1]: Mounting /sysroot...
Oct  2 06:49:08 np0005466031 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  2 06:49:08 np0005466031 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  2 06:49:08 np0005466031 kernel: XFS (vda1): Ending clean mount
Oct  2 06:49:08 np0005466031 systemd[1]: Mounted /sysroot.
Oct  2 06:49:08 np0005466031 systemd[1]: Reached target Initrd Root File System.
Oct  2 06:49:08 np0005466031 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  2 06:49:08 np0005466031 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  2 06:49:08 np0005466031 systemd[1]: Reached target Initrd File Systems.
Oct  2 06:49:08 np0005466031 systemd[1]: Reached target Initrd Default Target.
Oct  2 06:49:08 np0005466031 systemd[1]: Starting dracut mount hook...
Oct  2 06:49:08 np0005466031 systemd[1]: Finished dracut mount hook.
Oct  2 06:49:08 np0005466031 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  2 06:49:08 np0005466031 rpc.idmapd[450]: exiting on signal 15
Oct  2 06:49:08 np0005466031 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  2 06:49:08 np0005466031 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Network.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Timer Units.
Oct  2 06:49:08 np0005466031 systemd[1]: dbus.socket: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  2 06:49:08 np0005466031 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Initrd Default Target.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Basic System.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Initrd Root Device.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Initrd /usr File System.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Path Units.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Remote File Systems.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Slice Units.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Socket Units.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target System Initialization.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Local File Systems.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Swaps.
Oct  2 06:49:08 np0005466031 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped dracut mount hook.
Oct  2 06:49:08 np0005466031 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped dracut pre-mount hook.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  2 06:49:08 np0005466031 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  2 06:49:08 np0005466031 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped dracut initqueue hook.
Oct  2 06:49:08 np0005466031 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped Apply Kernel Variables.
Oct  2 06:49:08 np0005466031 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  2 06:49:08 np0005466031 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped Coldplug All udev Devices.
Oct  2 06:49:08 np0005466031 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped dracut pre-trigger hook.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  2 06:49:08 np0005466031 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped Setup Virtual Console.
Oct  2 06:49:08 np0005466031 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  2 06:49:08 np0005466031 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  2 06:49:08 np0005466031 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Closed udev Control Socket.
Oct  2 06:49:08 np0005466031 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Closed udev Kernel Socket.
Oct  2 06:49:08 np0005466031 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped dracut pre-udev hook.
Oct  2 06:49:08 np0005466031 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped dracut cmdline hook.
Oct  2 06:49:08 np0005466031 systemd[1]: Starting Cleanup udev Database...
Oct  2 06:49:08 np0005466031 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  2 06:49:08 np0005466031 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  2 06:49:08 np0005466031 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Stopped Create System Users.
Oct  2 06:49:08 np0005466031 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  2 06:49:08 np0005466031 systemd[1]: Finished Cleanup udev Database.
Oct  2 06:49:08 np0005466031 systemd[1]: Reached target Switch Root.
Oct  2 06:49:08 np0005466031 systemd[1]: Starting Switch Root...
Oct  2 06:49:08 np0005466031 systemd[1]: Switching root.
Oct  2 06:49:08 np0005466031 systemd-journald[310]: Journal stopped
Oct  2 06:49:09 np0005466031 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  2 06:49:09 np0005466031 kernel: audit: type=1404 audit(1759402148.679:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  2 06:49:09 np0005466031 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 06:49:09 np0005466031 kernel: SELinux:  policy capability open_perms=1
Oct  2 06:49:09 np0005466031 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 06:49:09 np0005466031 kernel: SELinux:  policy capability always_check_network=0
Oct  2 06:49:09 np0005466031 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 06:49:09 np0005466031 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 06:49:09 np0005466031 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 06:49:09 np0005466031 kernel: audit: type=1403 audit(1759402148.837:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  2 06:49:09 np0005466031 systemd: Successfully loaded SELinux policy in 162.046ms.
Oct  2 06:49:09 np0005466031 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.275ms.
Oct  2 06:49:09 np0005466031 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  2 06:49:09 np0005466031 systemd: Detected virtualization kvm.
Oct  2 06:49:09 np0005466031 systemd: Detected architecture x86-64.
Oct  2 06:49:09 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 06:49:09 np0005466031 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  2 06:49:09 np0005466031 systemd: Stopped Switch Root.
Oct  2 06:49:09 np0005466031 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  2 06:49:09 np0005466031 systemd: Created slice Slice /system/getty.
Oct  2 06:49:09 np0005466031 systemd: Created slice Slice /system/serial-getty.
Oct  2 06:49:09 np0005466031 systemd: Created slice Slice /system/sshd-keygen.
Oct  2 06:49:09 np0005466031 systemd: Created slice User and Session Slice.
Oct  2 06:49:09 np0005466031 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  2 06:49:09 np0005466031 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  2 06:49:09 np0005466031 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  2 06:49:09 np0005466031 systemd: Reached target Local Encrypted Volumes.
Oct  2 06:49:09 np0005466031 systemd: Stopped target Switch Root.
Oct  2 06:49:09 np0005466031 systemd: Stopped target Initrd File Systems.
Oct  2 06:49:09 np0005466031 systemd: Stopped target Initrd Root File System.
Oct  2 06:49:09 np0005466031 systemd: Reached target Local Integrity Protected Volumes.
Oct  2 06:49:09 np0005466031 systemd: Reached target Path Units.
Oct  2 06:49:09 np0005466031 systemd: Reached target rpc_pipefs.target.
Oct  2 06:49:09 np0005466031 systemd: Reached target Slice Units.
Oct  2 06:49:09 np0005466031 systemd: Reached target Swaps.
Oct  2 06:49:09 np0005466031 systemd: Reached target Local Verity Protected Volumes.
Oct  2 06:49:09 np0005466031 systemd: Listening on RPCbind Server Activation Socket.
Oct  2 06:49:09 np0005466031 systemd: Reached target RPC Port Mapper.
Oct  2 06:49:09 np0005466031 systemd: Listening on Process Core Dump Socket.
Oct  2 06:49:09 np0005466031 systemd: Listening on initctl Compatibility Named Pipe.
Oct  2 06:49:09 np0005466031 systemd: Listening on udev Control Socket.
Oct  2 06:49:09 np0005466031 systemd: Listening on udev Kernel Socket.
Oct  2 06:49:09 np0005466031 systemd: Mounting Huge Pages File System...
Oct  2 06:49:09 np0005466031 systemd: Mounting POSIX Message Queue File System...
Oct  2 06:49:09 np0005466031 systemd: Mounting Kernel Debug File System...
Oct  2 06:49:09 np0005466031 systemd: Mounting Kernel Trace File System...
Oct  2 06:49:09 np0005466031 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  2 06:49:09 np0005466031 systemd: Starting Create List of Static Device Nodes...
Oct  2 06:49:09 np0005466031 systemd: Starting Load Kernel Module configfs...
Oct  2 06:49:09 np0005466031 systemd: Starting Load Kernel Module drm...
Oct  2 06:49:09 np0005466031 systemd: Starting Load Kernel Module efi_pstore...
Oct  2 06:49:09 np0005466031 systemd: Starting Load Kernel Module fuse...
Oct  2 06:49:09 np0005466031 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  2 06:49:09 np0005466031 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  2 06:49:09 np0005466031 systemd: Stopped File System Check on Root Device.
Oct  2 06:49:09 np0005466031 systemd: Stopped Journal Service.
Oct  2 06:49:09 np0005466031 systemd: Starting Journal Service...
Oct  2 06:49:09 np0005466031 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  2 06:49:09 np0005466031 systemd: Starting Generate network units from Kernel command line...
Oct  2 06:49:09 np0005466031 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 06:49:09 np0005466031 systemd: Starting Remount Root and Kernel File Systems...
Oct  2 06:49:09 np0005466031 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  2 06:49:09 np0005466031 systemd: Starting Apply Kernel Variables...
Oct  2 06:49:09 np0005466031 systemd: Starting Coldplug All udev Devices...
Oct  2 06:49:09 np0005466031 kernel: fuse: init (API version 7.37)
Oct  2 06:49:09 np0005466031 systemd: Mounted Huge Pages File System.
Oct  2 06:49:09 np0005466031 systemd-journald[680]: Journal started
Oct  2 06:49:09 np0005466031 systemd-journald[680]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  2 06:49:09 np0005466031 systemd[1]: Queued start job for default target Multi-User System.
Oct  2 06:49:09 np0005466031 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  2 06:49:09 np0005466031 systemd: Mounted POSIX Message Queue File System.
Oct  2 06:49:09 np0005466031 systemd: Started Journal Service.
Oct  2 06:49:09 np0005466031 systemd[1]: Mounted Kernel Debug File System.
Oct  2 06:49:09 np0005466031 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  2 06:49:09 np0005466031 systemd[1]: Mounted Kernel Trace File System.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Create List of Static Device Nodes.
Oct  2 06:49:09 np0005466031 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Load Kernel Module configfs.
Oct  2 06:49:09 np0005466031 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  2 06:49:09 np0005466031 kernel: ACPI: bus type drm_connector registered
Oct  2 06:49:09 np0005466031 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Load Kernel Module fuse.
Oct  2 06:49:09 np0005466031 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Load Kernel Module drm.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Generate network units from Kernel command line.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Apply Kernel Variables.
Oct  2 06:49:09 np0005466031 systemd[1]: Mounting FUSE Control File System...
Oct  2 06:49:09 np0005466031 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  2 06:49:09 np0005466031 systemd[1]: Starting Rebuild Hardware Database...
Oct  2 06:49:09 np0005466031 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  2 06:49:09 np0005466031 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  2 06:49:09 np0005466031 systemd[1]: Starting Load/Save OS Random Seed...
Oct  2 06:49:09 np0005466031 systemd[1]: Starting Create System Users...
Oct  2 06:49:09 np0005466031 systemd[1]: Mounted FUSE Control File System.
Oct  2 06:49:09 np0005466031 systemd-journald[680]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  2 06:49:09 np0005466031 systemd-journald[680]: Received client request to flush runtime journal.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Load/Save OS Random Seed.
Oct  2 06:49:09 np0005466031 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Create System Users.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Coldplug All udev Devices.
Oct  2 06:49:09 np0005466031 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  2 06:49:09 np0005466031 systemd[1]: Reached target Preparation for Local File Systems.
Oct  2 06:49:09 np0005466031 systemd[1]: Reached target Local File Systems.
Oct  2 06:49:09 np0005466031 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct  2 06:49:09 np0005466031 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  2 06:49:09 np0005466031 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  2 06:49:09 np0005466031 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  2 06:49:09 np0005466031 systemd[1]: Starting Automatic Boot Loader Update...
Oct  2 06:49:09 np0005466031 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  2 06:49:09 np0005466031 systemd[1]: Starting Create Volatile Files and Directories...
Oct  2 06:49:09 np0005466031 bootctl[697]: Couldn't find EFI system partition, skipping.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Automatic Boot Loader Update.
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Create Volatile Files and Directories.
Oct  2 06:49:09 np0005466031 systemd[1]: Starting Security Auditing Service...
Oct  2 06:49:09 np0005466031 systemd[1]: Starting RPC Bind...
Oct  2 06:49:09 np0005466031 systemd[1]: Starting Rebuild Journal Catalog...
Oct  2 06:49:09 np0005466031 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  2 06:49:09 np0005466031 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Rebuild Journal Catalog.
Oct  2 06:49:09 np0005466031 systemd[1]: Started RPC Bind.
Oct  2 06:49:09 np0005466031 augenrules[708]: /sbin/augenrules: No change
Oct  2 06:49:09 np0005466031 augenrules[724]: No rules
Oct  2 06:49:09 np0005466031 augenrules[724]: enabled 1
Oct  2 06:49:09 np0005466031 augenrules[724]: failure 1
Oct  2 06:49:09 np0005466031 augenrules[724]: pid 703
Oct  2 06:49:09 np0005466031 augenrules[724]: rate_limit 0
Oct  2 06:49:09 np0005466031 augenrules[724]: backlog_limit 8192
Oct  2 06:49:09 np0005466031 augenrules[724]: lost 0
Oct  2 06:49:09 np0005466031 augenrules[724]: backlog 4
Oct  2 06:49:09 np0005466031 augenrules[724]: backlog_wait_time 60000
Oct  2 06:49:09 np0005466031 augenrules[724]: backlog_wait_time_actual 0
Oct  2 06:49:09 np0005466031 augenrules[724]: enabled 1
Oct  2 06:49:09 np0005466031 augenrules[724]: failure 1
Oct  2 06:49:09 np0005466031 augenrules[724]: pid 703
Oct  2 06:49:09 np0005466031 augenrules[724]: rate_limit 0
Oct  2 06:49:09 np0005466031 augenrules[724]: backlog_limit 8192
Oct  2 06:49:09 np0005466031 augenrules[724]: lost 0
Oct  2 06:49:09 np0005466031 augenrules[724]: backlog 0
Oct  2 06:49:09 np0005466031 augenrules[724]: backlog_wait_time 60000
Oct  2 06:49:09 np0005466031 augenrules[724]: backlog_wait_time_actual 0
Oct  2 06:49:09 np0005466031 augenrules[724]: enabled 1
Oct  2 06:49:09 np0005466031 augenrules[724]: failure 1
Oct  2 06:49:09 np0005466031 augenrules[724]: pid 703
Oct  2 06:49:09 np0005466031 augenrules[724]: rate_limit 0
Oct  2 06:49:09 np0005466031 augenrules[724]: backlog_limit 8192
Oct  2 06:49:09 np0005466031 augenrules[724]: lost 0
Oct  2 06:49:09 np0005466031 augenrules[724]: backlog 3
Oct  2 06:49:09 np0005466031 augenrules[724]: backlog_wait_time 60000
Oct  2 06:49:09 np0005466031 augenrules[724]: backlog_wait_time_actual 0
Oct  2 06:49:09 np0005466031 systemd[1]: Started Security Auditing Service.
Oct  2 06:49:09 np0005466031 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  2 06:49:09 np0005466031 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  2 06:49:10 np0005466031 systemd[1]: Finished Rebuild Hardware Database.
Oct  2 06:49:10 np0005466031 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  2 06:49:10 np0005466031 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct  2 06:49:10 np0005466031 systemd[1]: Starting Update is Completed...
Oct  2 06:49:10 np0005466031 systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Oct  2 06:49:10 np0005466031 systemd[1]: Finished Update is Completed.
Oct  2 06:49:10 np0005466031 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  2 06:49:10 np0005466031 systemd[1]: Reached target System Initialization.
Oct  2 06:49:10 np0005466031 systemd[1]: Started dnf makecache --timer.
Oct  2 06:49:10 np0005466031 systemd[1]: Started Daily rotation of log files.
Oct  2 06:49:10 np0005466031 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  2 06:49:10 np0005466031 systemd[1]: Reached target Timer Units.
Oct  2 06:49:10 np0005466031 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  2 06:49:10 np0005466031 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  2 06:49:10 np0005466031 systemd[1]: Reached target Socket Units.
Oct  2 06:49:10 np0005466031 systemd[1]: Starting D-Bus System Message Bus...
Oct  2 06:49:10 np0005466031 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 06:49:10 np0005466031 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  2 06:49:10 np0005466031 systemd[1]: Starting Load Kernel Module configfs...
Oct  2 06:49:10 np0005466031 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  2 06:49:10 np0005466031 systemd[1]: Finished Load Kernel Module configfs.
Oct  2 06:49:10 np0005466031 systemd-udevd[742]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 06:49:10 np0005466031 systemd[1]: Started D-Bus System Message Bus.
Oct  2 06:49:10 np0005466031 systemd[1]: Reached target Basic System.
Oct  2 06:49:10 np0005466031 dbus-broker-lau[744]: Ready
Oct  2 06:49:10 np0005466031 systemd[1]: Starting NTP client/server...
Oct  2 06:49:10 np0005466031 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  2 06:49:10 np0005466031 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  2 06:49:10 np0005466031 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  2 06:49:10 np0005466031 systemd[1]: Starting IPv4 firewall with iptables...
Oct  2 06:49:10 np0005466031 systemd[1]: Started irqbalance daemon.
Oct  2 06:49:10 np0005466031 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  2 06:49:10 np0005466031 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 06:49:10 np0005466031 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 06:49:10 np0005466031 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 06:49:10 np0005466031 systemd[1]: Reached target sshd-keygen.target.
Oct  2 06:49:10 np0005466031 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  2 06:49:10 np0005466031 systemd[1]: Reached target User and Group Name Lookups.
Oct  2 06:49:10 np0005466031 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct  2 06:49:10 np0005466031 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  2 06:49:10 np0005466031 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  2 06:49:10 np0005466031 systemd[1]: Starting User Login Management...
Oct  2 06:49:10 np0005466031 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  2 06:49:10 np0005466031 chronyd[799]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  2 06:49:10 np0005466031 chronyd[799]: Loaded 0 symmetric keys
Oct  2 06:49:10 np0005466031 chronyd[799]: Using right/UTC timezone to obtain leap second data
Oct  2 06:49:10 np0005466031 chronyd[799]: Loaded seccomp filter (level 2)
Oct  2 06:49:10 np0005466031 systemd[1]: Started NTP client/server.
Oct  2 06:49:10 np0005466031 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  2 06:49:10 np0005466031 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  2 06:49:10 np0005466031 systemd-logind[786]: New seat seat0.
Oct  2 06:49:10 np0005466031 systemd[1]: Started User Login Management.
Oct  2 06:49:10 np0005466031 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct  2 06:49:10 np0005466031 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct  2 06:49:10 np0005466031 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  2 06:49:10 np0005466031 kernel: Console: switching to colour dummy device 80x25
Oct  2 06:49:10 np0005466031 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  2 06:49:10 np0005466031 kernel: [drm] features: -context_init
Oct  2 06:49:10 np0005466031 kernel: [drm] number of scanouts: 1
Oct  2 06:49:10 np0005466031 kernel: [drm] number of cap sets: 0
Oct  2 06:49:10 np0005466031 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  2 06:49:10 np0005466031 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct  2 06:49:10 np0005466031 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  2 06:49:10 np0005466031 kernel: Console: switching to colour frame buffer device 128x48
Oct  2 06:49:10 np0005466031 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  2 06:49:10 np0005466031 kernel: kvm_amd: TSC scaling supported
Oct  2 06:49:10 np0005466031 kernel: kvm_amd: Nested Virtualization enabled
Oct  2 06:49:10 np0005466031 kernel: kvm_amd: Nested Paging enabled
Oct  2 06:49:10 np0005466031 kernel: kvm_amd: LBR virtualization supported
Oct  2 06:49:10 np0005466031 iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Oct  2 06:49:10 np0005466031 systemd[1]: Finished IPv4 firewall with iptables.
Oct  2 06:49:11 np0005466031 cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 02 Oct 2025 10:49:11 +0000. Up 6.75 seconds.
Oct  2 06:49:11 np0005466031 systemd[1]: run-cloud\x2dinit-tmp-tmpfclmo7_v.mount: Deactivated successfully.
Oct  2 06:49:11 np0005466031 systemd[1]: Starting Hostname Service...
Oct  2 06:49:11 np0005466031 systemd[1]: Started Hostname Service.
Oct  2 06:49:11 np0005466031 systemd-hostnamed[854]: Hostname set to <np0005466031.novalocal> (static)
Oct  2 06:49:11 np0005466031 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  2 06:49:11 np0005466031 systemd[1]: Reached target Preparation for Network.
Oct  2 06:49:11 np0005466031 systemd[1]: Starting Network Manager...
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6526] NetworkManager (version 1.54.1-1.el9) is starting... (boot:8f27f8ea-5ba0-4704-bd0a-df5680956f2e)
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6530] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6680] manager[0x558631415080]: monitoring kernel firmware directory '/lib/firmware'.
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6728] hostname: hostname: using hostnamed
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6729] hostname: static hostname changed from (none) to "np0005466031.novalocal"
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6732] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6862] manager[0x558631415080]: rfkill: Wi-Fi hardware radio set enabled
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6863] manager[0x558631415080]: rfkill: WWAN hardware radio set enabled
Oct  2 06:49:11 np0005466031 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6952] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6952] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6953] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6953] manager: Networking is enabled by state file
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6954] settings: Loaded settings plugin: keyfile (internal)
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.6985] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7006] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7026] dhcp: init: Using DHCP client 'internal'
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7028] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7037] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7047] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7052] device (lo): Activation: starting connection 'lo' (2b055021-40f0-49d2-a174-414e6fa87e08)
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7059] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7061] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7090] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7093] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7095] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7097] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7098] device (eth0): carrier: link connected
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7100] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7106] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  2 06:49:11 np0005466031 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7112] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7114] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7115] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7117] manager: NetworkManager state is now CONNECTING
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7118] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7126] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7129] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 06:49:11 np0005466031 systemd[1]: Started Network Manager.
Oct  2 06:49:11 np0005466031 systemd[1]: Reached target Network.
Oct  2 06:49:11 np0005466031 systemd[1]: Starting Network Manager Wait Online...
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7174] dhcp4 (eth0): state changed new lease, address=38.129.56.167
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7182] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7197] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 06:49:11 np0005466031 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  2 06:49:11 np0005466031 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7276] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7278] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7279] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7285] device (lo): Activation: successful, device activated.
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7290] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7293] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7295] device (eth0): Activation: successful, device activated.
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7301] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  2 06:49:11 np0005466031 NetworkManager[858]: <info>  [1759402151.7302] manager: startup complete
Oct  2 06:49:11 np0005466031 systemd[1]: Finished Network Manager Wait Online.
Oct  2 06:49:11 np0005466031 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  2 06:49:11 np0005466031 systemd[1]: Starting Cloud-init: Network Stage...
Oct  2 06:49:11 np0005466031 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  2 06:49:11 np0005466031 systemd[1]: Reached target NFS client services.
Oct  2 06:49:11 np0005466031 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  2 06:49:11 np0005466031 systemd[1]: Reached target Remote File Systems.
Oct  2 06:49:11 np0005466031 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 06:49:12 np0005466031 cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 02 Oct 2025 10:49:12 +0000. Up 7.67 seconds.
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: |  eth0  | True |        38.129.56.167         | 255.255.255.0 | global | fa:16:3e:b2:9a:c3 |
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:feb2:9ac3/64 |       .       |  link  | fa:16:3e:b2:9a:c3 |
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct  2 06:49:12 np0005466031 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  2 06:49:13 np0005466031 cloud-init[922]: Generating public/private rsa key pair.
Oct  2 06:49:13 np0005466031 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct  2 06:49:13 np0005466031 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct  2 06:49:13 np0005466031 cloud-init[922]: The key fingerprint is:
Oct  2 06:49:13 np0005466031 cloud-init[922]: SHA256:ZCapGSOUzYDi3M1KS7kwEWVGcNyOCsgGrz6S6clEE7M root@np0005466031.novalocal
Oct  2 06:49:13 np0005466031 cloud-init[922]: The key's randomart image is:
Oct  2 06:49:13 np0005466031 cloud-init[922]: +---[RSA 3072]----+
Oct  2 06:49:13 np0005466031 cloud-init[922]: | +O@.            |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |+o=.o. .         |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |B++ O o +        |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |oB=B O =         |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |oE* *   S        |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |o..+             |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |.+               |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |*o.              |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |o+.              |
Oct  2 06:49:13 np0005466031 cloud-init[922]: +----[SHA256]-----+
Oct  2 06:49:13 np0005466031 cloud-init[922]: Generating public/private ecdsa key pair.
Oct  2 06:49:13 np0005466031 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct  2 06:49:13 np0005466031 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct  2 06:49:13 np0005466031 cloud-init[922]: The key fingerprint is:
Oct  2 06:49:13 np0005466031 cloud-init[922]: SHA256:C3Grk/P1KW5Da7RumArs5nq06wJUkx6299Z27kX6p2c root@np0005466031.novalocal
Oct  2 06:49:13 np0005466031 cloud-init[922]: The key's randomart image is:
Oct  2 06:49:13 np0005466031 cloud-init[922]: +---[ECDSA 256]---+
Oct  2 06:49:13 np0005466031 cloud-init[922]: |    .            |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |   *             |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |  + + . .        |
Oct  2 06:49:13 np0005466031 cloud-init[922]: | . o . o .       |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |.   . o S    .   |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |.  ..  * +o.o    |
Oct  2 06:49:13 np0005466031 cloud-init[922]: | . .o.* o=++ .   |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |  ..+. +o.Ooo. E |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |  .B=....*++oo=  |
Oct  2 06:49:13 np0005466031 cloud-init[922]: +----[SHA256]-----+
Oct  2 06:49:13 np0005466031 cloud-init[922]: Generating public/private ed25519 key pair.
Oct  2 06:49:13 np0005466031 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct  2 06:49:13 np0005466031 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct  2 06:49:13 np0005466031 cloud-init[922]: The key fingerprint is:
Oct  2 06:49:13 np0005466031 cloud-init[922]: SHA256:+Hhe40rlLjBOz22B50IDOiGr7XZZZa47od8Y9ngXzLM root@np0005466031.novalocal
Oct  2 06:49:13 np0005466031 cloud-init[922]: The key's randomart image is:
Oct  2 06:49:13 np0005466031 cloud-init[922]: +--[ED25519 256]--+
Oct  2 06:49:13 np0005466031 cloud-init[922]: |                 |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |                 |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |                 |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |  . . ..o        |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |   o o.=S+.      |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |  . o *o=oB      |
Oct  2 06:49:13 np0005466031 cloud-init[922]: | o   B+O+=+=     |
Oct  2 06:49:13 np0005466031 cloud-init[922]: |. o +.+X*+E.     |
Oct  2 06:49:13 np0005466031 cloud-init[922]: | o.. .=+=*o      |
Oct  2 06:49:13 np0005466031 cloud-init[922]: +----[SHA256]-----+
Oct  2 06:49:13 np0005466031 systemd[1]: Finished Cloud-init: Network Stage.
Oct  2 06:49:13 np0005466031 systemd[1]: Reached target Cloud-config availability.
Oct  2 06:49:13 np0005466031 systemd[1]: Reached target Network is Online.
Oct  2 06:49:13 np0005466031 systemd[1]: Starting Cloud-init: Config Stage...
Oct  2 06:49:13 np0005466031 systemd[1]: Starting Notify NFS peers of a restart...
Oct  2 06:49:13 np0005466031 systemd[1]: Starting System Logging Service...
Oct  2 06:49:13 np0005466031 systemd[1]: Starting OpenSSH server daemon...
Oct  2 06:49:13 np0005466031 sm-notify[1005]: Version 2.5.4 starting
Oct  2 06:49:13 np0005466031 systemd[1]: Starting Permit User Sessions...
Oct  2 06:49:13 np0005466031 systemd[1]: Started Notify NFS peers of a restart.
Oct  2 06:49:13 np0005466031 systemd[1]: Finished Permit User Sessions.
Oct  2 06:49:13 np0005466031 systemd[1]: Started Command Scheduler.
Oct  2 06:49:13 np0005466031 systemd[1]: Started Getty on tty1.
Oct  2 06:49:13 np0005466031 systemd[1]: Started Serial Getty on ttyS0.
Oct  2 06:49:13 np0005466031 systemd[1]: Reached target Login Prompts.
Oct  2 06:49:13 np0005466031 systemd[1]: Started OpenSSH server daemon.
Oct  2 06:49:13 np0005466031 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Oct  2 06:49:13 np0005466031 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct  2 06:49:13 np0005466031 systemd[1]: Started System Logging Service.
Oct  2 06:49:13 np0005466031 systemd[1]: Reached target Multi-User System.
Oct  2 06:49:13 np0005466031 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  2 06:49:13 np0005466031 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  2 06:49:13 np0005466031 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  2 06:49:13 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 06:49:13 np0005466031 cloud-init[1018]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 02 Oct 2025 10:49:13 +0000. Up 9.43 seconds.
Oct  2 06:49:13 np0005466031 systemd[1]: Finished Cloud-init: Config Stage.
Oct  2 06:49:13 np0005466031 systemd[1]: Starting Cloud-init: Final Stage...
Oct  2 06:49:14 np0005466031 cloud-init[1023]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 02 Oct 2025 10:49:14 +0000. Up 9.83 seconds.
Oct  2 06:49:14 np0005466031 cloud-init[1030]: #############################################################
Oct  2 06:49:14 np0005466031 cloud-init[1032]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct  2 06:49:14 np0005466031 cloud-init[1034]: 256 SHA256:C3Grk/P1KW5Da7RumArs5nq06wJUkx6299Z27kX6p2c root@np0005466031.novalocal (ECDSA)
Oct  2 06:49:14 np0005466031 cloud-init[1037]: 256 SHA256:+Hhe40rlLjBOz22B50IDOiGr7XZZZa47od8Y9ngXzLM root@np0005466031.novalocal (ED25519)
Oct  2 06:49:14 np0005466031 cloud-init[1040]: 3072 SHA256:ZCapGSOUzYDi3M1KS7kwEWVGcNyOCsgGrz6S6clEE7M root@np0005466031.novalocal (RSA)
Oct  2 06:49:14 np0005466031 cloud-init[1041]: -----END SSH HOST KEY FINGERPRINTS-----
Oct  2 06:49:14 np0005466031 cloud-init[1042]: #############################################################
Oct  2 06:49:14 np0005466031 cloud-init[1023]: Cloud-init v. 24.4-7.el9 finished at Thu, 02 Oct 2025 10:49:14 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.00 seconds
Oct  2 06:49:14 np0005466031 systemd[1]: Finished Cloud-init: Final Stage.
Oct  2 06:49:14 np0005466031 systemd[1]: Reached target Cloud-init target.
Oct  2 06:49:14 np0005466031 systemd[1]: Startup finished in 1.600s (kernel) + 2.718s (initrd) + 5.768s (userspace) = 10.088s.
Oct  2 06:49:16 np0005466031 chronyd[799]: Selected source 206.108.0.133 (2.centos.pool.ntp.org)
Oct  2 06:49:16 np0005466031 chronyd[799]: System clock TAI offset set to 37 seconds
Oct  2 06:49:20 np0005466031 irqbalance[781]: Cannot change IRQ 25 affinity: Operation not permitted
Oct  2 06:49:20 np0005466031 irqbalance[781]: IRQ 25 affinity is now unmanaged
Oct  2 06:49:20 np0005466031 irqbalance[781]: Cannot change IRQ 31 affinity: Operation not permitted
Oct  2 06:49:20 np0005466031 irqbalance[781]: IRQ 31 affinity is now unmanaged
Oct  2 06:49:20 np0005466031 irqbalance[781]: Cannot change IRQ 28 affinity: Operation not permitted
Oct  2 06:49:20 np0005466031 irqbalance[781]: IRQ 28 affinity is now unmanaged
Oct  2 06:49:20 np0005466031 irqbalance[781]: Cannot change IRQ 32 affinity: Operation not permitted
Oct  2 06:49:20 np0005466031 irqbalance[781]: IRQ 32 affinity is now unmanaged
Oct  2 06:49:20 np0005466031 irqbalance[781]: Cannot change IRQ 30 affinity: Operation not permitted
Oct  2 06:49:20 np0005466031 irqbalance[781]: IRQ 30 affinity is now unmanaged
Oct  2 06:49:20 np0005466031 irqbalance[781]: Cannot change IRQ 29 affinity: Operation not permitted
Oct  2 06:49:20 np0005466031 irqbalance[781]: IRQ 29 affinity is now unmanaged
Oct  2 06:49:21 np0005466031 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 06:49:41 np0005466031 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 07:00:16 np0005466031 systemd[1]: Starting dnf makecache...
Oct  2 07:00:17 np0005466031 dnf[1061]: Failed determining last makecache time.
Oct  2 07:00:17 np0005466031 dnf[1061]: CentOS Stream 9 - BaseOS                         24 kB/s | 6.7 kB     00:00
Oct  2 07:00:17 np0005466031 dnf[1061]: CentOS Stream 9 - AppStream                      69 kB/s | 6.8 kB     00:00
Oct  2 07:00:18 np0005466031 dnf[1061]: CentOS Stream 9 - CRB                            65 kB/s | 6.6 kB     00:00
Oct  2 07:00:18 np0005466031 dnf[1061]: CentOS Stream 9 - Extras packages                66 kB/s | 8.0 kB     00:00
Oct  2 07:00:18 np0005466031 dnf[1061]: Metadata cache created.
Oct  2 07:00:18 np0005466031 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  2 07:00:18 np0005466031 systemd[1]: Finished dnf makecache.
Oct  2 07:02:13 np0005466031 systemd[1]: Created slice User Slice of UID 1000.
Oct  2 07:02:13 np0005466031 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  2 07:02:13 np0005466031 systemd-logind[786]: New session 1 of user zuul.
Oct  2 07:02:13 np0005466031 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  2 07:02:13 np0005466031 systemd[1]: Starting User Manager for UID 1000...
Oct  2 07:02:13 np0005466031 systemd[1090]: Queued start job for default target Main User Target.
Oct  2 07:02:13 np0005466031 systemd[1090]: Created slice User Application Slice.
Oct  2 07:02:13 np0005466031 systemd[1090]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 07:02:13 np0005466031 systemd[1090]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 07:02:13 np0005466031 systemd[1090]: Reached target Paths.
Oct  2 07:02:13 np0005466031 systemd[1090]: Reached target Timers.
Oct  2 07:02:13 np0005466031 systemd[1090]: Starting D-Bus User Message Bus Socket...
Oct  2 07:02:13 np0005466031 systemd[1090]: Starting Create User's Volatile Files and Directories...
Oct  2 07:02:13 np0005466031 systemd[1090]: Listening on D-Bus User Message Bus Socket.
Oct  2 07:02:13 np0005466031 systemd[1090]: Reached target Sockets.
Oct  2 07:02:13 np0005466031 systemd[1090]: Finished Create User's Volatile Files and Directories.
Oct  2 07:02:13 np0005466031 systemd[1090]: Reached target Basic System.
Oct  2 07:02:13 np0005466031 systemd[1090]: Reached target Main User Target.
Oct  2 07:02:13 np0005466031 systemd[1090]: Startup finished in 126ms.
Oct  2 07:02:13 np0005466031 systemd[1]: Started User Manager for UID 1000.
Oct  2 07:02:13 np0005466031 systemd[1]: Started Session 1 of User zuul.
Oct  2 07:02:13 np0005466031 python3[1175]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:02:18 np0005466031 python3[1203]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:02:25 np0005466031 python3[1261]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:02:26 np0005466031 python3[1301]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct  2 07:02:28 np0005466031 python3[1327]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDdHOgImyIPDgNWnaMxITEPAN7NVtxzu14ISD59Z0krS9o0Yef/lJRBJcwAtbdZl6thmmrmd+i6nLhYv58i91I9BglmtPCtwZOV73PkKRHZ//oaGwnMih4wB70pyMygFWOrMfCeHRbPChFn2mwctskvcL515U/KpRwUH6WlesAnHltNt9DFUSKyQADMR0GdPnnDw8gLOq9DBkiwlfGxOV1vxXnsJgtCzmcYqLfOMUyT5CJybnG3mpE2Rfc4aNSBi+3/P2Age5mBEwGZMXQU8BTcxVemx04TNqPzeSvzH96Xtnm6b/EZ1nBpVZVpqJLubsNcY65zoE9DNXQJGgx09voZuQytvk2ksubtwSyX2khxwkaAPUuGWesuCs/pP/g0634ox7wm21U4hFzvMni4TFc4otDkcIsKet/KbBKdvGkk7IVb08Z3k8S96poyWuD8sK4zHLKur4EKbCU4aodgLm2RXTqJN6pLISaY3GAnRN94PvuTmeqA+tMo1IfiAgcif0k= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:28 np0005466031 python3[1351]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:29 np0005466031 python3[1450]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:29 np0005466031 python3[1521]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759402949.0918658-254-131898247446817/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=e1f611fafbac4ef993faa9123ba23e77_id_rsa follow=False checksum=923ba278c698bf654f2c8fd44aaead32908a4e27 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:30 np0005466031 python3[1644]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:30 np0005466031 python3[1715]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759402950.157445-309-7486001850232/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=e1f611fafbac4ef993faa9123ba23e77_id_rsa.pub follow=False checksum=9747c9704720df2c89f1c3bf3782f9b9dd59b88f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:32 np0005466031 python3[1763]: ansible-ping Invoked with data=pong
Oct  2 07:02:33 np0005466031 python3[1787]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:02:36 np0005466031 python3[1845]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct  2 07:02:37 np0005466031 python3[1877]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:37 np0005466031 python3[1901]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:38 np0005466031 python3[1925]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:38 np0005466031 python3[1949]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:38 np0005466031 python3[1973]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:39 np0005466031 python3[1997]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:40 np0005466031 python3[2023]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:41 np0005466031 python3[2101]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:41 np0005466031 python3[2174]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402960.9423127-34-222692964899661/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:42 np0005466031 python3[2222]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:42 np0005466031 python3[2246]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:43 np0005466031 python3[2270]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:43 np0005466031 python3[2294]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:43 np0005466031 python3[2318]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:44 np0005466031 python3[2342]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:44 np0005466031 python3[2366]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:44 np0005466031 python3[2390]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:44 np0005466031 python3[2414]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:45 np0005466031 python3[2438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:45 np0005466031 python3[2462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:45 np0005466031 python3[2486]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:46 np0005466031 python3[2510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:46 np0005466031 python3[2534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:46 np0005466031 python3[2558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:46 np0005466031 python3[2582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:47 np0005466031 python3[2606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:47 np0005466031 python3[2630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:47 np0005466031 python3[2654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:47 np0005466031 python3[2678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:48 np0005466031 python3[2702]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:48 np0005466031 python3[2726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:48 np0005466031 python3[2750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:48 np0005466031 python3[2774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:49 np0005466031 python3[2798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:49 np0005466031 python3[2822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:52 np0005466031 python3[2848]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  2 07:02:52 np0005466031 systemd[1]: Starting Time & Date Service...
Oct  2 07:02:52 np0005466031 systemd[1]: Started Time & Date Service.
Oct  2 07:02:52 np0005466031 systemd-timedated[2850]: Changed time zone to 'UTC' (UTC).
Oct  2 07:02:52 np0005466031 python3[2879]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:53 np0005466031 python3[2955]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:53 np0005466031 python3[3026]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759402973.1926942-254-218399420353894/source _original_basename=tmppuc__doc follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:54 np0005466031 python3[3126]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:54 np0005466031 python3[3197]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759402974.0929353-304-158673680709978/source _original_basename=tmp1iwdqhq5 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:55 np0005466031 python3[3299]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:55 np0005466031 python3[3372]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759402975.2178054-384-100317752565309/source _original_basename=tmpl8v_kxnt follow=False checksum=df49f9d92cf62290b790f8222a13dabe1c7a4f0a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:56 np0005466031 python3[3420]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:02:56 np0005466031 python3[3446]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:02:57 np0005466031 python3[3526]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:57 np0005466031 python3[3599]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402977.0561337-454-43697981994771/source _original_basename=tmpz7zqc9in follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:58 np0005466031 python3[3650]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-634d-add4-00000000001f-1-compute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:02:59 np0005466031 python3[3678]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-634d-add4-000000000020-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct  2 07:03:00 np0005466031 python3[3706]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:03:22 np0005466031 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 07:03:29 np0005466031 python3[3734]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:04:29 np0005466031 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  2 07:04:29 np0005466031 systemd-logind[786]: Session 1 logged out. Waiting for processes to exit.
Oct  2 07:04:29 np0005466031 systemd[1090]: Starting Mark boot as successful...
Oct  2 07:04:29 np0005466031 systemd[1090]: Finished Mark boot as successful.
Oct  2 07:04:29 np0005466031 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  2 07:04:29 np0005466031 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  2 07:04:29 np0005466031 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  2 07:05:00 np0005466031 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  2 07:05:00 np0005466031 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct  2 07:05:00 np0005466031 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct  2 07:05:00 np0005466031 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct  2 07:05:00 np0005466031 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct  2 07:05:00 np0005466031 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct  2 07:05:00 np0005466031 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct  2 07:05:00 np0005466031 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct  2 07:05:00 np0005466031 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct  2 07:05:00 np0005466031 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct  2 07:05:00 np0005466031 NetworkManager[858]: <info>  [1759403100.1223] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  2 07:05:00 np0005466031 systemd-udevd[3740]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:05:00 np0005466031 NetworkManager[858]: <info>  [1759403100.1416] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:05:00 np0005466031 NetworkManager[858]: <info>  [1759403100.1441] settings: (eth1): created default wired connection 'Wired connection 1'
Oct  2 07:05:00 np0005466031 NetworkManager[858]: <info>  [1759403100.1444] device (eth1): carrier: link connected
Oct  2 07:05:00 np0005466031 NetworkManager[858]: <info>  [1759403100.1446] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  2 07:05:00 np0005466031 NetworkManager[858]: <info>  [1759403100.1451] policy: auto-activating connection 'Wired connection 1' (ff62706e-bc6e-362c-8d12-d133aa06af80)
Oct  2 07:05:00 np0005466031 NetworkManager[858]: <info>  [1759403100.1456] device (eth1): Activation: starting connection 'Wired connection 1' (ff62706e-bc6e-362c-8d12-d133aa06af80)
Oct  2 07:05:00 np0005466031 NetworkManager[858]: <info>  [1759403100.1457] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:05:00 np0005466031 NetworkManager[858]: <info>  [1759403100.1458] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:05:00 np0005466031 NetworkManager[858]: <info>  [1759403100.1461] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:05:00 np0005466031 NetworkManager[858]: <info>  [1759403100.1465] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:05:01 np0005466031 systemd-logind[786]: New session 3 of user zuul.
Oct  2 07:05:01 np0005466031 systemd[1]: Started Session 3 of User zuul.
Oct  2 07:05:01 np0005466031 python3[3770]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-98ee-a261-0000000001f6-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:05:11 np0005466031 python3[3851]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:05:11 np0005466031 python3[3924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759403111.057746-206-169254859192088/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=dc1828b995bf7b42df843b224caf26b7f6af754f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:05:12 np0005466031 python3[3974]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:05:12 np0005466031 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  2 07:05:12 np0005466031 systemd[1]: Stopped Network Manager Wait Online.
Oct  2 07:05:12 np0005466031 systemd[1]: Stopping Network Manager Wait Online...
Oct  2 07:05:12 np0005466031 NetworkManager[858]: <info>  [1759403112.2437] caught SIGTERM, shutting down normally.
Oct  2 07:05:12 np0005466031 systemd[1]: Stopping Network Manager...
Oct  2 07:05:12 np0005466031 NetworkManager[858]: <info>  [1759403112.2451] dhcp4 (eth0): canceled DHCP transaction
Oct  2 07:05:12 np0005466031 NetworkManager[858]: <info>  [1759403112.2452] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:05:12 np0005466031 NetworkManager[858]: <info>  [1759403112.2452] dhcp4 (eth0): state changed no lease
Oct  2 07:05:12 np0005466031 NetworkManager[858]: <info>  [1759403112.2457] manager: NetworkManager state is now CONNECTING
Oct  2 07:05:12 np0005466031 NetworkManager[858]: <info>  [1759403112.2646] dhcp4 (eth1): canceled DHCP transaction
Oct  2 07:05:12 np0005466031 NetworkManager[858]: <info>  [1759403112.2647] dhcp4 (eth1): state changed no lease
Oct  2 07:05:12 np0005466031 NetworkManager[858]: <info>  [1759403112.2682] exiting (success)
Oct  2 07:05:12 np0005466031 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 07:05:12 np0005466031 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  2 07:05:12 np0005466031 systemd[1]: Stopped Network Manager.
Oct  2 07:05:12 np0005466031 systemd[1]: NetworkManager.service: Consumed 5.715s CPU time, 10.0M memory peak.
Oct  2 07:05:12 np0005466031 systemd[1]: Starting Network Manager...
Oct  2 07:05:12 np0005466031 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.3040] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:8f27f8ea-5ba0-4704-bd0a-df5680956f2e)
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.3043] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.3094] manager[0x55c94c140070]: monitoring kernel firmware directory '/lib/firmware'.
Oct  2 07:05:12 np0005466031 systemd[1]: Starting Hostname Service...
Oct  2 07:05:12 np0005466031 systemd[1]: Started Hostname Service.
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4128] hostname: hostname: using hostnamed
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4129] hostname: static hostname changed from (none) to "np0005466031.novalocal"
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4133] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4137] manager[0x55c94c140070]: rfkill: Wi-Fi hardware radio set enabled
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4138] manager[0x55c94c140070]: rfkill: WWAN hardware radio set enabled
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4161] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4161] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4161] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4162] manager: Networking is enabled by state file
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4164] settings: Loaded settings plugin: keyfile (internal)
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4167] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4188] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4199] dhcp: init: Using DHCP client 'internal'
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4201] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4205] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4209] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4215] device (lo): Activation: starting connection 'lo' (2b055021-40f0-49d2-a174-414e6fa87e08)
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4220] device (eth0): carrier: link connected
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4223] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4226] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4227] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4231] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4236] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4241] device (eth1): carrier: link connected
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4244] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4248] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (ff62706e-bc6e-362c-8d12-d133aa06af80) (indicated)
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4248] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4252] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4257] device (eth1): Activation: starting connection 'Wired connection 1' (ff62706e-bc6e-362c-8d12-d133aa06af80)
Oct  2 07:05:12 np0005466031 systemd[1]: Started Network Manager.
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4272] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4275] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4277] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4278] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4280] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4282] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4283] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4285] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4287] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4296] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4298] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4306] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4308] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4337] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4339] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4343] device (lo): Activation: successful, device activated.
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4348] dhcp4 (eth0): state changed new lease, address=38.129.56.167
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4352] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  2 07:05:12 np0005466031 systemd[1]: Starting Network Manager Wait Online...
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4400] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4419] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4420] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4423] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4425] device (eth0): Activation: successful, device activated.
Oct  2 07:05:12 np0005466031 NetworkManager[3978]: <info>  [1759403112.4429] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  2 07:05:12 np0005466031 python3[4059]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-98ee-a261-0000000000d3-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:05:22 np0005466031 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 07:05:42 np0005466031 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3488] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 07:05:57 np0005466031 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 07:05:57 np0005466031 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3716] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3721] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3739] device (eth1): Activation: successful, device activated.
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3747] manager: startup complete
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3751] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <warn>  [1759403157.3762] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3769] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct  2 07:05:57 np0005466031 systemd[1]: Finished Network Manager Wait Online.
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3866] dhcp4 (eth1): canceled DHCP transaction
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3866] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3866] dhcp4 (eth1): state changed no lease
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3879] policy: auto-activating connection 'ci-private-network' (191bec86-92f9-5707-b786-f82bc84237e2)
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3882] device (eth1): Activation: starting connection 'ci-private-network' (191bec86-92f9-5707-b786-f82bc84237e2)
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3883] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3887] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3893] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.3901] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.4653] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.4657] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:05:57 np0005466031 NetworkManager[3978]: <info>  [1759403157.4667] device (eth1): Activation: successful, device activated.
Oct  2 07:06:07 np0005466031 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 07:06:12 np0005466031 systemd[1]: session-3.scope: Deactivated successfully.
Oct  2 07:06:12 np0005466031 systemd[1]: session-3.scope: Consumed 1.493s CPU time.
Oct  2 07:06:12 np0005466031 systemd-logind[786]: Session 3 logged out. Waiting for processes to exit.
Oct  2 07:06:12 np0005466031 systemd-logind[786]: Removed session 3.
Oct  2 07:06:28 np0005466031 systemd-logind[786]: New session 4 of user zuul.
Oct  2 07:06:28 np0005466031 systemd[1]: Started Session 4 of User zuul.
Oct  2 07:06:29 np0005466031 python3[4170]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:06:29 np0005466031 python3[4243]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759403188.8405201-373-81014340660300/source _original_basename=tmphrkl_xdi follow=False checksum=c919e886c60bd4fe64e018977b3d3fbde98f63d3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:06:31 np0005466031 systemd[1]: session-4.scope: Deactivated successfully.
Oct  2 07:06:31 np0005466031 systemd-logind[786]: Session 4 logged out. Waiting for processes to exit.
Oct  2 07:06:31 np0005466031 systemd-logind[786]: Removed session 4.
Oct  2 07:07:36 np0005466031 systemd[1090]: Created slice User Background Tasks Slice.
Oct  2 07:07:36 np0005466031 systemd[1090]: Starting Cleanup of User's Temporary Files and Directories...
Oct  2 07:07:36 np0005466031 systemd[1090]: Finished Cleanup of User's Temporary Files and Directories.
Oct  2 07:12:53 np0005466031 systemd-logind[786]: New session 5 of user zuul.
Oct  2 07:12:53 np0005466031 systemd[1]: Started Session 5 of User zuul.
Oct  2 07:12:54 np0005466031 python3[4302]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-383d-cd01-000000000cac-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:12:54 np0005466031 python3[4331]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:12:55 np0005466031 python3[4357]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:12:55 np0005466031 python3[4383]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:12:55 np0005466031 python3[4409]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:12:56 np0005466031 python3[4435]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:12:56 np0005466031 python3[4435]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct  2 07:12:56 np0005466031 python3[4461]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:12:56 np0005466031 systemd[1]: Reloading.
Oct  2 07:12:56 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:12:58 np0005466031 python3[4518]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct  2 07:12:59 np0005466031 python3[4544]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:12:59 np0005466031 python3[4572]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:12:59 np0005466031 python3[4600]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:13:00 np0005466031 python3[4628]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:13:00 np0005466031 python3[4655]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-383d-cd01-000000000cb2-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:13:01 np0005466031 python3[4685]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:13:04 np0005466031 systemd[1]: session-5.scope: Deactivated successfully.
Oct  2 07:13:04 np0005466031 systemd[1]: session-5.scope: Consumed 3.478s CPU time.
Oct  2 07:13:04 np0005466031 systemd-logind[786]: Session 5 logged out. Waiting for processes to exit.
Oct  2 07:13:04 np0005466031 systemd-logind[786]: Removed session 5.
Oct  2 07:13:05 np0005466031 systemd-logind[786]: New session 6 of user zuul.
Oct  2 07:13:05 np0005466031 systemd[1]: Started Session 6 of User zuul.
Oct  2 07:13:06 np0005466031 python3[4718]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  2 07:13:32 np0005466031 kernel: SELinux:  Converting 365 SID table entries...
Oct  2 07:13:32 np0005466031 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:13:32 np0005466031 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:13:32 np0005466031 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:13:32 np0005466031 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:13:32 np0005466031 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:13:32 np0005466031 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:13:32 np0005466031 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:13:47 np0005466031 kernel: SELinux:  Converting 365 SID table entries...
Oct  2 07:13:47 np0005466031 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:13:47 np0005466031 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:13:47 np0005466031 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:13:47 np0005466031 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:13:47 np0005466031 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:13:47 np0005466031 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:13:47 np0005466031 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:13:58 np0005466031 kernel: SELinux:  Converting 365 SID table entries...
Oct  2 07:13:58 np0005466031 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:13:58 np0005466031 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:13:58 np0005466031 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:13:58 np0005466031 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:13:58 np0005466031 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:13:58 np0005466031 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:13:58 np0005466031 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:13:59 np0005466031 setsebool[4778]: The virt_use_nfs policy boolean was changed to 1 by root
Oct  2 07:13:59 np0005466031 setsebool[4778]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct  2 07:14:14 np0005466031 kernel: SELinux:  Converting 368 SID table entries...
Oct  2 07:14:14 np0005466031 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:14:14 np0005466031 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:14:14 np0005466031 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:14:14 np0005466031 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:14:14 np0005466031 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:14:14 np0005466031 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:14:14 np0005466031 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:14:35 np0005466031 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  2 07:14:35 np0005466031 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:14:35 np0005466031 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:14:35 np0005466031 systemd[1]: Reloading.
Oct  2 07:14:35 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:14:35 np0005466031 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:14:36 np0005466031 systemd[1]: Starting PackageKit Daemon...
Oct  2 07:14:36 np0005466031 systemd[1]: Starting Authorization Manager...
Oct  2 07:14:36 np0005466031 polkitd[6228]: Started polkitd version 0.117
Oct  2 07:14:36 np0005466031 systemd[1]: Started Authorization Manager.
Oct  2 07:14:36 np0005466031 systemd[1]: Started PackageKit Daemon.
Oct  2 07:14:43 np0005466031 python3[10864]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-52da-2fd4-00000000000c-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:14:44 np0005466031 kernel: evm: overlay not supported
Oct  2 07:14:44 np0005466031 systemd[1090]: Starting D-Bus User Message Bus...
Oct  2 07:14:44 np0005466031 dbus-broker-launch[11327]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  2 07:14:44 np0005466031 dbus-broker-launch[11327]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  2 07:14:44 np0005466031 systemd[1090]: Started D-Bus User Message Bus.
Oct  2 07:14:44 np0005466031 dbus-broker-lau[11327]: Ready
Oct  2 07:14:44 np0005466031 systemd[1090]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  2 07:14:44 np0005466031 systemd[1090]: Created slice Slice /user.
Oct  2 07:14:44 np0005466031 systemd[1090]: podman-11256.scope: unit configures an IP firewall, but not running as root.
Oct  2 07:14:44 np0005466031 systemd[1090]: (This warning is only shown for the first unit using IP firewalling.)
Oct  2 07:14:45 np0005466031 systemd[1090]: Started podman-11256.scope.
Oct  2 07:14:45 np0005466031 systemd[1090]: Started podman-pause-001e9480.scope.
Oct  2 07:14:45 np0005466031 python3[11674]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.136:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.136:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:14:46 np0005466031 systemd[1]: session-6.scope: Deactivated successfully.
Oct  2 07:14:46 np0005466031 systemd[1]: session-6.scope: Consumed 1min 6.350s CPU time.
Oct  2 07:14:46 np0005466031 systemd-logind[786]: Session 6 logged out. Waiting for processes to exit.
Oct  2 07:14:46 np0005466031 systemd-logind[786]: Removed session 6.
Oct  2 07:15:11 np0005466031 systemd-logind[786]: New session 7 of user zuul.
Oct  2 07:15:11 np0005466031 systemd[1]: Started Session 7 of User zuul.
Oct  2 07:15:11 np0005466031 python3[20354]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNsTHDrxDbYOuMoD7926Svbn4szasIp+JBKPLUL9nua54ooI00ganN5oLgNtcW5XoiwXGhTq8QJyxUTZ1zUgiQs= zuul@np0005466028.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:15:11 np0005466031 python3[20561]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNsTHDrxDbYOuMoD7926Svbn4szasIp+JBKPLUL9nua54ooI00ganN5oLgNtcW5XoiwXGhTq8QJyxUTZ1zUgiQs= zuul@np0005466028.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:15:12 np0005466031 python3[20883]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005466031.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct  2 07:15:13 np0005466031 python3[21220]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNsTHDrxDbYOuMoD7926Svbn4szasIp+JBKPLUL9nua54ooI00ganN5oLgNtcW5XoiwXGhTq8QJyxUTZ1zUgiQs= zuul@np0005466028.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:15:14 np0005466031 python3[21462]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:15:14 np0005466031 python3[21669]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759403713.845316-170-6351155624073/source _original_basename=tmp18it0ug6 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:15:15 np0005466031 python3[21863]: ansible-ansible.builtin.hostname Invoked with name=compute-2 use=systemd
Oct  2 07:15:15 np0005466031 systemd[1]: Starting Hostname Service...
Oct  2 07:15:15 np0005466031 systemd[1]: Started Hostname Service.
Oct  2 07:15:15 np0005466031 systemd-hostnamed[21923]: Changed pretty hostname to 'compute-2'
Oct  2 07:15:15 np0005466031 systemd-hostnamed[21923]: Hostname set to <compute-2> (static)
Oct  2 07:15:15 np0005466031 NetworkManager[3978]: <info>  [1759403715.6507] hostname: static hostname changed from "np0005466031.novalocal" to "compute-2"
Oct  2 07:15:15 np0005466031 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 07:15:15 np0005466031 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 07:15:15 np0005466031 systemd[1]: session-7.scope: Deactivated successfully.
Oct  2 07:15:15 np0005466031 systemd[1]: session-7.scope: Consumed 2.382s CPU time.
Oct  2 07:15:15 np0005466031 systemd-logind[786]: Session 7 logged out. Waiting for processes to exit.
Oct  2 07:15:15 np0005466031 systemd-logind[786]: Removed session 7.
Oct  2 07:15:20 np0005466031 irqbalance[781]: Cannot change IRQ 27 affinity: Operation not permitted
Oct  2 07:15:20 np0005466031 irqbalance[781]: IRQ 27 affinity is now unmanaged
Oct  2 07:15:25 np0005466031 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 07:15:30 np0005466031 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:15:30 np0005466031 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:15:30 np0005466031 systemd[1]: man-db-cache-update.service: Consumed 58.205s CPU time.
Oct  2 07:15:30 np0005466031 systemd[1]: run-re4254b50d90d434886663bfbe0191354.service: Deactivated successfully.
Oct  2 07:15:45 np0005466031 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 07:19:14 np0005466031 systemd-logind[786]: New session 8 of user zuul.
Oct  2 07:19:14 np0005466031 systemd[1]: Started Session 8 of User zuul.
Oct  2 07:19:15 np0005466031 python3[26686]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:19:16 np0005466031 python3[26802]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:17 np0005466031 python3[26875]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.6047118-30638-16682700653189/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb4c2ff9dad546f135d54d9729ea11b84117755d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:17 np0005466031 python3[26901]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:17 np0005466031 python3[26974]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.6047118-30638-16682700653189/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:18 np0005466031 python3[27000]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:18 np0005466031 python3[27073]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.6047118-30638-16682700653189/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:18 np0005466031 python3[27099]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:19 np0005466031 python3[27172]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.6047118-30638-16682700653189/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:19 np0005466031 python3[27198]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:20 np0005466031 python3[27271]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.6047118-30638-16682700653189/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:20 np0005466031 python3[27297]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:21 np0005466031 python3[27370]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.6047118-30638-16682700653189/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:21 np0005466031 python3[27396]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:21 np0005466031 python3[27469]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.6047118-30638-16682700653189/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=d911291791b114a72daf18f370e91cb1ae300933 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:33 np0005466031 python3[27517]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:19:42 np0005466031 systemd[1]: packagekit.service: Deactivated successfully.
Oct  2 07:24:33 np0005466031 systemd[1]: session-8.scope: Deactivated successfully.
Oct  2 07:24:33 np0005466031 systemd[1]: session-8.scope: Consumed 5.414s CPU time.
Oct  2 07:24:33 np0005466031 systemd-logind[786]: Session 8 logged out. Waiting for processes to exit.
Oct  2 07:24:33 np0005466031 systemd-logind[786]: Removed session 8.
Oct  2 07:34:07 np0005466031 systemd-logind[786]: New session 9 of user zuul.
Oct  2 07:34:07 np0005466031 systemd[1]: Started Session 9 of User zuul.
Oct  2 07:34:08 np0005466031 python3.9[27683]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:34:09 np0005466031 python3.9[27864]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:34:18 np0005466031 systemd[1]: session-9.scope: Deactivated successfully.
Oct  2 07:34:18 np0005466031 systemd[1]: session-9.scope: Consumed 7.503s CPU time.
Oct  2 07:34:18 np0005466031 systemd-logind[786]: Session 9 logged out. Waiting for processes to exit.
Oct  2 07:34:18 np0005466031 systemd-logind[786]: Removed session 9.
Oct  2 07:34:33 np0005466031 systemd-logind[786]: New session 10 of user zuul.
Oct  2 07:34:33 np0005466031 systemd[1]: Started Session 10 of User zuul.
Oct  2 07:34:34 np0005466031 python3.9[28074]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  2 07:34:35 np0005466031 python3.9[28248]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:34:36 np0005466031 python3.9[28400]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:34:37 np0005466031 python3.9[28553]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:34:38 np0005466031 python3.9[28705]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:34:39 np0005466031 python3.9[28857]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:34:39 np0005466031 python3.9[28980]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404878.6695333-185-209701636697357/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:34:40 np0005466031 python3.9[29132]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:34:41 np0005466031 python3.9[29288]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:34:42 np0005466031 python3.9[29438]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:34:48 np0005466031 python3.9[29693]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:34:49 np0005466031 python3.9[29843]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:34:50 np0005466031 python3.9[29997]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:34:51 np0005466031 python3.9[30155]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:34:52 np0005466031 python3.9[30239]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:35:36 np0005466031 systemd[1]: Reloading.
Oct  2 07:35:36 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:35:37 np0005466031 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct  2 07:35:37 np0005466031 systemd[1]: Reloading.
Oct  2 07:35:37 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:35:37 np0005466031 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  2 07:35:37 np0005466031 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  2 07:35:37 np0005466031 systemd[1]: Reloading.
Oct  2 07:35:37 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:35:37 np0005466031 systemd[1]: Listening on LVM2 poll daemon socket.
Oct  2 07:35:38 np0005466031 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Oct  2 07:35:38 np0005466031 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Oct  2 07:35:38 np0005466031 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Oct  2 07:36:40 np0005466031 kernel: SELinux:  Converting 2714 SID table entries...
Oct  2 07:36:40 np0005466031 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:36:40 np0005466031 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:36:40 np0005466031 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:36:40 np0005466031 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:36:40 np0005466031 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:36:40 np0005466031 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:36:40 np0005466031 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:36:41 np0005466031 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct  2 07:36:41 np0005466031 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:36:41 np0005466031 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:36:41 np0005466031 systemd[1]: Reloading.
Oct  2 07:36:41 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:36:41 np0005466031 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:36:41 np0005466031 systemd[1]: Starting PackageKit Daemon...
Oct  2 07:36:41 np0005466031 systemd[1]: Started PackageKit Daemon.
Oct  2 07:36:42 np0005466031 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:36:42 np0005466031 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:36:42 np0005466031 systemd[1]: man-db-cache-update.service: Consumed 1.128s CPU time.
Oct  2 07:36:42 np0005466031 systemd[1]: run-r8486c0b05cf24281b789d20116061403.service: Deactivated successfully.
Oct  2 07:36:52 np0005466031 python3.9[31747]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:36:55 np0005466031 python3.9[32028]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  2 07:36:56 np0005466031 python3.9[32180]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  2 07:36:59 np0005466031 python3.9[32334]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:37:00 np0005466031 python3.9[32486]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  2 07:37:01 np0005466031 python3.9[32638]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:37:02 np0005466031 python3.9[32790]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:37:02 np0005466031 python3.9[32913]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405021.9427795-648-225080698748597/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:37:07 np0005466031 python3.9[33066]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  2 07:37:08 np0005466031 python3.9[33219]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:37:08 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:37:09 np0005466031 python3.9[33378]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 07:37:10 np0005466031 python3.9[33538]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  2 07:37:11 np0005466031 python3.9[33691]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:37:12 np0005466031 python3.9[33849]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  2 07:37:13 np0005466031 python3.9[34001]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:37:15 np0005466031 python3.9[34154]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:37:16 np0005466031 python3.9[34306]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:37:17 np0005466031 python3.9[34429]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405036.1519687-932-191463167963727/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:37:18 np0005466031 python3.9[34581]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:37:18 np0005466031 systemd[1]: Starting Load Kernel Modules...
Oct  2 07:37:18 np0005466031 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  2 07:37:18 np0005466031 kernel: Bridge firewalling registered
Oct  2 07:37:18 np0005466031 systemd-modules-load[34585]: Inserted module 'br_netfilter'
Oct  2 07:37:18 np0005466031 systemd[1]: Finished Load Kernel Modules.
Oct  2 07:37:19 np0005466031 python3.9[34740]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:37:19 np0005466031 python3.9[34863]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405038.5745292-1001-25228237732293/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:37:20 np0005466031 python3.9[35015]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:37:23 np0005466031 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Oct  2 07:37:24 np0005466031 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Oct  2 07:37:24 np0005466031 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:37:24 np0005466031 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:37:24 np0005466031 systemd[1]: Reloading.
Oct  2 07:37:24 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:37:24 np0005466031 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:37:26 np0005466031 python3.9[37034]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:37:27 np0005466031 python3.9[38138]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  2 07:37:28 np0005466031 python3.9[38733]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:37:29 np0005466031 python3.9[39180]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:37:29 np0005466031 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  2 07:37:30 np0005466031 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  2 07:37:30 np0005466031 irqbalance[781]: Cannot change IRQ 26 affinity: Operation not permitted
Oct  2 07:37:30 np0005466031 irqbalance[781]: IRQ 26 affinity is now unmanaged
Oct  2 07:37:30 np0005466031 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:37:30 np0005466031 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:37:30 np0005466031 systemd[1]: man-db-cache-update.service: Consumed 5.101s CPU time.
Oct  2 07:37:30 np0005466031 systemd[1]: run-rafbee56a14ec4b1dba4d548c771de296.service: Deactivated successfully.
Oct  2 07:37:31 np0005466031 python3.9[39554]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:37:31 np0005466031 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  2 07:37:31 np0005466031 systemd[1]: tuned.service: Deactivated successfully.
Oct  2 07:37:31 np0005466031 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  2 07:37:31 np0005466031 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  2 07:37:31 np0005466031 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  2 07:37:32 np0005466031 python3.9[39716]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  2 07:37:35 np0005466031 python3.9[39868]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:37:35 np0005466031 systemd[1]: Reloading.
Oct  2 07:37:35 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:37:36 np0005466031 python3.9[40058]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:37:36 np0005466031 systemd[1]: Reloading.
Oct  2 07:37:36 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:37:37 np0005466031 python3.9[40246]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:37:38 np0005466031 python3.9[40399]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:37:38 np0005466031 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  2 07:37:40 np0005466031 python3.9[40552]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:37:42 np0005466031 python3.9[40714]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:37:43 np0005466031 python3.9[40867]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:37:43 np0005466031 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  2 07:37:43 np0005466031 systemd[1]: Stopped Apply Kernel Variables.
Oct  2 07:37:43 np0005466031 systemd[1]: Stopping Apply Kernel Variables...
Oct  2 07:37:43 np0005466031 systemd[1]: Starting Apply Kernel Variables...
Oct  2 07:37:43 np0005466031 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  2 07:37:43 np0005466031 systemd[1]: Finished Apply Kernel Variables.
Oct  2 07:37:44 np0005466031 systemd[1]: session-10.scope: Deactivated successfully.
Oct  2 07:37:44 np0005466031 systemd[1]: session-10.scope: Consumed 2min 10.789s CPU time.
Oct  2 07:37:44 np0005466031 systemd-logind[786]: Session 10 logged out. Waiting for processes to exit.
Oct  2 07:37:44 np0005466031 systemd-logind[786]: Removed session 10.
Oct  2 07:37:50 np0005466031 systemd-logind[786]: New session 11 of user zuul.
Oct  2 07:37:50 np0005466031 systemd[1]: Started Session 11 of User zuul.
Oct  2 07:37:51 np0005466031 python3.9[41051]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:37:52 np0005466031 python3.9[41207]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  2 07:37:53 np0005466031 python3.9[41360]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:37:55 np0005466031 python3.9[41518]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 07:37:56 np0005466031 python3.9[41678]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:37:57 np0005466031 python3.9[41762]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 07:38:00 np0005466031 python3.9[41926]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:12 np0005466031 kernel: SELinux:  Converting 2724 SID table entries...
Oct  2 07:38:12 np0005466031 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:38:12 np0005466031 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:38:12 np0005466031 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:38:12 np0005466031 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:38:12 np0005466031 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:38:12 np0005466031 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:38:12 np0005466031 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:38:12 np0005466031 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct  2 07:38:12 np0005466031 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  2 07:38:13 np0005466031 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:38:13 np0005466031 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:38:13 np0005466031 systemd[1]: Reloading.
Oct  2 07:38:13 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:38:13 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:38:14 np0005466031 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:38:14 np0005466031 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:38:14 np0005466031 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:38:14 np0005466031 systemd[1]: run-r3653b5d16b5b46f1b63ac19b6499e32d.service: Deactivated successfully.
Oct  2 07:38:19 np0005466031 python3.9[43027]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:38:19 np0005466031 systemd[1]: Reloading.
Oct  2 07:38:19 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:38:19 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:38:19 np0005466031 systemd[1]: Starting Open vSwitch Database Unit...
Oct  2 07:38:19 np0005466031 chown[43069]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  2 07:38:19 np0005466031 ovs-ctl[43074]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct  2 07:38:19 np0005466031 ovs-ctl[43074]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct  2 07:38:19 np0005466031 ovs-ctl[43074]: Starting ovsdb-server [  OK  ]
Oct  2 07:38:19 np0005466031 ovs-vsctl[43123]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  2 07:38:19 np0005466031 ovs-vsctl[43143]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"b9588630-ee40-495c-89d2-4219f6b0f0b5\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  2 07:38:19 np0005466031 ovs-ctl[43074]: Configuring Open vSwitch system IDs [  OK  ]
Oct  2 07:38:19 np0005466031 ovs-ctl[43074]: Enabling remote OVSDB managers [  OK  ]
Oct  2 07:38:19 np0005466031 systemd[1]: Started Open vSwitch Database Unit.
Oct  2 07:38:19 np0005466031 ovs-vsctl[43149]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Oct  2 07:38:19 np0005466031 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  2 07:38:19 np0005466031 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  2 07:38:19 np0005466031 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  2 07:38:19 np0005466031 kernel: openvswitch: Open vSwitch switching datapath
Oct  2 07:38:19 np0005466031 ovs-ctl[43193]: Inserting openvswitch module [  OK  ]
Oct  2 07:38:19 np0005466031 ovs-ctl[43162]: Starting ovs-vswitchd [  OK  ]
Oct  2 07:38:19 np0005466031 ovs-vsctl[43212]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Oct  2 07:38:19 np0005466031 ovs-ctl[43162]: Enabling remote OVSDB managers [  OK  ]
Oct  2 07:38:19 np0005466031 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  2 07:38:19 np0005466031 systemd[1]: Starting Open vSwitch...
Oct  2 07:38:19 np0005466031 systemd[1]: Finished Open vSwitch.
Oct  2 07:38:20 np0005466031 python3.9[43363]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:38:21 np0005466031 python3.9[43515]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  2 07:38:22 np0005466031 kernel: SELinux:  Converting 2738 SID table entries...
Oct  2 07:38:22 np0005466031 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:38:22 np0005466031 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:38:22 np0005466031 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:38:22 np0005466031 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:38:22 np0005466031 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:38:22 np0005466031 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:38:22 np0005466031 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:38:24 np0005466031 python3.9[43670]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:38:25 np0005466031 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct  2 07:38:25 np0005466031 python3.9[43828]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:27 np0005466031 python3.9[43981]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:38:29 np0005466031 python3.9[44268]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 07:38:30 np0005466031 python3.9[44418]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:38:30 np0005466031 python3.9[44572]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:32 np0005466031 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:38:32 np0005466031 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:38:32 np0005466031 systemd[1]: Reloading.
Oct  2 07:38:33 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:38:33 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:38:33 np0005466031 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:38:34 np0005466031 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:38:34 np0005466031 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:38:34 np0005466031 systemd[1]: run-r049e807d1d6f4662a37af812a397c8c2.service: Deactivated successfully.
Oct  2 07:38:34 np0005466031 python3.9[44889]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:38:34 np0005466031 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  2 07:38:34 np0005466031 systemd[1]: Stopped Network Manager Wait Online.
Oct  2 07:38:34 np0005466031 systemd[1]: Stopping Network Manager Wait Online...
Oct  2 07:38:34 np0005466031 systemd[1]: Stopping Network Manager...
Oct  2 07:38:34 np0005466031 NetworkManager[3978]: <info>  [1759405114.7546] caught SIGTERM, shutting down normally.
Oct  2 07:38:34 np0005466031 NetworkManager[3978]: <info>  [1759405114.7575] dhcp4 (eth0): canceled DHCP transaction
Oct  2 07:38:34 np0005466031 NetworkManager[3978]: <info>  [1759405114.7575] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:38:34 np0005466031 NetworkManager[3978]: <info>  [1759405114.7576] dhcp4 (eth0): state changed no lease
Oct  2 07:38:34 np0005466031 NetworkManager[3978]: <info>  [1759405114.7580] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 07:38:34 np0005466031 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 07:38:34 np0005466031 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 07:38:34 np0005466031 NetworkManager[3978]: <info>  [1759405114.8093] exiting (success)
Oct  2 07:38:34 np0005466031 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  2 07:38:34 np0005466031 systemd[1]: Stopped Network Manager.
Oct  2 07:38:34 np0005466031 systemd[1]: NetworkManager.service: Consumed 12.191s CPU time, 4.1M memory peak, read 0B from disk, written 14.5K to disk.
Oct  2 07:38:34 np0005466031 systemd[1]: Starting Network Manager...
Oct  2 07:38:34 np0005466031 NetworkManager[44907]: <info>  [1759405114.9242] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:8f27f8ea-5ba0-4704-bd0a-df5680956f2e)
Oct  2 07:38:34 np0005466031 NetworkManager[44907]: <info>  [1759405114.9243] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  2 07:38:34 np0005466031 NetworkManager[44907]: <info>  [1759405114.9319] manager[0x561c0b6d9090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  2 07:38:34 np0005466031 systemd[1]: Starting Hostname Service...
Oct  2 07:38:35 np0005466031 systemd[1]: Started Hostname Service.
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0172] hostname: hostname: using hostnamed
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0173] hostname: static hostname changed from (none) to "compute-2"
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0189] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0200] manager[0x561c0b6d9090]: rfkill: Wi-Fi hardware radio set enabled
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0200] manager[0x561c0b6d9090]: rfkill: WWAN hardware radio set enabled
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0235] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0243] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0244] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0245] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0245] manager: Networking is enabled by state file
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0248] settings: Loaded settings plugin: keyfile (internal)
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0251] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0285] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0297] dhcp: init: Using DHCP client 'internal'
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0300] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0306] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0310] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0319] device (lo): Activation: starting connection 'lo' (2b055021-40f0-49d2-a174-414e6fa87e08)
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0326] device (eth0): carrier: link connected
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0329] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0335] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0336] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0343] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0350] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0355] device (eth1): carrier: link connected
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0358] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0366] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (191bec86-92f9-5707-b786-f82bc84237e2) (indicated)
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0366] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0374] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0381] device (eth1): Activation: starting connection 'ci-private-network' (191bec86-92f9-5707-b786-f82bc84237e2)
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0387] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  2 07:38:35 np0005466031 systemd[1]: Started Network Manager.
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0409] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0413] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0414] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0417] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0423] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0425] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0428] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0432] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0439] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0441] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0459] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0475] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0485] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0488] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0495] device (lo): Activation: successful, device activated.
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0504] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0507] dhcp4 (eth0): state changed new lease, address=38.129.56.167
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0510] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0513] manager: NetworkManager state is now CONNECTED_LOCAL
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0517] device (eth1): Activation: successful, device activated.
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0530] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0606] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0643] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0645] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0652] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0656] device (eth0): Activation: successful, device activated.
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0661] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  2 07:38:35 np0005466031 NetworkManager[44907]: <info>  [1759405115.0685] manager: startup complete
Oct  2 07:38:35 np0005466031 systemd[1]: Starting Network Manager Wait Online...
Oct  2 07:38:35 np0005466031 systemd[1]: Finished Network Manager Wait Online.
Oct  2 07:38:35 np0005466031 python3.9[45116]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:43 np0005466031 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:38:43 np0005466031 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:38:43 np0005466031 systemd[1]: Reloading.
Oct  2 07:38:43 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:38:43 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:38:43 np0005466031 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:38:44 np0005466031 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:38:44 np0005466031 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:38:44 np0005466031 systemd[1]: run-ra699b516fd2646bf91b6bc8048a14d27.service: Deactivated successfully.
Oct  2 07:38:45 np0005466031 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 07:38:47 np0005466031 python3.9[45579]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:38:48 np0005466031 python3.9[45731]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:49 np0005466031 python3.9[45885]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:49 np0005466031 python3.9[46037]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:50 np0005466031 python3.9[46189]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:50 np0005466031 python3.9[46341]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:51 np0005466031 python3.9[46493]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:38:52 np0005466031 python3.9[46616]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405131.218369-654-129647890745281/.source _original_basename=.95i2yqoy follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:53 np0005466031 python3.9[46768]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:53 np0005466031 python3.9[46920]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct  2 07:38:54 np0005466031 python3.9[47072]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:56 np0005466031 python3.9[47499]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct  2 07:38:57 np0005466031 ansible-async_wrapper.py[47674]: Invoked with j570358215356 300 /home/zuul/.ansible/tmp/ansible-tmp-1759405136.9859462-852-31535536074587/AnsiballZ_edpm_os_net_config.py _
Oct  2 07:38:57 np0005466031 ansible-async_wrapper.py[47677]: Starting module and watcher
Oct  2 07:38:57 np0005466031 ansible-async_wrapper.py[47677]: Start watching 47678 (300)
Oct  2 07:38:57 np0005466031 ansible-async_wrapper.py[47678]: Start module (47678)
Oct  2 07:38:57 np0005466031 ansible-async_wrapper.py[47674]: Return async_wrapper task started.
Oct  2 07:38:58 np0005466031 python3.9[47679]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct  2 07:38:58 np0005466031 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  2 07:38:58 np0005466031 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  2 07:38:58 np0005466031 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  2 07:38:58 np0005466031 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  2 07:38:58 np0005466031 kernel: cfg80211: failed to load regulatory.db
Oct  2 07:38:59 np0005466031 NetworkManager[44907]: <info>  [1759405139.9357] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47680 uid=0 result="success"
Oct  2 07:38:59 np0005466031 NetworkManager[44907]: <info>  [1759405139.9378] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47680 uid=0 result="success"
Oct  2 07:38:59 np0005466031 NetworkManager[44907]: <info>  [1759405139.9951] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct  2 07:38:59 np0005466031 NetworkManager[44907]: <info>  [1759405139.9954] audit: op="connection-add" uuid="c5555323-95eb-4f62-accb-cf510897eeaf" name="br-ex-br" pid=47680 uid=0 result="success"
Oct  2 07:38:59 np0005466031 NetworkManager[44907]: <info>  [1759405139.9969] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct  2 07:38:59 np0005466031 NetworkManager[44907]: <info>  [1759405139.9971] audit: op="connection-add" uuid="ec5d319e-e184-4fce-b79e-6310246ce9cf" name="br-ex-port" pid=47680 uid=0 result="success"
Oct  2 07:38:59 np0005466031 NetworkManager[44907]: <info>  [1759405139.9985] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct  2 07:38:59 np0005466031 NetworkManager[44907]: <info>  [1759405139.9987] audit: op="connection-add" uuid="72ba3370-cc9c-4eb7-b85b-435d9f83f85a" name="eth1-port" pid=47680 uid=0 result="success"
Oct  2 07:38:59 np0005466031 NetworkManager[44907]: <info>  [1759405139.9999] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0001] audit: op="connection-add" uuid="5cf6dddb-840b-4ffa-9103-41be193737a7" name="vlan20-port" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0013] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0015] audit: op="connection-add" uuid="2e0ae780-e751-4fe3-8258-ec34a2a224e1" name="vlan21-port" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0025] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0027] audit: op="connection-add" uuid="1bc7fc06-4d74-4fa6-9ce5-b043a518b4d1" name="vlan22-port" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0038] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0040] audit: op="connection-add" uuid="25af83fd-03f3-43fe-9689-13d023d31cd2" name="vlan23-port" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0060] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0077] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0079] audit: op="connection-add" uuid="b6ca4506-29c8-4e46-9f9d-d2d12752276f" name="br-ex-if" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0118] audit: op="connection-update" uuid="191bec86-92f9-5707-b786-f82bc84237e2" name="ci-private-network" args="ovs-interface.type,connection.slave-type,connection.master,connection.port-type,connection.controller,connection.timestamp,ipv4.dns,ipv4.routing-rules,ipv4.method,ipv4.never-default,ipv4.addresses,ipv4.routes,ipv6.addr-gen-mode,ipv6.dns,ipv6.routing-rules,ipv6.method,ipv6.addresses,ipv6.routes,ovs-external-ids.data" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0137] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0139] audit: op="connection-add" uuid="7f5d9ec8-b701-48ef-967b-b1427df1697d" name="vlan20-if" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0156] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0157] audit: op="connection-add" uuid="6e32e625-d708-455d-b095-8bc5103c4395" name="vlan21-if" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0174] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0176] audit: op="connection-add" uuid="ccbec305-d8d0-4aa8-bccd-e2673b850aec" name="vlan22-if" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0191] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0193] audit: op="connection-add" uuid="ef6b5053-8eb9-439f-a18f-f9dcc8489d65" name="vlan23-if" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0205] audit: op="connection-delete" uuid="ff62706e-bc6e-362c-8d12-d133aa06af80" name="Wired connection 1" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0218] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0228] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0231] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (c5555323-95eb-4f62-accb-cf510897eeaf)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0232] audit: op="connection-activate" uuid="c5555323-95eb-4f62-accb-cf510897eeaf" name="br-ex-br" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0234] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0240] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0243] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (ec5d319e-e184-4fce-b79e-6310246ce9cf)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0245] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0249] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0252] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (72ba3370-cc9c-4eb7-b85b-435d9f83f85a)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0254] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0259] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0263] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (5cf6dddb-840b-4ffa-9103-41be193737a7)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0265] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0272] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0276] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (2e0ae780-e751-4fe3-8258-ec34a2a224e1)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0279] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0286] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0290] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (1bc7fc06-4d74-4fa6-9ce5-b043a518b4d1)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0291] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0297] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0300] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (25af83fd-03f3-43fe-9689-13d023d31cd2)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0301] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0304] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0306] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0311] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0317] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0320] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (b6ca4506-29c8-4e46-9f9d-d2d12752276f)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0321] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0324] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0327] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0328] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0330] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0338] device (eth1): disconnecting for new activation request.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0339] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0343] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0345] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0347] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0349] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0356] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0361] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (7f5d9ec8-b701-48ef-967b-b1427df1697d)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0362] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0365] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0367] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0369] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0371] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0376] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0380] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (6e32e625-d708-455d-b095-8bc5103c4395)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0381] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0384] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0387] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0389] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0391] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0396] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0400] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (ccbec305-d8d0-4aa8-bccd-e2673b850aec)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0401] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0404] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0406] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0408] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0410] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0417] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0421] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (ef6b5053-8eb9-439f-a18f-f9dcc8489d65)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0422] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0427] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0430] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0431] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0432] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0443] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0446] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0449] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0450] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0457] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0460] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0463] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0467] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0470] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 kernel: ovs-system: entered promiscuous mode
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0475] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0480] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0483] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0486] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0490] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0494] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0497] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0500] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 kernel: Timeout policy base is empty
Oct  2 07:39:00 np0005466031 systemd-udevd[47684]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0508] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0512] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0515] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0516] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0521] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0525] dhcp4 (eth0): canceled DHCP transaction
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0525] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0526] dhcp4 (eth0): state changed no lease
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0528] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0544] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0547] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47680 uid=0 result="fail" reason="Device is not activated"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0581] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0585] dhcp4 (eth0): state changed new lease, address=38.129.56.167
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0590] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  2 07:39:00 np0005466031 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0628] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0652] device (eth1): disconnecting for new activation request.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0653] audit: op="connection-activate" uuid="191bec86-92f9-5707-b786-f82bc84237e2" name="ci-private-network" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0660] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0770] device (eth1): Activation: starting connection 'ci-private-network' (191bec86-92f9-5707-b786-f82bc84237e2)
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0776] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0781] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0796] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0800] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0806] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0810] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0813] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0814] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0815] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0816] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0817] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0818] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0819] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47680 uid=0 result="success"
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0821] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0826] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0829] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0832] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0835] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0838] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0842] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0845] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0849] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0852] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0855] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0858] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0861] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0866] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0869] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0923] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0927] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.0935] device (eth1): Activation: successful, device activated.
Oct  2 07:39:00 np0005466031 kernel: br-ex: entered promiscuous mode
Oct  2 07:39:00 np0005466031 kernel: vlan22: entered promiscuous mode
Oct  2 07:39:00 np0005466031 systemd-udevd[47683]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1119] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:39:00 np0005466031 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1135] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1150] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1152] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1157] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466031 kernel: vlan23: entered promiscuous mode
Oct  2 07:39:00 np0005466031 kernel: vlan20: entered promiscuous mode
Oct  2 07:39:00 np0005466031 kernel: vlan21: entered promiscuous mode
Oct  2 07:39:00 np0005466031 systemd-udevd[47685]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1373] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1379] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1418] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1429] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1447] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1458] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1458] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1460] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1468] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1475] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1481] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1490] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1512] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1531] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1540] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1542] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1550] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1559] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1561] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466031 NetworkManager[44907]: <info>  [1759405140.1569] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:39:01 np0005466031 NetworkManager[44907]: <info>  [1759405141.3047] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47680 uid=0 result="success"
Oct  2 07:39:01 np0005466031 NetworkManager[44907]: <info>  [1759405141.4710] checkpoint[0x561c0b6ae950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct  2 07:39:01 np0005466031 NetworkManager[44907]: <info>  [1759405141.4712] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47680 uid=0 result="success"
Oct  2 07:39:01 np0005466031 python3.9[48038]: ansible-ansible.legacy.async_status Invoked with jid=j570358215356.47674 mode=status _async_dir=/root/.ansible_async
Oct  2 07:39:01 np0005466031 NetworkManager[44907]: <info>  [1759405141.7378] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47680 uid=0 result="success"
Oct  2 07:39:01 np0005466031 NetworkManager[44907]: <info>  [1759405141.7393] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47680 uid=0 result="success"
Oct  2 07:39:01 np0005466031 NetworkManager[44907]: <info>  [1759405141.9779] audit: op="networking-control" arg="global-dns-configuration" pid=47680 uid=0 result="success"
Oct  2 07:39:01 np0005466031 NetworkManager[44907]: <info>  [1759405141.9923] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct  2 07:39:01 np0005466031 NetworkManager[44907]: <info>  [1759405141.9968] audit: op="networking-control" arg="global-dns-configuration" pid=47680 uid=0 result="success"
Oct  2 07:39:02 np0005466031 NetworkManager[44907]: <info>  [1759405142.0416] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47680 uid=0 result="success"
Oct  2 07:39:02 np0005466031 NetworkManager[44907]: <info>  [1759405142.2389] checkpoint[0x561c0b6aea20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct  2 07:39:02 np0005466031 NetworkManager[44907]: <info>  [1759405142.2396] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47680 uid=0 result="success"
Oct  2 07:39:02 np0005466031 ansible-async_wrapper.py[47678]: Module complete (47678)
Oct  2 07:39:02 np0005466031 ansible-async_wrapper.py[47677]: Done in kid B.
Oct  2 07:39:05 np0005466031 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 07:39:05 np0005466031 python3.9[48144]: ansible-ansible.legacy.async_status Invoked with jid=j570358215356.47674 mode=status _async_dir=/root/.ansible_async
Oct  2 07:39:05 np0005466031 python3.9[48246]: ansible-ansible.legacy.async_status Invoked with jid=j570358215356.47674 mode=cleanup _async_dir=/root/.ansible_async
Oct  2 07:39:06 np0005466031 python3.9[48398]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:06 np0005466031 python3.9[48521]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405145.913549-933-243637657317141/.source.returncode _original_basename=.drt9bpnv follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:07 np0005466031 python3.9[48673]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:08 np0005466031 python3.9[48796]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405147.245924-981-6999380476298/.source.cfg _original_basename=.fazol5t7 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:09 np0005466031 python3.9[48949]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:39:09 np0005466031 systemd[1]: Reloading Network Manager...
Oct  2 07:39:09 np0005466031 NetworkManager[44907]: <info>  [1759405149.2919] audit: op="reload" arg="0" pid=48953 uid=0 result="success"
Oct  2 07:39:09 np0005466031 NetworkManager[44907]: <info>  [1759405149.2924] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct  2 07:39:09 np0005466031 systemd[1]: Reloaded Network Manager.
Oct  2 07:39:09 np0005466031 systemd-logind[786]: Session 11 logged out. Waiting for processes to exit.
Oct  2 07:39:09 np0005466031 systemd[1]: session-11.scope: Deactivated successfully.
Oct  2 07:39:09 np0005466031 systemd[1]: session-11.scope: Consumed 49.383s CPU time.
Oct  2 07:39:09 np0005466031 systemd-logind[786]: Removed session 11.
Oct  2 07:39:14 np0005466031 systemd-logind[786]: New session 12 of user zuul.
Oct  2 07:39:14 np0005466031 systemd[1]: Started Session 12 of User zuul.
Oct  2 07:39:15 np0005466031 python3.9[49137]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:39:16 np0005466031 python3.9[49291]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:39:18 np0005466031 python3.9[49485]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:39:18 np0005466031 systemd[1]: session-12.scope: Deactivated successfully.
Oct  2 07:39:18 np0005466031 systemd[1]: session-12.scope: Consumed 2.167s CPU time.
Oct  2 07:39:18 np0005466031 systemd-logind[786]: Session 12 logged out. Waiting for processes to exit.
Oct  2 07:39:18 np0005466031 systemd-logind[786]: Removed session 12.
Oct  2 07:39:19 np0005466031 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 07:39:24 np0005466031 systemd-logind[786]: New session 13 of user zuul.
Oct  2 07:39:24 np0005466031 systemd[1]: Started Session 13 of User zuul.
Oct  2 07:39:25 np0005466031 python3.9[49667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:39:26 np0005466031 python3.9[49821]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:39:27 np0005466031 python3.9[49977]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:39:28 np0005466031 python3.9[50062]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:39:30 np0005466031 python3.9[50215]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:39:31 np0005466031 python3.9[50411]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:32 np0005466031 python3.9[50563]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:39:32 np0005466031 podman[50564]: 2025-10-02 11:39:32.457772109 +0000 UTC m=+0.048919985 system refresh
Oct  2 07:39:33 np0005466031 python3.9[50727]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:33 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:39:34 np0005466031 python3.9[50850]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405172.7357485-204-39677302182837/.source.json follow=False _original_basename=podman_network_config.j2 checksum=d94203069a21e1a80535dcd0baada3635b2fb08c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:34 np0005466031 python3.9[51002]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:35 np0005466031 python3.9[51125]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405174.2209444-250-129776626433676/.source.conf follow=False _original_basename=registries.conf.j2 checksum=2f54462ce13fc7f0e9dc5b3970581b7761b51f34 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:39:36 np0005466031 python3.9[51277]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:39:36 np0005466031 python3.9[51429]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:39:37 np0005466031 python3.9[51581]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:39:38 np0005466031 python3.9[51733]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:39:38 np0005466031 python3.9[51885]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:39:41 np0005466031 python3.9[52038]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:39:41 np0005466031 python3.9[52192]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:39:42 np0005466031 python3.9[52344]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:39:43 np0005466031 python3.9[52496]: ansible-service_facts Invoked
Oct  2 07:39:43 np0005466031 network[52513]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:39:43 np0005466031 network[52514]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:39:43 np0005466031 network[52515]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:39:48 np0005466031 python3.9[52969]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:39:50 np0005466031 python3.9[53122]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  2 07:39:52 np0005466031 python3.9[53274]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:52 np0005466031 python3.9[53399]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405191.714555-645-24118490459728/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:53 np0005466031 python3.9[53553]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:54 np0005466031 python3.9[53678]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405193.0532377-692-276001090714039/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:55 np0005466031 python3.9[53832]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:57 np0005466031 python3.9[53986]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:39:58 np0005466031 python3.9[54070]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:39:59 np0005466031 python3.9[54224]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:40:00 np0005466031 python3.9[54308]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:40:00 np0005466031 chronyd[799]: chronyd exiting
Oct  2 07:40:00 np0005466031 systemd[1]: Stopping NTP client/server...
Oct  2 07:40:00 np0005466031 systemd[1]: chronyd.service: Deactivated successfully.
Oct  2 07:40:00 np0005466031 systemd[1]: Stopped NTP client/server.
Oct  2 07:40:00 np0005466031 systemd[1]: Starting NTP client/server...
Oct  2 07:40:00 np0005466031 chronyd[54317]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  2 07:40:00 np0005466031 chronyd[54317]: Frequency -28.810 +/- 0.243 ppm read from /var/lib/chrony/drift
Oct  2 07:40:00 np0005466031 chronyd[54317]: Loaded seccomp filter (level 2)
Oct  2 07:40:00 np0005466031 systemd[1]: Started NTP client/server.
Oct  2 07:40:00 np0005466031 systemd[1]: session-13.scope: Deactivated successfully.
Oct  2 07:40:00 np0005466031 systemd[1]: session-13.scope: Consumed 24.224s CPU time.
Oct  2 07:40:00 np0005466031 systemd-logind[786]: Session 13 logged out. Waiting for processes to exit.
Oct  2 07:40:00 np0005466031 systemd-logind[786]: Removed session 13.
Oct  2 07:40:06 np0005466031 systemd-logind[786]: New session 14 of user zuul.
Oct  2 07:40:06 np0005466031 systemd[1]: Started Session 14 of User zuul.
Oct  2 07:40:07 np0005466031 python3.9[54498]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:08 np0005466031 python3.9[54650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:08 np0005466031 python3.9[54773]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405207.5661705-69-240288390609559/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:09 np0005466031 systemd[1]: session-14.scope: Deactivated successfully.
Oct  2 07:40:09 np0005466031 systemd[1]: session-14.scope: Consumed 1.482s CPU time.
Oct  2 07:40:09 np0005466031 systemd-logind[786]: Session 14 logged out. Waiting for processes to exit.
Oct  2 07:40:09 np0005466031 systemd-logind[786]: Removed session 14.
Oct  2 07:40:14 np0005466031 systemd-logind[786]: New session 15 of user zuul.
Oct  2 07:40:14 np0005466031 systemd[1]: Started Session 15 of User zuul.
Oct  2 07:40:15 np0005466031 python3.9[54951]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:40:16 np0005466031 python3.9[55107]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:17 np0005466031 python3.9[55282]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:18 np0005466031 python3.9[55405]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759405216.9658337-90-169471353630611/.source.json _original_basename=.k98893fj follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:19 np0005466031 python3.9[55557]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:19 np0005466031 python3.9[55680]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405218.824599-159-111723785926414/.source _original_basename=.o6jgu3wr follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:20 np0005466031 python3.9[55832]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:40:21 np0005466031 python3.9[55984]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:21 np0005466031 python3.9[56107]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405220.6848383-231-167167635005217/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:40:22 np0005466031 python3.9[56259]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:22 np0005466031 python3.9[56382]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405221.8346941-231-34656999168579/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:40:23 np0005466031 python3.9[56534]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:24 np0005466031 python3.9[56686]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:24 np0005466031 python3.9[56809]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405223.6764314-342-158706430057135/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:25 np0005466031 python3.9[56961]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:25 np0005466031 python3.9[57084]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405224.8948524-387-256553958401339/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:26 np0005466031 python3.9[57236]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:40:27 np0005466031 systemd[1]: Reloading.
Oct  2 07:40:27 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:40:27 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:40:27 np0005466031 systemd[1]: Reloading.
Oct  2 07:40:27 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:40:27 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:40:27 np0005466031 systemd[1]: Starting EDPM Container Shutdown...
Oct  2 07:40:27 np0005466031 systemd[1]: Finished EDPM Container Shutdown.
Oct  2 07:40:28 np0005466031 python3.9[57463]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:28 np0005466031 python3.9[57586]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405227.7534475-457-224896022429438/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:29 np0005466031 python3.9[57738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:30 np0005466031 python3.9[57861]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405229.1219697-501-139315680010957/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:30 np0005466031 python3.9[58013]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:40:30 np0005466031 systemd[1]: Reloading.
Oct  2 07:40:31 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:40:31 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:40:31 np0005466031 systemd[1]: Reloading.
Oct  2 07:40:31 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:40:31 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:40:31 np0005466031 systemd[1]: Starting Create netns directory...
Oct  2 07:40:31 np0005466031 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:40:31 np0005466031 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:40:31 np0005466031 systemd[1]: Finished Create netns directory.
Oct  2 07:40:32 np0005466031 python3.9[58239]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:40:32 np0005466031 network[58256]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:40:32 np0005466031 network[58257]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:40:32 np0005466031 network[58258]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:40:36 np0005466031 python3.9[58522]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:40:36 np0005466031 systemd[1]: Reloading.
Oct  2 07:40:36 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:40:36 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:40:37 np0005466031 systemd[1]: Stopping IPv4 firewall with iptables...
Oct  2 07:40:37 np0005466031 iptables.init[58561]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct  2 07:40:37 np0005466031 iptables.init[58561]: iptables: Flushing firewall rules: [  OK  ]
Oct  2 07:40:37 np0005466031 systemd[1]: iptables.service: Deactivated successfully.
Oct  2 07:40:37 np0005466031 systemd[1]: Stopped IPv4 firewall with iptables.
Oct  2 07:40:37 np0005466031 python3.9[58757]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:40:39 np0005466031 python3.9[58911]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:40:39 np0005466031 systemd[1]: Reloading.
Oct  2 07:40:39 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:40:39 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:40:39 np0005466031 systemd[1]: Starting Netfilter Tables...
Oct  2 07:40:39 np0005466031 systemd[1]: Finished Netfilter Tables.
Oct  2 07:40:40 np0005466031 python3.9[59103]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:40:41 np0005466031 python3.9[59256]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:41 np0005466031 python3.9[59381]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405240.7897015-709-50384126830452/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:42 np0005466031 python3.9[59532]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:41:08 np0005466031 systemd[1]: session-15.scope: Deactivated successfully.
Oct  2 07:41:08 np0005466031 systemd[1]: session-15.scope: Consumed 18.493s CPU time.
Oct  2 07:41:08 np0005466031 systemd-logind[786]: Session 15 logged out. Waiting for processes to exit.
Oct  2 07:41:08 np0005466031 systemd-logind[786]: Removed session 15.
Oct  2 07:41:21 np0005466031 systemd-logind[786]: New session 16 of user zuul.
Oct  2 07:41:21 np0005466031 systemd[1]: Started Session 16 of User zuul.
Oct  2 07:41:22 np0005466031 python3.9[59725]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:41:23 np0005466031 python3.9[59881]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:24 np0005466031 python3.9[60056]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:24 np0005466031 python3.9[60134]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.0loerm0a recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:25 np0005466031 python3.9[60286]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:26 np0005466031 python3.9[60364]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.p0x7z8rc recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:27 np0005466031 python3.9[60516]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:28 np0005466031 python3.9[60668]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:29 np0005466031 python3.9[60746]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:30 np0005466031 python3.9[60898]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:30 np0005466031 python3.9[60976]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:31 np0005466031 python3.9[61128]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:32 np0005466031 python3.9[61280]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:32 np0005466031 python3.9[61358]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:33 np0005466031 python3.9[61510]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:33 np0005466031 python3.9[61588]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:34 np0005466031 python3.9[61740]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:41:34 np0005466031 systemd[1]: Reloading.
Oct  2 07:41:34 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:41:34 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:41:35 np0005466031 python3.9[61929]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:36 np0005466031 python3.9[62007]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:37 np0005466031 python3.9[62159]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:37 np0005466031 python3.9[62237]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:38 np0005466031 python3.9[62389]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:41:38 np0005466031 systemd[1]: Reloading.
Oct  2 07:41:38 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:41:38 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:41:38 np0005466031 systemd[1]: Starting Create netns directory...
Oct  2 07:41:38 np0005466031 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:41:38 np0005466031 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:41:38 np0005466031 systemd[1]: Finished Create netns directory.
Oct  2 07:41:39 np0005466031 python3.9[62580]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:41:39 np0005466031 network[62597]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:41:39 np0005466031 network[62598]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:41:39 np0005466031 network[62599]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:41:44 np0005466031 python3.9[62862]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:45 np0005466031 python3.9[62940]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:45 np0005466031 python3.9[63092]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:46 np0005466031 python3.9[63244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:47 np0005466031 python3.9[63367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405305.997445-616-22091964366248/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:48 np0005466031 python3.9[63519]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  2 07:41:48 np0005466031 systemd[1]: Starting Time & Date Service...
Oct  2 07:41:48 np0005466031 systemd[1]: Started Time & Date Service.
Oct  2 07:41:49 np0005466031 python3.9[63675]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:49 np0005466031 python3.9[63827]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:50 np0005466031 python3.9[63950]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405309.3587983-721-102528905297888/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:51 np0005466031 python3.9[64102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:51 np0005466031 python3.9[64225]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405310.8386815-766-67407613916672/.source.yaml _original_basename=.62p0f733 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:52 np0005466031 python3.9[64377]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:53 np0005466031 python3.9[64500]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405312.1789007-811-166926056933405/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:54 np0005466031 python3.9[64652]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:41:54 np0005466031 python3.9[64805]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:41:55 np0005466031 python3[64958]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 07:41:56 np0005466031 python3.9[65110]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:56 np0005466031 python3.9[65233]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405315.7821996-928-63117241000166/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:57 np0005466031 python3.9[65385]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:58 np0005466031 python3.9[65508]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405317.028124-973-15579936141673/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:58 np0005466031 python3.9[65660]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:59 np0005466031 python3.9[65783]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405318.2770276-1018-200203680917562/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:59 np0005466031 python3.9[65935]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:00 np0005466031 python3.9[66058]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405319.485804-1063-139357108385775/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:01 np0005466031 python3.9[66210]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:01 np0005466031 python3.9[66333]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405320.7447803-1108-122197507827305/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:02 np0005466031 python3.9[66485]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:03 np0005466031 python3.9[66637]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:04 np0005466031 python3.9[66796]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:05 np0005466031 python3.9[66949]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:05 np0005466031 python3.9[67101]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:06 np0005466031 python3.9[67253]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  2 07:42:07 np0005466031 python3.9[67406]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  2 07:42:07 np0005466031 systemd[1]: session-16.scope: Deactivated successfully.
Oct  2 07:42:07 np0005466031 systemd[1]: session-16.scope: Consumed 29.467s CPU time.
Oct  2 07:42:07 np0005466031 systemd-logind[786]: Session 16 logged out. Waiting for processes to exit.
Oct  2 07:42:07 np0005466031 systemd-logind[786]: Removed session 16.
Oct  2 07:42:10 np0005466031 chronyd[54317]: Selected source 216.128.178.20 (pool.ntp.org)
Oct  2 07:42:12 np0005466031 systemd-logind[786]: New session 17 of user zuul.
Oct  2 07:42:12 np0005466031 systemd[1]: Started Session 17 of User zuul.
Oct  2 07:42:13 np0005466031 python3.9[67587]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  2 07:42:14 np0005466031 python3.9[67739]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:42:15 np0005466031 python3.9[67891]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:42:16 np0005466031 python3.9[68043]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfikJfuUE7Xs2lF9Qh9l0WUdl+Tct7ff0gJQZVpPwLHlAwFnY1lIlqF2IQ3J7LtFcsjYF5RcofKcj+ARkMTobXFoygI/H3Yl5EGDehZbaNONLkDXT20bcYtosTZBjJTZWMJaDGUobRPnKWEbt7P8G/CVwj+LKBYxYcl65Bs0m8Ii2JZObV/41E/44oNBbTT6VnLqrH1BjRfNgToFyoYZToIU6gJw+lDGgt/afrHnDeR8fo6fgHkoHZKHxctrFraqhPOEX+SW/RD5ra4/WxZTBDAcOelVyZhpZ0V6HTQuS0IuD/sy9RD9W59TrF0oFH8kP6H1F3EbhrMfM/wkGJqxcBEMPIlGjUgoOCOY4tgCsAuyKcqelTUJIoL5uTuk06fd+1+B0t8j//vY7eWDCGwHAYrOCbL954GsjqhEOd/SL8vW6cT4Eh+DaWzKpvnl+bEN+G7wkI9etJ4B8NugtDyE25Ikfn9nsBLIcPcuepnlcBQkTN4sC+w0I1AEm3Uo8MFOM=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPxo/cGygmGP55Hjd3RI5yFpLqrtrtdd2PGw/FbMnxJJ#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLbUwjRfNWPOWmPM9kXykw3bNz7sYSt7DYbalJhzh+E3yGMACUO+HxFuSQ4lHBBXquZltdOcmR202cRP+4s05oI=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDb8D90laelhslbtmfz72Mp6Q7iCMu+KiPRuBFH59nBtb1LmjrIFjvU1qZnJ+wipHW+bRcdDzNWNM8KJ4IImBqFxbrg17RhHeunE84nnR8leX3OYiMZumpygvXYCykppXcKbe6pfxYUtyTc8Tz3bNoayi7uGoKgN/iaUeADLuyJUDDVyusj2q7uIj7gZ6PbtorR5cUUn0wBZTo3Jx84NmdiJr/xDGrtfawsV6ATz+Rpx3vzz4EE4dq4wN3eTUJiPCpc4jbTvHpp0GdJTK1BkZ4IANgw3a+loOO2MHq2JgMRjKJrH7sqrw7s9XgzHSh/ufOmEKAtgw75tWExEcy/05QGGbR2jnIKde4vVIS5JheT1z4gYASjKEEidjisDxig5nigPddxe3nSxKRQczKXPV+KUOB14AljRbnyqgbw4Dv9wtnkFL/QLMXFA0/NaOAZxhI+fOoAcg+No2ZsB95IgQ49ay/LN011x9o1vfwVPfReOtkjpVxQB8oCXhA53BfrG3M=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAtzqd+HKKUdtdjsFK/O61rbaIfH2/ANnbsFBvd1WLXA#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOyw0g2rIQxTWmEkqBGUUvYwuDopCg/ppyBGUh5LatbQKlwO7AkEzPUhEeFZv2/qzobLbOH4kVCTAQVjiQm//WM=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2+zJSXp4XBwGccVvswqz0/27MxV0mWhHJ9EKngmPOQ2Et2f+QArNFJsEaUEJankaYSrISVt8m0QscyZhZUgrxp07g0OV9pVQ2pkqF/CSC7RnN96odOHOeQjRmSOj9vF8Q3EeyRZ7MS1CWH6TT+jYOD77TFol6cQhi7o5bzgAdL6yB/ili/PG3bBxtbYtNwSqCSpiGaN8z8j/REszkW2GM6wvDGXk9NgNfBZT4goP4O3qz/wVeMM/OQFGQa/34tMNX3QEE/XOdAUIRXXLw0vmVj7oRDzGVMc12TDalGOqphS+LkUS4PB+ns/IaplTUzc8zlwhycQQPxnzEcm+z3QP8Bo+iBGw+aKpc5UTMMtZocXrjHCv0Q6irXug6N6b7aaANiHMmveZua/Gjp6Ef//Q/+thKtkvcvvhUDZknHLDrHGT5QbVQYjN23MyFdWCu6MgpBw8NNyeI5sO605lOrxk2oXwX19ah7Qt7iAU7KRijLzQBjnMjNb6bcSOCFXVzpl0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxmfzZIbNhcux/tJpdvzaDW/iX/PRMqNcEGpeyKOTEV#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBANBfiBul8lZFa5T9kjEYk719DZo4CtW2bTDn+SPcbu/2U71Ms3Qc1tvqiM9B/ciT9t/uzxk25klpGuFqieJFkk=#012 create=True mode=0644 path=/tmp/ansible.hd5237em state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:16 np0005466031 python3.9[68195]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.hd5237em' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:17 np0005466031 python3.9[68349]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.hd5237em state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:18 np0005466031 systemd[1]: session-17.scope: Deactivated successfully.
Oct  2 07:42:18 np0005466031 systemd[1]: session-17.scope: Consumed 3.129s CPU time.
Oct  2 07:42:18 np0005466031 systemd-logind[786]: Session 17 logged out. Waiting for processes to exit.
Oct  2 07:42:18 np0005466031 systemd-logind[786]: Removed session 17.
Oct  2 07:42:18 np0005466031 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 07:42:23 np0005466031 systemd-logind[786]: New session 18 of user zuul.
Oct  2 07:42:23 np0005466031 systemd[1]: Started Session 18 of User zuul.
Oct  2 07:42:24 np0005466031 python3.9[68530]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:42:25 np0005466031 python3.9[68686]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  2 07:42:26 np0005466031 python3.9[68840]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:42:27 np0005466031 python3.9[68993]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:28 np0005466031 python3.9[69146]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:42:28 np0005466031 python3.9[69300]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:29 np0005466031 python3.9[69455]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:30 np0005466031 systemd[1]: session-18.scope: Deactivated successfully.
Oct  2 07:42:30 np0005466031 systemd[1]: session-18.scope: Consumed 4.365s CPU time.
Oct  2 07:42:30 np0005466031 systemd-logind[786]: Session 18 logged out. Waiting for processes to exit.
Oct  2 07:42:30 np0005466031 systemd-logind[786]: Removed session 18.
Oct  2 07:42:35 np0005466031 systemd-logind[786]: New session 19 of user zuul.
Oct  2 07:42:35 np0005466031 systemd[1]: Started Session 19 of User zuul.
Oct  2 07:42:36 np0005466031 python3.9[69633]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:42:37 np0005466031 python3.9[69789]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:42:38 np0005466031 python3.9[69873]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 07:42:40 np0005466031 python3.9[70024]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:42 np0005466031 python3.9[70175]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 07:42:42 np0005466031 python3.9[70325]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:42:42 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:42:42 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:42:43 np0005466031 python3.9[70476]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:42:43 np0005466031 systemd[1]: session-19.scope: Deactivated successfully.
Oct  2 07:42:43 np0005466031 systemd[1]: session-19.scope: Consumed 5.779s CPU time.
Oct  2 07:42:43 np0005466031 systemd-logind[786]: Session 19 logged out. Waiting for processes to exit.
Oct  2 07:42:43 np0005466031 systemd-logind[786]: Removed session 19.
Oct  2 07:42:51 np0005466031 systemd-logind[786]: New session 20 of user zuul.
Oct  2 07:42:51 np0005466031 systemd[1]: Started Session 20 of User zuul.
Oct  2 07:42:57 np0005466031 python3[71242]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:42:59 np0005466031 python3[71337]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  2 07:43:00 np0005466031 python3[71364]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:43:01 np0005466031 python3[71390]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:43:01 np0005466031 kernel: loop: module loaded
Oct  2 07:43:01 np0005466031 kernel: loop3: detected capacity change from 0 to 14680064
Oct  2 07:43:01 np0005466031 python3[71425]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:43:01 np0005466031 lvm[71428]: PV /dev/loop3 not used.
Oct  2 07:43:01 np0005466031 lvm[71430]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 07:43:01 np0005466031 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct  2 07:43:01 np0005466031 lvm[71433]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct  2 07:43:01 np0005466031 lvm[71440]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 07:43:01 np0005466031 lvm[71440]: VG ceph_vg0 finished
Oct  2 07:43:01 np0005466031 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct  2 07:43:02 np0005466031 python3[71518]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:43:02 np0005466031 python3[71591]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405382.2562675-33447-60340684086987/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:43:03 np0005466031 python3[71641]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:43:03 np0005466031 systemd[1]: Reloading.
Oct  2 07:43:03 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:43:03 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:43:04 np0005466031 systemd[1]: Starting Ceph OSD losetup...
Oct  2 07:43:04 np0005466031 bash[71682]: /dev/loop3: [64513]:4349021 (/var/lib/ceph-osd-0.img)
Oct  2 07:43:04 np0005466031 systemd[1]: Finished Ceph OSD losetup.
Oct  2 07:43:04 np0005466031 lvm[71684]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 07:43:04 np0005466031 lvm[71684]: VG ceph_vg0 finished
Oct  2 07:43:06 np0005466031 python3[71708]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:43:47 np0005466031 systemd[1]: packagekit.service: Deactivated successfully.
Oct  2 07:45:01 np0005466031 systemd[1]: Created slice User Slice of UID 42477.
Oct  2 07:45:01 np0005466031 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  2 07:45:01 np0005466031 systemd-logind[786]: New session 21 of user ceph-admin.
Oct  2 07:45:01 np0005466031 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  2 07:45:01 np0005466031 systemd[1]: Starting User Manager for UID 42477...
Oct  2 07:45:01 np0005466031 systemd[71759]: Queued start job for default target Main User Target.
Oct  2 07:45:01 np0005466031 systemd[71759]: Created slice User Application Slice.
Oct  2 07:45:01 np0005466031 systemd[71759]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 07:45:01 np0005466031 systemd[71759]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 07:45:01 np0005466031 systemd[71759]: Reached target Paths.
Oct  2 07:45:01 np0005466031 systemd[71759]: Reached target Timers.
Oct  2 07:45:01 np0005466031 systemd[71759]: Starting D-Bus User Message Bus Socket...
Oct  2 07:45:01 np0005466031 systemd[71759]: Starting Create User's Volatile Files and Directories...
Oct  2 07:45:01 np0005466031 systemd[71759]: Listening on D-Bus User Message Bus Socket.
Oct  2 07:45:01 np0005466031 systemd[71759]: Reached target Sockets.
Oct  2 07:45:01 np0005466031 systemd[71759]: Finished Create User's Volatile Files and Directories.
Oct  2 07:45:01 np0005466031 systemd[71759]: Reached target Basic System.
Oct  2 07:45:01 np0005466031 systemd[1]: Started User Manager for UID 42477.
Oct  2 07:45:01 np0005466031 systemd[71759]: Reached target Main User Target.
Oct  2 07:45:01 np0005466031 systemd[71759]: Startup finished in 119ms.
Oct  2 07:45:01 np0005466031 systemd[1]: Started Session 21 of User ceph-admin.
Oct  2 07:45:01 np0005466031 systemd-logind[786]: New session 23 of user ceph-admin.
Oct  2 07:45:01 np0005466031 systemd[1]: Started Session 23 of User ceph-admin.
Oct  2 07:45:02 np0005466031 systemd-logind[786]: New session 24 of user ceph-admin.
Oct  2 07:45:02 np0005466031 systemd[1]: Started Session 24 of User ceph-admin.
Oct  2 07:45:02 np0005466031 systemd-logind[786]: New session 25 of user ceph-admin.
Oct  2 07:45:02 np0005466031 systemd[1]: Started Session 25 of User ceph-admin.
Oct  2 07:45:03 np0005466031 systemd-logind[786]: New session 26 of user ceph-admin.
Oct  2 07:45:03 np0005466031 systemd[1]: Started Session 26 of User ceph-admin.
Oct  2 07:45:03 np0005466031 systemd-logind[786]: New session 27 of user ceph-admin.
Oct  2 07:45:03 np0005466031 systemd[1]: Started Session 27 of User ceph-admin.
Oct  2 07:45:03 np0005466031 systemd-logind[786]: New session 28 of user ceph-admin.
Oct  2 07:45:03 np0005466031 systemd[1]: Started Session 28 of User ceph-admin.
Oct  2 07:45:04 np0005466031 systemd-logind[786]: New session 29 of user ceph-admin.
Oct  2 07:45:04 np0005466031 systemd[1]: Started Session 29 of User ceph-admin.
Oct  2 07:45:04 np0005466031 systemd-logind[786]: New session 30 of user ceph-admin.
Oct  2 07:45:04 np0005466031 systemd[1]: Started Session 30 of User ceph-admin.
Oct  2 07:45:05 np0005466031 systemd-logind[786]: New session 31 of user ceph-admin.
Oct  2 07:45:05 np0005466031 systemd[1]: Started Session 31 of User ceph-admin.
Oct  2 07:45:05 np0005466031 systemd-logind[786]: New session 32 of user ceph-admin.
Oct  2 07:45:05 np0005466031 systemd[1]: Started Session 32 of User ceph-admin.
Oct  2 07:45:05 np0005466031 systemd-logind[786]: New session 33 of user ceph-admin.
Oct  2 07:45:05 np0005466031 systemd[1]: Started Session 33 of User ceph-admin.
Oct  2 07:45:06 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:41 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:41 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:41 np0005466031 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 72781 (sysctl)
Oct  2 07:45:42 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:42 np0005466031 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct  2 07:45:42 np0005466031 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct  2 07:45:42 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:43 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:43 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:44 np0005466031 systemd[1]: var-lib-containers-storage-overlay-compat3658358102-merged.mount: Deactivated successfully.
Oct  2 07:45:45 np0005466031 systemd[1]: var-lib-containers-storage-overlay-compat3658358102-lower\x2dmapped.mount: Deactivated successfully.
Oct  2 07:46:00 np0005466031 podman[73059]: 2025-10-02 11:46:00.256363577 +0000 UTC m=+17.050140213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:00 np0005466031 podman[73059]: 2025-10-02 11:46:00.28870816 +0000 UTC m=+17.082484776 container create 5057ca95e38c915e839418b9c39fcce368376d257103c861b4e46184be90114f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mestorf, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 07:46:00 np0005466031 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct  2 07:46:00 np0005466031 systemd[1]: Started libpod-conmon-5057ca95e38c915e839418b9c39fcce368376d257103c861b4e46184be90114f.scope.
Oct  2 07:46:00 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:00 np0005466031 podman[73059]: 2025-10-02 11:46:00.425479383 +0000 UTC m=+17.219255999 container init 5057ca95e38c915e839418b9c39fcce368376d257103c861b4e46184be90114f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mestorf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 07:46:00 np0005466031 podman[73059]: 2025-10-02 11:46:00.437621391 +0000 UTC m=+17.231398007 container start 5057ca95e38c915e839418b9c39fcce368376d257103c861b4e46184be90114f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:46:00 np0005466031 silly_mestorf[73121]: 167 167
Oct  2 07:46:00 np0005466031 systemd[1]: libpod-5057ca95e38c915e839418b9c39fcce368376d257103c861b4e46184be90114f.scope: Deactivated successfully.
Oct  2 07:46:00 np0005466031 podman[73059]: 2025-10-02 11:46:00.448109114 +0000 UTC m=+17.241885730 container attach 5057ca95e38c915e839418b9c39fcce368376d257103c861b4e46184be90114f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mestorf, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 07:46:00 np0005466031 podman[73059]: 2025-10-02 11:46:00.448574497 +0000 UTC m=+17.242351103 container died 5057ca95e38c915e839418b9c39fcce368376d257103c861b4e46184be90114f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mestorf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:46:00 np0005466031 systemd[1]: var-lib-containers-storage-overlay-30f980aa1b0d537af37111627812c98282d34d4b602befd299916c274bbffb88-merged.mount: Deactivated successfully.
Oct  2 07:46:00 np0005466031 podman[73059]: 2025-10-02 11:46:00.495871544 +0000 UTC m=+17.289648160 container remove 5057ca95e38c915e839418b9c39fcce368376d257103c861b4e46184be90114f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:46:00 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:46:00 np0005466031 systemd[1]: libpod-conmon-5057ca95e38c915e839418b9c39fcce368376d257103c861b4e46184be90114f.scope: Deactivated successfully.
Oct  2 07:46:00 np0005466031 podman[73147]: 2025-10-02 11:46:00.665665827 +0000 UTC m=+0.044101501 container create e51e5092cb1fe63b6942cf14e4ce97341ea7e9ceb5b64d566b4b04647b98bfdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:46:00 np0005466031 systemd[1]: Started libpod-conmon-e51e5092cb1fe63b6942cf14e4ce97341ea7e9ceb5b64d566b4b04647b98bfdd.scope.
Oct  2 07:46:00 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:00 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e18099cd018352312464e3dd2b18de583c86f604acf8ff01d7a15e437ba447b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:00 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e18099cd018352312464e3dd2b18de583c86f604acf8ff01d7a15e437ba447b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:00 np0005466031 podman[73147]: 2025-10-02 11:46:00.64280428 +0000 UTC m=+0.021239974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:00 np0005466031 podman[73147]: 2025-10-02 11:46:00.789609394 +0000 UTC m=+0.168045158 container init e51e5092cb1fe63b6942cf14e4ce97341ea7e9ceb5b64d566b4b04647b98bfdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:46:00 np0005466031 podman[73147]: 2025-10-02 11:46:00.802758269 +0000 UTC m=+0.181193973 container start e51e5092cb1fe63b6942cf14e4ce97341ea7e9ceb5b64d566b4b04647b98bfdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 07:46:00 np0005466031 podman[73147]: 2025-10-02 11:46:00.818239807 +0000 UTC m=+0.196675481 container attach e51e5092cb1fe63b6942cf14e4ce97341ea7e9ceb5b64d566b4b04647b98bfdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]: [
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:    {
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:        "available": false,
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:        "ceph_device": false,
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:        "lsm_data": {},
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:        "lvs": [],
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:        "path": "/dev/sr0",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:        "rejected_reasons": [
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "Has a FileSystem",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "Insufficient space (<5GB)"
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:        ],
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:        "sys_api": {
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "actuators": null,
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "device_nodes": "sr0",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "devname": "sr0",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "human_readable_size": "482.00 KB",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "id_bus": "ata",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "model": "QEMU DVD-ROM",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "nr_requests": "2",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "parent": "/dev/sr0",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "partitions": {},
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "path": "/dev/sr0",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "removable": "1",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "rev": "2.5+",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "ro": "0",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "rotational": "0",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "sas_address": "",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "sas_device_handle": "",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "scheduler_mode": "mq-deadline",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "sectors": 0,
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "sectorsize": "2048",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "size": 493568.0,
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "support_discard": "2048",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "type": "disk",
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:            "vendor": "QEMU"
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:        }
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]:    }
Oct  2 07:46:01 np0005466031 pedantic_chaplygin[73164]: ]
Oct  2 07:46:01 np0005466031 systemd[1]: libpod-e51e5092cb1fe63b6942cf14e4ce97341ea7e9ceb5b64d566b4b04647b98bfdd.scope: Deactivated successfully.
Oct  2 07:46:01 np0005466031 systemd[1]: libpod-e51e5092cb1fe63b6942cf14e4ce97341ea7e9ceb5b64d566b4b04647b98bfdd.scope: Consumed 1.064s CPU time.
Oct  2 07:46:01 np0005466031 podman[73147]: 2025-10-02 11:46:01.874501056 +0000 UTC m=+1.252936730 container died e51e5092cb1fe63b6942cf14e4ce97341ea7e9ceb5b64d566b4b04647b98bfdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 07:46:01 np0005466031 systemd[1]: var-lib-containers-storage-overlay-6e18099cd018352312464e3dd2b18de583c86f604acf8ff01d7a15e437ba447b-merged.mount: Deactivated successfully.
Oct  2 07:46:01 np0005466031 podman[73147]: 2025-10-02 11:46:01.938689749 +0000 UTC m=+1.317125423 container remove e51e5092cb1fe63b6942cf14e4ce97341ea7e9ceb5b64d566b4b04647b98bfdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:46:01 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:46:01 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:46:01 np0005466031 systemd[1]: libpod-conmon-e51e5092cb1fe63b6942cf14e4ce97341ea7e9ceb5b64d566b4b04647b98bfdd.scope: Deactivated successfully.
Oct  2 07:46:06 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:46:06 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:46:06 np0005466031 podman[75990]: 2025-10-02 11:46:06.385263727 +0000 UTC m=+0.047558375 container create 2753fcca14d1fb9fb43eb516b101bc350dcd027aa3648d8dae3ba8bf6c21b9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 07:46:06 np0005466031 systemd[1]: Started libpod-conmon-2753fcca14d1fb9fb43eb516b101bc350dcd027aa3648d8dae3ba8bf6c21b9e5.scope.
Oct  2 07:46:06 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:06 np0005466031 podman[75990]: 2025-10-02 11:46:06.455869833 +0000 UTC m=+0.118164501 container init 2753fcca14d1fb9fb43eb516b101bc350dcd027aa3648d8dae3ba8bf6c21b9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sammet, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 07:46:06 np0005466031 podman[75990]: 2025-10-02 11:46:06.364883067 +0000 UTC m=+0.027177765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:06 np0005466031 podman[75990]: 2025-10-02 11:46:06.461355251 +0000 UTC m=+0.123649899 container start 2753fcca14d1fb9fb43eb516b101bc350dcd027aa3648d8dae3ba8bf6c21b9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sammet, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:46:06 np0005466031 podman[75990]: 2025-10-02 11:46:06.464545077 +0000 UTC m=+0.126839725 container attach 2753fcca14d1fb9fb43eb516b101bc350dcd027aa3648d8dae3ba8bf6c21b9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sammet, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:46:06 np0005466031 youthful_sammet[76006]: 167 167
Oct  2 07:46:06 np0005466031 systemd[1]: libpod-2753fcca14d1fb9fb43eb516b101bc350dcd027aa3648d8dae3ba8bf6c21b9e5.scope: Deactivated successfully.
Oct  2 07:46:06 np0005466031 podman[75990]: 2025-10-02 11:46:06.466174241 +0000 UTC m=+0.128468889 container died 2753fcca14d1fb9fb43eb516b101bc350dcd027aa3648d8dae3ba8bf6c21b9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sammet, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:46:06 np0005466031 podman[75990]: 2025-10-02 11:46:06.513051857 +0000 UTC m=+0.175346505 container remove 2753fcca14d1fb9fb43eb516b101bc350dcd027aa3648d8dae3ba8bf6c21b9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sammet, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:46:06 np0005466031 systemd[1]: libpod-conmon-2753fcca14d1fb9fb43eb516b101bc350dcd027aa3648d8dae3ba8bf6c21b9e5.scope: Deactivated successfully.
Oct  2 07:46:06 np0005466031 podman[76028]: 2025-10-02 11:46:06.598958257 +0000 UTC m=+0.054642277 container create 58149eb9db289deae109568e76b8d09becabf50f5c5ea88686379e074d95f633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dewdney, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 07:46:06 np0005466031 systemd[1]: Started libpod-conmon-58149eb9db289deae109568e76b8d09becabf50f5c5ea88686379e074d95f633.scope.
Oct  2 07:46:06 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:06 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ea60aa392a525361928956b072c2828d5801b3d7518e2f86bb909575bd2531/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:06 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ea60aa392a525361928956b072c2828d5801b3d7518e2f86bb909575bd2531/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:06 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ea60aa392a525361928956b072c2828d5801b3d7518e2f86bb909575bd2531/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:06 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ea60aa392a525361928956b072c2828d5801b3d7518e2f86bb909575bd2531/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:06 np0005466031 podman[76028]: 2025-10-02 11:46:06.572965755 +0000 UTC m=+0.028649825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:06 np0005466031 podman[76028]: 2025-10-02 11:46:06.670148569 +0000 UTC m=+0.125832579 container init 58149eb9db289deae109568e76b8d09becabf50f5c5ea88686379e074d95f633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:46:06 np0005466031 podman[76028]: 2025-10-02 11:46:06.675973896 +0000 UTC m=+0.131657876 container start 58149eb9db289deae109568e76b8d09becabf50f5c5ea88686379e074d95f633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dewdney, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:46:06 np0005466031 podman[76028]: 2025-10-02 11:46:06.680107998 +0000 UTC m=+0.135791978 container attach 58149eb9db289deae109568e76b8d09becabf50f5c5ea88686379e074d95f633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:46:06 np0005466031 systemd[1]: libpod-58149eb9db289deae109568e76b8d09becabf50f5c5ea88686379e074d95f633.scope: Deactivated successfully.
Oct  2 07:46:06 np0005466031 podman[76028]: 2025-10-02 11:46:06.801446424 +0000 UTC m=+0.257130404 container died 58149eb9db289deae109568e76b8d09becabf50f5c5ea88686379e074d95f633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:46:06 np0005466031 podman[76028]: 2025-10-02 11:46:06.84130916 +0000 UTC m=+0.296993140 container remove 58149eb9db289deae109568e76b8d09becabf50f5c5ea88686379e074d95f633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dewdney, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:46:06 np0005466031 systemd[1]: libpod-conmon-58149eb9db289deae109568e76b8d09becabf50f5c5ea88686379e074d95f633.scope: Deactivated successfully.
Oct  2 07:46:06 np0005466031 systemd[1]: Reloading.
Oct  2 07:46:06 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:07 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:07 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:46:07 np0005466031 systemd[1]: Reloading.
Oct  2 07:46:07 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:07 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:07 np0005466031 systemd[1]: Reached target All Ceph clusters and services.
Oct  2 07:46:07 np0005466031 systemd[1]: Reloading.
Oct  2 07:46:07 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:07 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:07 np0005466031 systemd[1]: Reached target Ceph cluster 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:46:07 np0005466031 systemd[1]: Reloading.
Oct  2 07:46:07 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:07 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:07 np0005466031 systemd[1]: Reloading.
Oct  2 07:46:08 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:08 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:08 np0005466031 systemd[1]: Created slice Slice /system/ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:46:08 np0005466031 systemd[1]: Reached target System Time Set.
Oct  2 07:46:08 np0005466031 systemd[1]: Reached target System Time Synchronized.
Oct  2 07:46:08 np0005466031 systemd[1]: Starting Ceph mon.compute-2 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:46:08 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:46:08 np0005466031 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:46:08 np0005466031 podman[76320]: 2025-10-02 11:46:08.452868232 +0000 UTC m=+0.043858486 container create b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:46:08 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e733fbb59dad23e20f6feb1e9f4662559acc13365fa95bd05696a30c4dd1fc45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:08 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e733fbb59dad23e20f6feb1e9f4662559acc13365fa95bd05696a30c4dd1fc45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:08 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e733fbb59dad23e20f6feb1e9f4662559acc13365fa95bd05696a30c4dd1fc45/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:08 np0005466031 podman[76320]: 2025-10-02 11:46:08.515592605 +0000 UTC m=+0.106582869 container init b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 07:46:08 np0005466031 podman[76320]: 2025-10-02 11:46:08.520617871 +0000 UTC m=+0.111608115 container start b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 07:46:08 np0005466031 bash[76320]: b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95
Oct  2 07:46:08 np0005466031 podman[76320]: 2025-10-02 11:46:08.435454071 +0000 UTC m=+0.026444365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:08 np0005466031 systemd[1]: Started Ceph mon.compute-2 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: pidfile_write: ignore empty --pid-file
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: load: jerasure load: lrc 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: RocksDB version: 7.9.2
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Git sha 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: DB SUMMARY
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: DB Session ID:  KJCGTWCK9W49B5CVXGTD
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: CURRENT file:  CURRENT
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-2/store.db dir, Total Num: 0, files: 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-2/store.db: 000004.log size: 511 ; 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                         Options.error_if_exists: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                       Options.create_if_missing: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                                     Options.env: 0x5570d4155c40
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                                Options.info_log: 0x5570d4fb4fc0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                              Options.statistics: (nil)
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                               Options.use_fsync: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                              Options.db_log_dir: 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                                 Options.wal_dir: 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                    Options.write_buffer_manager: 0x5570d4fc4b40
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                  Options.unordered_write: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                               Options.row_cache: None
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                              Options.wal_filter: None
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.two_write_queues: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.wal_compression: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.atomic_flush: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.max_background_jobs: 2
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.max_background_compactions: -1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.max_subcompactions: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.max_total_wal_size: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                          Options.max_open_files: -1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:       Options.compaction_readahead_size: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Compression algorithms supported:
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: #011kZSTD supported: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: #011kXpressCompression supported: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: #011kBZip2Compression supported: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: #011kLZ4Compression supported: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: #011kZlibCompression supported: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: #011kSnappyCompression supported: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:           Options.merge_operator: 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5570d4fb4c00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5570d4fad1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:        Options.write_buffer_size: 33554432
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:  Options.max_write_buffer_number: 2
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:          Options.compression: NoCompression
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 93a4d4e4-8268-411e-81f1-7a8fce5e679b
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405568559391, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405568562092, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405568562202, "job": 1, "event": "recovery_finished"}
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5570d4fd6e00
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: DB pointer 0x5570d50de000
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2 does not exist in monmap, will attempt to join an existing cluster
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5570d4fad1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: using public_addr v2:192.168.122.102:0/0 -> [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: starting mon.compute-2 rank -1 at public addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] at bind addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-2 fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(???) e0 preinit fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).mds e1 new map
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Updating compute-1:/etc/ceph/ceph.conf
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Deploying daemon crash.compute-1 on compute-1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.101:0/2994638250' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6e4de194-9f54-490b-9be5-cb1e4c11649b"}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.101:0/2994638250' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6e4de194-9f54-490b-9be5-cb1e4c11649b"}]': finished
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1136146219' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c"}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1136146219' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c"}]': finished
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Deploying daemon osd.1 on compute-0
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Deploying daemon osd.0 on compute-1
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: OSD bench result of 9008.887835 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012] boot
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Adjusting osd_memory_target on compute-1 to  5247M
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: OSD bench result of 7039.207197 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Adjusting osd_memory_target on compute-0 to 127.8M
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Unable to set osd_memory_target on compute-0 to 134065766: error parsing value: Value '134065766' is below minimum 939524096
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485] boot
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Updating compute-2:/etc/ceph/ceph.conf
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Deploying daemon mon.compute-2 on compute-2
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: Cluster is now healthy
Oct  2 07:46:08 np0005466031 ceph-mon[76340]: mon.compute-2@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Oct  2 07:46:10 np0005466031 ceph-mon[76340]: mon.compute-2@-1(probing) e1  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  2 07:46:10 np0005466031 ceph-mon[76340]: mon.compute-2@-1(probing) e1  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  2 07:46:10 np0005466031 ceph-mon[76340]: mon.compute-2@-1(probing) e2  my rank is now 1 (was -1)
Oct  2 07:46:10 np0005466031 ceph-mon[76340]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Oct  2 07:46:10 np0005466031 ceph-mon[76340]: paxos.1).electionLogic(0) init, first boot, initializing epoch at 1 
Oct  2 07:46:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:46:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  2 07:46:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:46:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct  2 07:46:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct  2 07:46:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:46:14 np0005466031 ceph-mon[76340]: mgrc update_daemon_metadata mon.compute-2 metadata {addrs=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2025-10-02T11:46:06.714692Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-2,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,os=Linux}
Oct  2 07:46:14 np0005466031 ceph-mon[76340]: Deploying daemon mon.compute-1 on compute-1
Oct  2 07:46:14 np0005466031 ceph-mon[76340]: mon.compute-0 calling monitor election
Oct  2 07:46:14 np0005466031 ceph-mon[76340]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct  2 07:46:14 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 07:46:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  2 07:46:14 np0005466031 ceph-mon[76340]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Oct  2 07:46:14 np0005466031 ceph-mon[76340]: paxos.1).electionLogic(10) init, last seen epoch 10
Oct  2 07:46:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:46:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:46:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:46:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kvxdhw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  2 07:46:19 np0005466031 ceph-mon[76340]: mon.compute-0 calling monitor election
Oct  2 07:46:19 np0005466031 ceph-mon[76340]: mon.compute-2 calling monitor election
Oct  2 07:46:19 np0005466031 ceph-mon[76340]: mon.compute-1 calling monitor election
Oct  2 07:46:19 np0005466031 ceph-mon[76340]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct  2 07:46:19 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 07:46:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kvxdhw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  2 07:46:20 np0005466031 podman[76519]: 2025-10-02 11:46:20.188686479 +0000 UTC m=+0.079096127 container create 99bee93aa6d4c70a909fcdd77f61c4260974ce865bdb48e5a4b124115e17f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 07:46:20 np0005466031 podman[76519]: 2025-10-02 11:46:20.130012685 +0000 UTC m=+0.020422353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:20 np0005466031 systemd[1]: Started libpod-conmon-99bee93aa6d4c70a909fcdd77f61c4260974ce865bdb48e5a4b124115e17f10e.scope.
Oct  2 07:46:20 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:20 np0005466031 podman[76519]: 2025-10-02 11:46:20.309575283 +0000 UTC m=+0.199984951 container init 99bee93aa6d4c70a909fcdd77f61c4260974ce865bdb48e5a4b124115e17f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hermann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:46:20 np0005466031 podman[76519]: 2025-10-02 11:46:20.3172311 +0000 UTC m=+0.207640748 container start 99bee93aa6d4c70a909fcdd77f61c4260974ce865bdb48e5a4b124115e17f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hermann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 07:46:20 np0005466031 podman[76519]: 2025-10-02 11:46:20.322751939 +0000 UTC m=+0.213161617 container attach 99bee93aa6d4c70a909fcdd77f61c4260974ce865bdb48e5a4b124115e17f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hermann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 07:46:20 np0005466031 eager_hermann[76535]: 167 167
Oct  2 07:46:20 np0005466031 systemd[1]: libpod-99bee93aa6d4c70a909fcdd77f61c4260974ce865bdb48e5a4b124115e17f10e.scope: Deactivated successfully.
Oct  2 07:46:20 np0005466031 podman[76519]: 2025-10-02 11:46:20.324933607 +0000 UTC m=+0.215343255 container died 99bee93aa6d4c70a909fcdd77f61c4260974ce865bdb48e5a4b124115e17f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hermann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:46:20 np0005466031 systemd[1]: var-lib-containers-storage-overlay-f6870bf3cddbbab5789ea9b2939d1d55bfe40eb8ebdce7ea0a697f9cb2052433-merged.mount: Deactivated successfully.
Oct  2 07:46:20 np0005466031 podman[76519]: 2025-10-02 11:46:20.422335697 +0000 UTC m=+0.312745345 container remove 99bee93aa6d4c70a909fcdd77f61c4260974ce865bdb48e5a4b124115e17f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:46:20 np0005466031 systemd[1]: libpod-conmon-99bee93aa6d4c70a909fcdd77f61c4260974ce865bdb48e5a4b124115e17f10e.scope: Deactivated successfully.
Oct  2 07:46:20 np0005466031 systemd[1]: Reloading.
Oct  2 07:46:20 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:20 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:20 np0005466031 ceph-mon[76340]: Deploying daemon mgr.compute-2.kvxdhw on compute-2
Oct  2 07:46:20 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:20 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/2292528460' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:46:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e13 e13: 2 total, 2 up, 2 in
Oct  2 07:46:20 np0005466031 systemd[1]: Reloading.
Oct  2 07:46:20 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:20 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:21 np0005466031 systemd[1]: Starting Ceph mgr.compute-2.kvxdhw for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:46:21 np0005466031 podman[76678]: 2025-10-02 11:46:21.30313778 +0000 UTC m=+0.045890801 container create 53e32e8f9cb9b36f45f434c30ef66be9d2e3027a2f55a4ba3bcf2eb308a7f790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 07:46:21 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8243720e1880f0fdbb68837005f8bed33a52e30a8bb72c968cfc3035c48f97f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:21 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8243720e1880f0fdbb68837005f8bed33a52e30a8bb72c968cfc3035c48f97f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:21 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8243720e1880f0fdbb68837005f8bed33a52e30a8bb72c968cfc3035c48f97f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:21 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8243720e1880f0fdbb68837005f8bed33a52e30a8bb72c968cfc3035c48f97f2/merged/var/lib/ceph/mgr/ceph-compute-2.kvxdhw supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:21 np0005466031 podman[76678]: 2025-10-02 11:46:21.369118251 +0000 UTC m=+0.111871272 container init 53e32e8f9cb9b36f45f434c30ef66be9d2e3027a2f55a4ba3bcf2eb308a7f790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 07:46:21 np0005466031 podman[76678]: 2025-10-02 11:46:21.281472975 +0000 UTC m=+0.024225996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:21 np0005466031 podman[76678]: 2025-10-02 11:46:21.37650268 +0000 UTC m=+0.119255681 container start 53e32e8f9cb9b36f45f434c30ef66be9d2e3027a2f55a4ba3bcf2eb308a7f790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 07:46:21 np0005466031 bash[76678]: 53e32e8f9cb9b36f45f434c30ef66be9d2e3027a2f55a4ba3bcf2eb308a7f790
Oct  2 07:46:21 np0005466031 systemd[1]: Started Ceph mgr.compute-2.kvxdhw for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:46:21 np0005466031 ceph-mgr[76697]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:46:21 np0005466031 ceph-mgr[76697]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  2 07:46:21 np0005466031 ceph-mgr[76697]: pidfile_write: ignore empty --pid-file
Oct  2 07:46:21 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'alerts'
Oct  2 07:46:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e14 e14: 2 total, 2 up, 2 in
Oct  2 07:46:21 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/2292528460' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:46:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wtokkj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  2 07:46:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wtokkj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  2 07:46:21 np0005466031 ceph-mgr[76697]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  2 07:46:21 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'balancer'
Oct  2 07:46:21 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:21.906+0000 7f18c002f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  2 07:46:22 np0005466031 ceph-mgr[76697]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  2 07:46:22 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:22.203+0000 7f18c002f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  2 07:46:22 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'cephadm'
Oct  2 07:46:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e15 e15: 2 total, 2 up, 2 in
Oct  2 07:46:22 np0005466031 ceph-mon[76340]: Deploying daemon mgr.compute-1.wtokkj on compute-1
Oct  2 07:46:22 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/315550621' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:46:22 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/315550621' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:46:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e15 _set_new_cache_sizes cache_size:1019935911 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:23 np0005466031 podman[76863]: 2025-10-02 11:46:23.682968825 +0000 UTC m=+0.038683706 container create c7a31458692f51521fd1cd0c6094c960d06a06451195c16df221352c8f119efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:46:23 np0005466031 systemd[1]: Started libpod-conmon-c7a31458692f51521fd1cd0c6094c960d06a06451195c16df221352c8f119efb.scope.
Oct  2 07:46:23 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:23 np0005466031 podman[76863]: 2025-10-02 11:46:23.666441628 +0000 UTC m=+0.022156529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:23 np0005466031 ceph-mon[76340]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:46:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 07:46:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  2 07:46:23 np0005466031 podman[76863]: 2025-10-02 11:46:23.767905978 +0000 UTC m=+0.123620869 container init c7a31458692f51521fd1cd0c6094c960d06a06451195c16df221352c8f119efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 07:46:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e16 e16: 2 total, 2 up, 2 in
Oct  2 07:46:23 np0005466031 podman[76863]: 2025-10-02 11:46:23.780375155 +0000 UTC m=+0.136090026 container start c7a31458692f51521fd1cd0c6094c960d06a06451195c16df221352c8f119efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:46:23 np0005466031 podman[76863]: 2025-10-02 11:46:23.783072147 +0000 UTC m=+0.138787058 container attach c7a31458692f51521fd1cd0c6094c960d06a06451195c16df221352c8f119efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_archimedes, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:46:23 np0005466031 quizzical_archimedes[76880]: 167 167
Oct  2 07:46:23 np0005466031 systemd[1]: libpod-c7a31458692f51521fd1cd0c6094c960d06a06451195c16df221352c8f119efb.scope: Deactivated successfully.
Oct  2 07:46:23 np0005466031 podman[76863]: 2025-10-02 11:46:23.787774294 +0000 UTC m=+0.143489175 container died c7a31458692f51521fd1cd0c6094c960d06a06451195c16df221352c8f119efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_archimedes, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:46:23 np0005466031 systemd[1]: var-lib-containers-storage-overlay-9c7fb12e260117e52b3014762b15711249b6c9ed5cc37c03fb2220b211169c22-merged.mount: Deactivated successfully.
Oct  2 07:46:23 np0005466031 podman[76863]: 2025-10-02 11:46:23.822304607 +0000 UTC m=+0.178019488 container remove c7a31458692f51521fd1cd0c6094c960d06a06451195c16df221352c8f119efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct  2 07:46:23 np0005466031 systemd[1]: libpod-conmon-c7a31458692f51521fd1cd0c6094c960d06a06451195c16df221352c8f119efb.scope: Deactivated successfully.
Oct  2 07:46:23 np0005466031 systemd[1]: Reloading.
Oct  2 07:46:23 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:23 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:24 np0005466031 systemd[1]: Reloading.
Oct  2 07:46:24 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:24 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:24 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'crash'
Oct  2 07:46:24 np0005466031 systemd[1]: Starting Ceph crash.compute-2 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:46:24 np0005466031 ceph-mgr[76697]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  2 07:46:24 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'dashboard'
Oct  2 07:46:24 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:24.685+0000 7f18c002f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  2 07:46:24 np0005466031 podman[77036]: 2025-10-02 11:46:24.664513557 +0000 UTC m=+0.040452554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:24 np0005466031 podman[77036]: 2025-10-02 11:46:24.776289705 +0000 UTC m=+0.152228722 container create e2069e4312a7a045e178b15c0f75aaecb2c10e65c24e436171770a3e598082b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 07:46:24 np0005466031 ceph-mon[76340]: Deploying daemon crash.compute-2 on compute-2
Oct  2 07:46:24 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1583078942' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:46:24 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:24 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05fccb6d0c3b3a5dd32f6fb724d658fce2237e630813f0097f859fe14ce4cbc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:24 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05fccb6d0c3b3a5dd32f6fb724d658fce2237e630813f0097f859fe14ce4cbc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:24 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05fccb6d0c3b3a5dd32f6fb724d658fce2237e630813f0097f859fe14ce4cbc5/merged/etc/ceph/ceph.client.crash.compute-2.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:24 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05fccb6d0c3b3a5dd32f6fb724d658fce2237e630813f0097f859fe14ce4cbc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:24 np0005466031 podman[77036]: 2025-10-02 11:46:24.955209095 +0000 UTC m=+0.331148092 container init e2069e4312a7a045e178b15c0f75aaecb2c10e65c24e436171770a3e598082b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 07:46:24 np0005466031 podman[77036]: 2025-10-02 11:46:24.960824277 +0000 UTC m=+0.336763274 container start e2069e4312a7a045e178b15c0f75aaecb2c10e65c24e436171770a3e598082b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 07:46:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e17 e17: 2 total, 2 up, 2 in
Oct  2 07:46:24 np0005466031 bash[77036]: e2069e4312a7a045e178b15c0f75aaecb2c10e65c24e436171770a3e598082b6
Oct  2 07:46:24 np0005466031 systemd[1]: Started Ceph crash.compute-2 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:46:25 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2[77051]: INFO:ceph-crash:pinging cluster to exercise our key
Oct  2 07:46:25 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2[77051]: 2025-10-02T11:46:25.379+0000 7f2828538640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  2 07:46:25 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2[77051]: 2025-10-02T11:46:25.379+0000 7f2828538640 -1 AuthRegistry(0x7f2820067150) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  2 07:46:25 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2[77051]: 2025-10-02T11:46:25.380+0000 7f2828538640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  2 07:46:25 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2[77051]: 2025-10-02T11:46:25.380+0000 7f2828538640 -1 AuthRegistry(0x7f2828537000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  2 07:46:25 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2[77051]: 2025-10-02T11:46:25.381+0000 7f2826aae640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  2 07:46:25 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2[77051]: 2025-10-02T11:46:25.382+0000 7f28262ad640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  2 07:46:25 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2[77051]: 2025-10-02T11:46:25.382+0000 7f2825aac640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  2 07:46:25 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2[77051]: 2025-10-02T11:46:25.382+0000 7f2828538640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct  2 07:46:25 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2[77051]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct  2 07:46:25 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-2[77051]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct  2 07:46:25 np0005466031 podman[77207]: 2025-10-02 11:46:25.656560902 +0000 UTC m=+0.045645853 container create b9fa7f327b85d380f9aa17040d08ef7f95828c4cdbe5aa94acfa3fc874fa786d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:46:25 np0005466031 systemd[1]: Started libpod-conmon-b9fa7f327b85d380f9aa17040d08ef7f95828c4cdbe5aa94acfa3fc874fa786d.scope.
Oct  2 07:46:25 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:25 np0005466031 podman[77207]: 2025-10-02 11:46:25.633029117 +0000 UTC m=+0.022114088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:25 np0005466031 podman[77207]: 2025-10-02 11:46:25.739899331 +0000 UTC m=+0.128984302 container init b9fa7f327b85d380f9aa17040d08ef7f95828c4cdbe5aa94acfa3fc874fa786d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:46:25 np0005466031 podman[77207]: 2025-10-02 11:46:25.746990113 +0000 UTC m=+0.136075064 container start b9fa7f327b85d380f9aa17040d08ef7f95828c4cdbe5aa94acfa3fc874fa786d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_greider, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 07:46:25 np0005466031 podman[77207]: 2025-10-02 11:46:25.750324153 +0000 UTC m=+0.139409134 container attach b9fa7f327b85d380f9aa17040d08ef7f95828c4cdbe5aa94acfa3fc874fa786d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_greider, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:46:25 np0005466031 interesting_greider[77224]: 167 167
Oct  2 07:46:25 np0005466031 systemd[1]: libpod-b9fa7f327b85d380f9aa17040d08ef7f95828c4cdbe5aa94acfa3fc874fa786d.scope: Deactivated successfully.
Oct  2 07:46:25 np0005466031 podman[77207]: 2025-10-02 11:46:25.75320218 +0000 UTC m=+0.142287121 container died b9fa7f327b85d380f9aa17040d08ef7f95828c4cdbe5aa94acfa3fc874fa786d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_greider, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 07:46:25 np0005466031 systemd[1]: var-lib-containers-storage-overlay-e53f743f8d8c793b09b7baf69f89f425d55f9bb2e3046553e8cbe6bc6e0fba01-merged.mount: Deactivated successfully.
Oct  2 07:46:25 np0005466031 podman[77207]: 2025-10-02 11:46:25.798894174 +0000 UTC m=+0.187979125 container remove b9fa7f327b85d380f9aa17040d08ef7f95828c4cdbe5aa94acfa3fc874fa786d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_greider, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 07:46:25 np0005466031 systemd[1]: libpod-conmon-b9fa7f327b85d380f9aa17040d08ef7f95828c4cdbe5aa94acfa3fc874fa786d.scope: Deactivated successfully.
Oct  2 07:46:25 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1583078942' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:46:25 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:25 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:25 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:25 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:25 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:46:25 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:46:25 np0005466031 podman[77248]: 2025-10-02 11:46:25.987278491 +0000 UTC m=+0.049891158 container create df654f80403a51dcb935beeb463c890247288459baf3d487156eb1a3a3e5fb4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:46:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e18 e18: 2 total, 2 up, 2 in
Oct  2 07:46:26 np0005466031 systemd[1]: Started libpod-conmon-df654f80403a51dcb935beeb463c890247288459baf3d487156eb1a3a3e5fb4e.scope.
Oct  2 07:46:26 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:26 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6edd8241103838adf61de979ed8266c89e51e87f1184faddb33fe9d307308ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:26 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6edd8241103838adf61de979ed8266c89e51e87f1184faddb33fe9d307308ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:26 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6edd8241103838adf61de979ed8266c89e51e87f1184faddb33fe9d307308ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:26 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6edd8241103838adf61de979ed8266c89e51e87f1184faddb33fe9d307308ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:26 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6edd8241103838adf61de979ed8266c89e51e87f1184faddb33fe9d307308ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:26 np0005466031 podman[77248]: 2025-10-02 11:46:25.962331117 +0000 UTC m=+0.024943844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:26 np0005466031 podman[77248]: 2025-10-02 11:46:26.068819302 +0000 UTC m=+0.131431969 container init df654f80403a51dcb935beeb463c890247288459baf3d487156eb1a3a3e5fb4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 07:46:26 np0005466031 podman[77248]: 2025-10-02 11:46:26.079009207 +0000 UTC m=+0.141621854 container start df654f80403a51dcb935beeb463c890247288459baf3d487156eb1a3a3e5fb4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:46:26 np0005466031 podman[77248]: 2025-10-02 11:46:26.081725371 +0000 UTC m=+0.144338038 container attach df654f80403a51dcb935beeb463c890247288459baf3d487156eb1a3a3e5fb4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:46:26 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'devicehealth'
Oct  2 07:46:26 np0005466031 ceph-mgr[76697]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  2 07:46:26 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'diskprediction_local'
Oct  2 07:46:26 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:26.498+0000 7f18c002f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  2 07:46:26 np0005466031 vigorous_williamson[77264]: --> passed data devices: 0 physical, 1 LVM
Oct  2 07:46:26 np0005466031 vigorous_williamson[77264]: --> relative data size: 1.0
Oct  2 07:46:26 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  2 07:46:26 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 7e9b39ac-5928-4949-8bce-29a1be4f628f
Oct  2 07:46:27 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/2601359451' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:46:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  2 07:46:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  2 07:46:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]:  from numpy import show_config as show_numpy_config
Oct  2 07:46:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e19 e19: 2 total, 2 up, 2 in
Oct  2 07:46:27 np0005466031 ceph-mgr[76697]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  2 07:46:27 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'influx'
Oct  2 07:46:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:27.098+0000 7f18c002f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  2 07:46:27 np0005466031 ceph-mgr[76697]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  2 07:46:27 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'insights'
Oct  2 07:46:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:27.352+0000 7f18c002f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  2 07:46:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"} v 0) v1
Oct  2 07:46:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/3177347678' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]: dispatch
Oct  2 07:46:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e20 e20: 3 total, 2 up, 3 in
Oct  2 07:46:27 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'iostat'
Oct  2 07:46:27 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  2 07:46:27 np0005466031 lvm[77312]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 07:46:27 np0005466031 lvm[77312]: VG ceph_vg0 finished
Oct  2 07:46:27 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct  2 07:46:27 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct  2 07:46:27 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 07:46:27 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct  2 07:46:27 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct  2 07:46:27 np0005466031 ceph-mgr[76697]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  2 07:46:27 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'k8sevents'
Oct  2 07:46:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:27.849+0000 7f18c002f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  2 07:46:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  2 07:46:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3491711820' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  2 07:46:28 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/2601359451' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:46:28 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.102:0/3177347678' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]: dispatch
Oct  2 07:46:28 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]: dispatch
Oct  2 07:46:28 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]': finished
Oct  2 07:46:28 np0005466031 vigorous_williamson[77264]: stderr: got monmap epoch 3
Oct  2 07:46:28 np0005466031 vigorous_williamson[77264]: --> Creating keyring file for osd.2
Oct  2 07:46:28 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct  2 07:46:28 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct  2 07:46:28 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 7e9b39ac-5928-4949-8bce-29a1be4f628f --setuser ceph --setgroup ceph
Oct  2 07:46:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e21 e21: 3 total, 2 up, 3 in
Oct  2 07:46:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e21 _set_new_cache_sizes cache_size:1020053364 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:29 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/454705554' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:46:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:46:29 np0005466031 ceph-mon[76340]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:46:29 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/454705554' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:46:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:46:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:46:29 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'localpool'
Oct  2 07:46:29 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'mds_autoscaler'
Oct  2 07:46:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e22 e22: 3 total, 2 up, 3 in
Oct  2 07:46:30 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'mirroring'
Oct  2 07:46:30 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'nfs'
Oct  2 07:46:31 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1762713421' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:46:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:46:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:46:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:31 np0005466031 ceph-mgr[76697]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  2 07:46:31 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'orchestrator'
Oct  2 07:46:31 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:31.612+0000 7f18c002f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  2 07:46:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e23 e23: 3 total, 2 up, 3 in
Oct  2 07:46:32 np0005466031 ceph-mgr[76697]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  2 07:46:32 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'osd_perf_query'
Oct  2 07:46:32 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:32.358+0000 7f18c002f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  2 07:46:32 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:32 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:32 np0005466031 ceph-mgr[76697]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  2 07:46:32 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:32.637+0000 7f18c002f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  2 07:46:32 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'osd_support'
Oct  2 07:46:32 np0005466031 ceph-mgr[76697]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  2 07:46:32 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:32.897+0000 7f18c002f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  2 07:46:32 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'pg_autoscaler'
Oct  2 07:46:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e24 e24: 3 total, 2 up, 3 in
Oct  2 07:46:33 np0005466031 ceph-mgr[76697]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  2 07:46:33 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'progress'
Oct  2 07:46:33 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:33.186+0000 7f18c002f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  2 07:46:33 np0005466031 ceph-mgr[76697]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  2 07:46:33 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'prometheus'
Oct  2 07:46:33 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:33.431+0000 7f18c002f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  2 07:46:33 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1762713421' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:46:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:46:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:46:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:46:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:46:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:46:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:46:33 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1920783801' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  2 07:46:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e24 _set_new_cache_sizes cache_size:1020054716 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e25 e25: 3 total, 2 up, 3 in
Oct  2 07:46:34 np0005466031 ceph-mgr[76697]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  2 07:46:34 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'rbd_support'
Oct  2 07:46:34 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:34.437+0000 7f18c002f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  2 07:46:34 np0005466031 vigorous_williamson[77264]: stderr: 2025-10-02T11:46:28.361+0000 7f3ec1bb3740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 07:46:34 np0005466031 vigorous_williamson[77264]: stderr: 2025-10-02T11:46:28.362+0000 7f3ec1bb3740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 07:46:34 np0005466031 vigorous_williamson[77264]: stderr: 2025-10-02T11:46:28.362+0000 7f3ec1bb3740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 07:46:34 np0005466031 vigorous_williamson[77264]: stderr: 2025-10-02T11:46:28.362+0000 7f3ec1bb3740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct  2 07:46:34 np0005466031 vigorous_williamson[77264]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct  2 07:46:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:34 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1920783801' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  2 07:46:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:46:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:46:34 np0005466031 ceph-mon[76340]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:46:34 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  2 07:46:34 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct  2 07:46:34 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct  2 07:46:34 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct  2 07:46:34 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 07:46:34 np0005466031 vigorous_williamson[77264]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  2 07:46:34 np0005466031 vigorous_williamson[77264]: --> ceph-volume lvm activate successful for osd ID: 2
Oct  2 07:46:34 np0005466031 vigorous_williamson[77264]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct  2 07:46:34 np0005466031 systemd[1]: libpod-df654f80403a51dcb935beeb463c890247288459baf3d487156eb1a3a3e5fb4e.scope: Deactivated successfully.
Oct  2 07:46:34 np0005466031 systemd[1]: libpod-df654f80403a51dcb935beeb463c890247288459baf3d487156eb1a3a3e5fb4e.scope: Consumed 2.392s CPU time.
Oct  2 07:46:34 np0005466031 ceph-mgr[76697]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  2 07:46:34 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'restful'
Oct  2 07:46:34 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:34.759+0000 7f18c002f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  2 07:46:34 np0005466031 podman[77248]: 2025-10-02 11:46:34.761508802 +0000 UTC m=+8.824121469 container died df654f80403a51dcb935beeb463c890247288459baf3d487156eb1a3a3e5fb4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 07:46:34 np0005466031 systemd[1]: var-lib-containers-storage-overlay-b6edd8241103838adf61de979ed8266c89e51e87f1184faddb33fe9d307308ae-merged.mount: Deactivated successfully.
Oct  2 07:46:34 np0005466031 podman[77248]: 2025-10-02 11:46:34.829248819 +0000 UTC m=+8.891861466 container remove df654f80403a51dcb935beeb463c890247288459baf3d487156eb1a3a3e5fb4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:46:34 np0005466031 systemd[1]: libpod-conmon-df654f80403a51dcb935beeb463c890247288459baf3d487156eb1a3a3e5fb4e.scope: Deactivated successfully.
Oct  2 07:46:35 np0005466031 podman[78378]: 2025-10-02 11:46:35.509925475 +0000 UTC m=+0.041403727 container create c0c08492423559c18e96a111d8002df91e7f81000dded06d24f05bd2480c98ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kepler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 07:46:35 np0005466031 systemd[1]: Started libpod-conmon-c0c08492423559c18e96a111d8002df91e7f81000dded06d24f05bd2480c98ff.scope.
Oct  2 07:46:35 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:35 np0005466031 podman[78378]: 2025-10-02 11:46:35.576015534 +0000 UTC m=+0.107493846 container init c0c08492423559c18e96a111d8002df91e7f81000dded06d24f05bd2480c98ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kepler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 07:46:35 np0005466031 podman[78378]: 2025-10-02 11:46:35.582635556 +0000 UTC m=+0.114113818 container start c0c08492423559c18e96a111d8002df91e7f81000dded06d24f05bd2480c98ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:46:35 np0005466031 podman[78378]: 2025-10-02 11:46:35.586374244 +0000 UTC m=+0.117852506 container attach c0c08492423559c18e96a111d8002df91e7f81000dded06d24f05bd2480c98ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:46:35 np0005466031 vibrant_kepler[78394]: 167 167
Oct  2 07:46:35 np0005466031 systemd[1]: libpod-c0c08492423559c18e96a111d8002df91e7f81000dded06d24f05bd2480c98ff.scope: Deactivated successfully.
Oct  2 07:46:35 np0005466031 podman[78378]: 2025-10-02 11:46:35.589339799 +0000 UTC m=+0.120818071 container died c0c08492423559c18e96a111d8002df91e7f81000dded06d24f05bd2480c98ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kepler, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:46:35 np0005466031 podman[78378]: 2025-10-02 11:46:35.494119238 +0000 UTC m=+0.025597550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:35 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/2059673187' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  2 07:46:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:35 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'rgw'
Oct  2 07:46:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e26 e26: 3 total, 2 up, 3 in
Oct  2 07:46:35 np0005466031 systemd[1]: var-lib-containers-storage-overlay-6e9e36b2f593cef4f651d18f5177223f509dffc5b28d95a17de8211e797a9914-merged.mount: Deactivated successfully.
Oct  2 07:46:35 np0005466031 podman[78378]: 2025-10-02 11:46:35.644622586 +0000 UTC m=+0.176100838 container remove c0c08492423559c18e96a111d8002df91e7f81000dded06d24f05bd2480c98ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kepler, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:46:35 np0005466031 systemd[1]: libpod-conmon-c0c08492423559c18e96a111d8002df91e7f81000dded06d24f05bd2480c98ff.scope: Deactivated successfully.
Oct  2 07:46:35 np0005466031 podman[78417]: 2025-10-02 11:46:35.812097415 +0000 UTC m=+0.053655281 container create d70f90c5874347299c14c81c84a27d21bbca5c43b9ced2e249fdba27bfc536d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 07:46:35 np0005466031 systemd[1]: Started libpod-conmon-d70f90c5874347299c14c81c84a27d21bbca5c43b9ced2e249fdba27bfc536d8.scope.
Oct  2 07:46:35 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:35 np0005466031 podman[78417]: 2025-10-02 11:46:35.78042144 +0000 UTC m=+0.021979266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:35 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56301c2a71f4de32ed33bee4e60c7e8d812884d09b50e35bf6dcff7580cb9908/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:35 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56301c2a71f4de32ed33bee4e60c7e8d812884d09b50e35bf6dcff7580cb9908/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:35 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56301c2a71f4de32ed33bee4e60c7e8d812884d09b50e35bf6dcff7580cb9908/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:35 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56301c2a71f4de32ed33bee4e60c7e8d812884d09b50e35bf6dcff7580cb9908/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:35 np0005466031 podman[78417]: 2025-10-02 11:46:35.8877107 +0000 UTC m=+0.129268526 container init d70f90c5874347299c14c81c84a27d21bbca5c43b9ced2e249fdba27bfc536d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 07:46:35 np0005466031 podman[78417]: 2025-10-02 11:46:35.897200784 +0000 UTC m=+0.138758590 container start d70f90c5874347299c14c81c84a27d21bbca5c43b9ced2e249fdba27bfc536d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 07:46:35 np0005466031 podman[78417]: 2025-10-02 11:46:35.900638183 +0000 UTC m=+0.142195989 container attach d70f90c5874347299c14c81c84a27d21bbca5c43b9ced2e249fdba27bfc536d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 07:46:36 np0005466031 ceph-mgr[76697]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  2 07:46:36 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'rook'
Oct  2 07:46:36 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:36.324+0000 7f18c002f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  2 07:46:36 np0005466031 festive_raman[78433]: {
Oct  2 07:46:36 np0005466031 festive_raman[78433]:    "2": [
Oct  2 07:46:36 np0005466031 festive_raman[78433]:        {
Oct  2 07:46:36 np0005466031 festive_raman[78433]:            "devices": [
Oct  2 07:46:36 np0005466031 festive_raman[78433]:                "/dev/loop3"
Oct  2 07:46:36 np0005466031 festive_raman[78433]:            ],
Oct  2 07:46:36 np0005466031 festive_raman[78433]:            "lv_name": "ceph_lv0",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:            "lv_size": "7511998464",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=22zUtf-UOop-sbWR-qZYy-Iy23-22OR-wgmS6c,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7e9b39ac-5928-4949-8bce-29a1be4f628f,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:            "lv_uuid": "22zUtf-UOop-sbWR-qZYy-Iy23-22OR-wgmS6c",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:            "name": "ceph_lv0",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:            "tags": {
Oct  2 07:46:36 np0005466031 festive_raman[78433]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:                "ceph.block_uuid": "22zUtf-UOop-sbWR-qZYy-Iy23-22OR-wgmS6c",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:                "ceph.cephx_lockbox_secret": "",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:                "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:                "ceph.cluster_name": "ceph",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:                "ceph.crush_device_class": "",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:                "ceph.encrypted": "0",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:                "ceph.osd_fsid": "7e9b39ac-5928-4949-8bce-29a1be4f628f",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:                "ceph.osd_id": "2",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:                "ceph.type": "block",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:                "ceph.vdo": "0"
Oct  2 07:46:36 np0005466031 festive_raman[78433]:            },
Oct  2 07:46:36 np0005466031 festive_raman[78433]:            "type": "block",
Oct  2 07:46:36 np0005466031 festive_raman[78433]:            "vg_name": "ceph_vg0"
Oct  2 07:46:36 np0005466031 festive_raman[78433]:        }
Oct  2 07:46:36 np0005466031 festive_raman[78433]:    ]
Oct  2 07:46:36 np0005466031 festive_raman[78433]: }
Oct  2 07:46:36 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/2059673187' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  2 07:46:36 np0005466031 systemd[1]: libpod-d70f90c5874347299c14c81c84a27d21bbca5c43b9ced2e249fdba27bfc536d8.scope: Deactivated successfully.
Oct  2 07:46:36 np0005466031 podman[78417]: 2025-10-02 11:46:36.64403808 +0000 UTC m=+0.885595906 container died d70f90c5874347299c14c81c84a27d21bbca5c43b9ced2e249fdba27bfc536d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:46:36 np0005466031 systemd[1]: var-lib-containers-storage-overlay-56301c2a71f4de32ed33bee4e60c7e8d812884d09b50e35bf6dcff7580cb9908-merged.mount: Deactivated successfully.
Oct  2 07:46:36 np0005466031 podman[78417]: 2025-10-02 11:46:36.697945527 +0000 UTC m=+0.939503323 container remove d70f90c5874347299c14c81c84a27d21bbca5c43b9ced2e249fdba27bfc536d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 07:46:36 np0005466031 systemd[1]: libpod-conmon-d70f90c5874347299c14c81c84a27d21bbca5c43b9ced2e249fdba27bfc536d8.scope: Deactivated successfully.
Oct  2 07:46:37 np0005466031 podman[78596]: 2025-10-02 11:46:37.309942398 +0000 UTC m=+0.033981932 container create f9ae7ff5686379eb3377c8a840012a6665ec8bc3d4527978543dc87d99f98861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:46:37 np0005466031 systemd[1]: Started libpod-conmon-f9ae7ff5686379eb3377c8a840012a6665ec8bc3d4527978543dc87d99f98861.scope.
Oct  2 07:46:37 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:37 np0005466031 podman[78596]: 2025-10-02 11:46:37.375504703 +0000 UTC m=+0.099544257 container init f9ae7ff5686379eb3377c8a840012a6665ec8bc3d4527978543dc87d99f98861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 07:46:37 np0005466031 podman[78596]: 2025-10-02 11:46:37.381966029 +0000 UTC m=+0.106005563 container start f9ae7ff5686379eb3377c8a840012a6665ec8bc3d4527978543dc87d99f98861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 07:46:37 np0005466031 affectionate_shirley[78614]: 167 167
Oct  2 07:46:37 np0005466031 systemd[1]: libpod-f9ae7ff5686379eb3377c8a840012a6665ec8bc3d4527978543dc87d99f98861.scope: Deactivated successfully.
Oct  2 07:46:37 np0005466031 podman[78596]: 2025-10-02 11:46:37.384924935 +0000 UTC m=+0.108964469 container attach f9ae7ff5686379eb3377c8a840012a6665ec8bc3d4527978543dc87d99f98861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 07:46:37 np0005466031 podman[78596]: 2025-10-02 11:46:37.386461129 +0000 UTC m=+0.110500663 container died f9ae7ff5686379eb3377c8a840012a6665ec8bc3d4527978543dc87d99f98861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 07:46:37 np0005466031 podman[78596]: 2025-10-02 11:46:37.293499033 +0000 UTC m=+0.017538587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:37 np0005466031 systemd[1]: var-lib-containers-storage-overlay-47775e4755432e924c62660f0c7dab262720643dae6ccee011b56da6c1c54a1a-merged.mount: Deactivated successfully.
Oct  2 07:46:37 np0005466031 podman[78596]: 2025-10-02 11:46:37.433639482 +0000 UTC m=+0.157679016 container remove f9ae7ff5686379eb3377c8a840012a6665ec8bc3d4527978543dc87d99f98861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 07:46:37 np0005466031 systemd[1]: libpod-conmon-f9ae7ff5686379eb3377c8a840012a6665ec8bc3d4527978543dc87d99f98861.scope: Deactivated successfully.
Oct  2 07:46:37 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/3992653650' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  2 07:46:37 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  2 07:46:37 np0005466031 ceph-mon[76340]: Deploying daemon osd.2 on compute-2
Oct  2 07:46:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e27 e27: 3 total, 2 up, 3 in
Oct  2 07:46:37 np0005466031 podman[78648]: 2025-10-02 11:46:37.709285036 +0000 UTC m=+0.047795062 container create b2d089aea3276af3830bd82b84ac4d54934cb09506cff9dda52bd4690cb5ed41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate-test, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 07:46:37 np0005466031 systemd[1]: Started libpod-conmon-b2d089aea3276af3830bd82b84ac4d54934cb09506cff9dda52bd4690cb5ed41.scope.
Oct  2 07:46:37 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:37 np0005466031 podman[78648]: 2025-10-02 11:46:37.690459752 +0000 UTC m=+0.028969798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:37 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2106bde941db8884709d271e22ba0545b46e09d82af22df09d422a565a8ce75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:37 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2106bde941db8884709d271e22ba0545b46e09d82af22df09d422a565a8ce75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:37 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2106bde941db8884709d271e22ba0545b46e09d82af22df09d422a565a8ce75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:37 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2106bde941db8884709d271e22ba0545b46e09d82af22df09d422a565a8ce75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:37 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2106bde941db8884709d271e22ba0545b46e09d82af22df09d422a565a8ce75/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:37 np0005466031 podman[78648]: 2025-10-02 11:46:37.799033439 +0000 UTC m=+0.137543465 container init b2d089aea3276af3830bd82b84ac4d54934cb09506cff9dda52bd4690cb5ed41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate-test, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 07:46:37 np0005466031 podman[78648]: 2025-10-02 11:46:37.810553372 +0000 UTC m=+0.149063378 container start b2d089aea3276af3830bd82b84ac4d54934cb09506cff9dda52bd4690cb5ed41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 07:46:37 np0005466031 podman[78648]: 2025-10-02 11:46:37.816802772 +0000 UTC m=+0.155312828 container attach b2d089aea3276af3830bd82b84ac4d54934cb09506cff9dda52bd4690cb5ed41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate-test, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:46:38 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate-test[78664]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  2 07:46:38 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate-test[78664]:                            [--no-systemd] [--no-tmpfs]
Oct  2 07:46:38 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate-test[78664]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  2 07:46:38 np0005466031 systemd[1]: libpod-b2d089aea3276af3830bd82b84ac4d54934cb09506cff9dda52bd4690cb5ed41.scope: Deactivated successfully.
Oct  2 07:46:38 np0005466031 podman[78648]: 2025-10-02 11:46:38.496265723 +0000 UTC m=+0.834775749 container died b2d089aea3276af3830bd82b84ac4d54934cb09506cff9dda52bd4690cb5ed41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:46:38 np0005466031 systemd[1]: var-lib-containers-storage-overlay-c2106bde941db8884709d271e22ba0545b46e09d82af22df09d422a565a8ce75-merged.mount: Deactivated successfully.
Oct  2 07:46:38 np0005466031 podman[78648]: 2025-10-02 11:46:38.579388684 +0000 UTC m=+0.917898700 container remove b2d089aea3276af3830bd82b84ac4d54934cb09506cff9dda52bd4690cb5ed41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 07:46:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:38 np0005466031 ceph-mgr[76697]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  2 07:46:38 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'selftest'
Oct  2 07:46:38 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:38.590+0000 7f18c002f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  2 07:46:38 np0005466031 systemd[1]: libpod-conmon-b2d089aea3276af3830bd82b84ac4d54934cb09506cff9dda52bd4690cb5ed41.scope: Deactivated successfully.
Oct  2 07:46:38 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/3992653650' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  2 07:46:38 np0005466031 systemd[1]: Reloading.
Oct  2 07:46:38 np0005466031 ceph-mgr[76697]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  2 07:46:38 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'snap_schedule'
Oct  2 07:46:38 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:38.847+0000 7f18c002f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  2 07:46:38 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:38 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:39 np0005466031 ceph-mgr[76697]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  2 07:46:39 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'stats'
Oct  2 07:46:39 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:39.112+0000 7f18c002f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  2 07:46:39 np0005466031 systemd[1]: Reloading.
Oct  2 07:46:39 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:39 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:39 np0005466031 systemd[1]: Starting Ceph osd.2 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:46:39 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'status'
Oct  2 07:46:39 np0005466031 podman[78828]: 2025-10-02 11:46:39.609370442 +0000 UTC m=+0.049289425 container create da5ca39b3e2c2d6c6e1937f282da76ed8d98e185db1fb51a1a86c86ae10a855c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 07:46:39 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:39 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e332c3e515ce0c586db4f64adb269d1fa51da62fe83799cd3912ddfdb6367df3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:39 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e332c3e515ce0c586db4f64adb269d1fa51da62fe83799cd3912ddfdb6367df3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:39 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e332c3e515ce0c586db4f64adb269d1fa51da62fe83799cd3912ddfdb6367df3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:39 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e332c3e515ce0c586db4f64adb269d1fa51da62fe83799cd3912ddfdb6367df3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:39 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e332c3e515ce0c586db4f64adb269d1fa51da62fe83799cd3912ddfdb6367df3/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:39 np0005466031 podman[78828]: 2025-10-02 11:46:39.592894046 +0000 UTC m=+0.032813049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:39 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/779281035' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  2 07:46:39 np0005466031 ceph-mon[76340]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:46:39 np0005466031 ceph-mgr[76697]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  2 07:46:39 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'telegraf'
Oct  2 07:46:39 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:39.706+0000 7f18c002f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  2 07:46:39 np0005466031 podman[78828]: 2025-10-02 11:46:39.711943905 +0000 UTC m=+0.151862958 container init da5ca39b3e2c2d6c6e1937f282da76ed8d98e185db1fb51a1a86c86ae10a855c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 07:46:39 np0005466031 podman[78828]: 2025-10-02 11:46:39.717040493 +0000 UTC m=+0.156959476 container start da5ca39b3e2c2d6c6e1937f282da76ed8d98e185db1fb51a1a86c86ae10a855c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:46:39 np0005466031 podman[78828]: 2025-10-02 11:46:39.720712589 +0000 UTC m=+0.160631592 container attach da5ca39b3e2c2d6c6e1937f282da76ed8d98e185db1fb51a1a86c86ae10a855c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 07:46:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e28 e28: 3 total, 2 up, 3 in
Oct  2 07:46:39 np0005466031 ceph-mgr[76697]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  2 07:46:39 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:39.977+0000 7f18c002f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  2 07:46:39 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'telemetry'
Oct  2 07:46:40 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate[78843]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  2 07:46:40 np0005466031 bash[78828]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  2 07:46:40 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate[78843]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 07:46:40 np0005466031 bash[78828]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 07:46:40 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate[78843]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 07:46:40 np0005466031 bash[78828]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 07:46:40 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate[78843]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 07:46:40 np0005466031 bash[78828]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 07:46:40 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate[78843]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct  2 07:46:40 np0005466031 bash[78828]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct  2 07:46:40 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate[78843]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  2 07:46:40 np0005466031 bash[78828]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  2 07:46:40 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate[78843]: --> ceph-volume raw activate successful for osd ID: 2
Oct  2 07:46:40 np0005466031 bash[78828]: --> ceph-volume raw activate successful for osd ID: 2
Oct  2 07:46:40 np0005466031 systemd[1]: libpod-da5ca39b3e2c2d6c6e1937f282da76ed8d98e185db1fb51a1a86c86ae10a855c.scope: Deactivated successfully.
Oct  2 07:46:40 np0005466031 podman[78828]: 2025-10-02 11:46:40.62485659 +0000 UTC m=+1.064775563 container died da5ca39b3e2c2d6c6e1937f282da76ed8d98e185db1fb51a1a86c86ae10a855c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct  2 07:46:40 np0005466031 ceph-mgr[76697]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  2 07:46:40 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'test_orchestrator'
Oct  2 07:46:40 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:40.647+0000 7f18c002f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  2 07:46:40 np0005466031 systemd[1]: var-lib-containers-storage-overlay-e332c3e515ce0c586db4f64adb269d1fa51da62fe83799cd3912ddfdb6367df3-merged.mount: Deactivated successfully.
Oct  2 07:46:40 np0005466031 podman[78828]: 2025-10-02 11:46:40.687985643 +0000 UTC m=+1.127904626 container remove da5ca39b3e2c2d6c6e1937f282da76ed8d98e185db1fb51a1a86c86ae10a855c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2-activate, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:46:40 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/779281035' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  2 07:46:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e29 e29: 3 total, 2 up, 3 in
Oct  2 07:46:40 np0005466031 podman[79003]: 2025-10-02 11:46:40.873963546 +0000 UTC m=+0.041644064 container create 920bca769216b11a19822c04486ada07ba7def8ebf864ae7fecc9d5f55077c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:46:40 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4787b6eb9fac409bd9e45dff3541eeca11c7e1cc9be8a3e300803d1b0f952b6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:40 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4787b6eb9fac409bd9e45dff3541eeca11c7e1cc9be8a3e300803d1b0f952b6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:40 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4787b6eb9fac409bd9e45dff3541eeca11c7e1cc9be8a3e300803d1b0f952b6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:40 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4787b6eb9fac409bd9e45dff3541eeca11c7e1cc9be8a3e300803d1b0f952b6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:40 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4787b6eb9fac409bd9e45dff3541eeca11c7e1cc9be8a3e300803d1b0f952b6a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:40 np0005466031 podman[79003]: 2025-10-02 11:46:40.944168595 +0000 UTC m=+0.111849133 container init 920bca769216b11a19822c04486ada07ba7def8ebf864ae7fecc9d5f55077c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:46:40 np0005466031 podman[79003]: 2025-10-02 11:46:40.851063445 +0000 UTC m=+0.018743983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:40 np0005466031 podman[79003]: 2025-10-02 11:46:40.95125988 +0000 UTC m=+0.118940398 container start 920bca769216b11a19822c04486ada07ba7def8ebf864ae7fecc9d5f55077c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 07:46:40 np0005466031 bash[79003]: 920bca769216b11a19822c04486ada07ba7def8ebf864ae7fecc9d5f55077c20
Oct  2 07:46:40 np0005466031 systemd[1]: Started Ceph osd.2 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:46:40 np0005466031 ceph-osd[79023]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:46:40 np0005466031 ceph-osd[79023]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  2 07:46:40 np0005466031 ceph-osd[79023]: pidfile_write: ignore empty --pid-file
Oct  2 07:46:40 np0005466031 ceph-osd[79023]: bdev(0x559e30533800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 07:46:40 np0005466031 ceph-osd[79023]: bdev(0x559e30533800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 07:46:40 np0005466031 ceph-osd[79023]: bdev(0x559e30533800 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:46:40 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 07:46:40 np0005466031 ceph-osd[79023]: bdev(0x559e31348400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 07:46:40 np0005466031 ceph-osd[79023]: bdev(0x559e31348400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 07:46:40 np0005466031 ceph-osd[79023]: bdev(0x559e31348400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:46:40 np0005466031 ceph-osd[79023]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Oct  2 07:46:40 np0005466031 ceph-osd[79023]: bdev(0x559e31348400 /var/lib/ceph/osd/ceph-2/block) close
Oct  2 07:46:41 np0005466031 ceph-osd[79023]: bdev(0x559e30533800 /var/lib/ceph/osd/ceph-2/block) close
Oct  2 07:46:41 np0005466031 ceph-mgr[76697]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  2 07:46:41 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'volumes'
Oct  2 07:46:41 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:41.433+0000 7f18c002f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  2 07:46:41 np0005466031 ceph-osd[79023]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct  2 07:46:41 np0005466031 ceph-osd[79023]: load: jerasure load: lrc 
Oct  2 07:46:41 np0005466031 ceph-osd[79023]: bdev(0x559e31348000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 07:46:41 np0005466031 ceph-osd[79023]: bdev(0x559e31348000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 07:46:41 np0005466031 ceph-osd[79023]: bdev(0x559e31348000 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:46:41 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 07:46:41 np0005466031 ceph-osd[79023]: bdev(0x559e31348000 /var/lib/ceph/osd/ceph-2/block) close
Oct  2 07:46:41 np0005466031 podman[79185]: 2025-10-02 11:46:41.611356441 +0000 UTC m=+0.054252019 container create 125a4e219191251211bcf780566d5c30941d064237e2a4e318a515f75ab66d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lumiere, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:46:41 np0005466031 systemd[1]: Started libpod-conmon-125a4e219191251211bcf780566d5c30941d064237e2a4e318a515f75ab66d27.scope.
Oct  2 07:46:41 np0005466031 podman[79185]: 2025-10-02 11:46:41.582321072 +0000 UTC m=+0.025216700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:41 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:41 np0005466031 podman[79185]: 2025-10-02 11:46:41.725503628 +0000 UTC m=+0.168399276 container init 125a4e219191251211bcf780566d5c30941d064237e2a4e318a515f75ab66d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:46:41 np0005466031 podman[79185]: 2025-10-02 11:46:41.737487455 +0000 UTC m=+0.180383023 container start 125a4e219191251211bcf780566d5c30941d064237e2a4e318a515f75ab66d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lumiere, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 07:46:41 np0005466031 podman[79185]: 2025-10-02 11:46:41.741358616 +0000 UTC m=+0.184254274 container attach 125a4e219191251211bcf780566d5c30941d064237e2a4e318a515f75ab66d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lumiere, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:46:41 np0005466031 priceless_lumiere[79202]: 167 167
Oct  2 07:46:41 np0005466031 podman[79185]: 2025-10-02 11:46:41.746951828 +0000 UTC m=+0.189847396 container died 125a4e219191251211bcf780566d5c30941d064237e2a4e318a515f75ab66d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lumiere, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 07:46:41 np0005466031 systemd[1]: libpod-125a4e219191251211bcf780566d5c30941d064237e2a4e318a515f75ab66d27.scope: Deactivated successfully.
Oct  2 07:46:41 np0005466031 systemd[1]: var-lib-containers-storage-overlay-c8a3f3d01b966de268db75f9bafa113fffe353d2fef11a91eb984ef0e4551bc8-merged.mount: Deactivated successfully.
Oct  2 07:46:41 np0005466031 podman[79185]: 2025-10-02 11:46:41.786031097 +0000 UTC m=+0.228926675 container remove 125a4e219191251211bcf780566d5c30941d064237e2a4e318a515f75ab66d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:46:41 np0005466031 systemd[1]: libpod-conmon-125a4e219191251211bcf780566d5c30941d064237e2a4e318a515f75ab66d27.scope: Deactivated successfully.
Oct  2 07:46:41 np0005466031 ceph-osd[79023]: bdev(0x559e31348000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 07:46:41 np0005466031 ceph-osd[79023]: bdev(0x559e31348000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 07:46:41 np0005466031 ceph-osd[79023]: bdev(0x559e31348000 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:46:41 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 07:46:41 np0005466031 ceph-osd[79023]: bdev(0x559e31348000 /var/lib/ceph/osd/ceph-2/block) close
Oct  2 07:46:41 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1918843349' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  2 07:46:41 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1918843349' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  2 07:46:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:41 np0005466031 podman[79231]: 2025-10-02 11:46:41.956115001 +0000 UTC m=+0.041261253 container create 9781df45d323255ba48c8ff249438c82e2ee8b9271477b6a35e5133b235f54c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 07:46:42 np0005466031 systemd[1]: Started libpod-conmon-9781df45d323255ba48c8ff249438c82e2ee8b9271477b6a35e5133b235f54c1.scope.
Oct  2 07:46:42 np0005466031 podman[79231]: 2025-10-02 11:46:41.937706729 +0000 UTC m=+0.022853011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:42 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:42 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a0ca5bb7df717f1ec6b7ae0d7d5fbc121e5f5476884432be0bacdb8df2a272/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:42 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a0ca5bb7df717f1ec6b7ae0d7d5fbc121e5f5476884432be0bacdb8df2a272/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:42 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a0ca5bb7df717f1ec6b7ae0d7d5fbc121e5f5476884432be0bacdb8df2a272/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:42 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a0ca5bb7df717f1ec6b7ae0d7d5fbc121e5f5476884432be0bacdb8df2a272/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e30 e30: 3 total, 2 up, 3 in
Oct  2 07:46:42 np0005466031 podman[79231]: 2025-10-02 11:46:42.059391375 +0000 UTC m=+0.144537657 container init 9781df45d323255ba48c8ff249438c82e2ee8b9271477b6a35e5133b235f54c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_joliot, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:46:42 np0005466031 podman[79231]: 2025-10-02 11:46:42.067937642 +0000 UTC m=+0.153083914 container start 9781df45d323255ba48c8ff249438c82e2ee8b9271477b6a35e5133b235f54c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_joliot, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 07:46:42 np0005466031 podman[79231]: 2025-10-02 11:46:42.071444093 +0000 UTC m=+0.156590355 container attach 9781df45d323255ba48c8ff249438c82e2ee8b9271477b6a35e5133b235f54c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bdev(0x559e31348000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bdev(0x559e31348000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bdev(0x559e31348000 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bdev(0x559e31552400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bdev(0x559e31552400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bdev(0x559e31552400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluefs mount
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluefs mount shared_bdev_used = 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: RocksDB version: 7.9.2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Git sha 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: DB SUMMARY
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: DB Session ID:  U2JB9WH78DMIVUP7OKX7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: CURRENT file:  CURRENT
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                         Options.error_if_exists: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.create_if_missing: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                                     Options.env: 0x559e313d1d50
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                                Options.info_log: 0x559e305be780
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                              Options.statistics: (nil)
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.use_fsync: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                              Options.db_log_dir: 
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                                 Options.wal_dir: db.wal
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.write_buffer_manager: 0x559e314ca460
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.unordered_write: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.row_cache: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                              Options.wal_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.two_write_queues: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.wal_compression: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.atomic_flush: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.max_background_jobs: 4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.max_background_compactions: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.max_subcompactions: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.max_open_files: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Compression algorithms supported:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kZSTD supported: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kXpressCompression supported: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kBZip2Compression supported: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kLZ4Compression supported: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kZlibCompression supported: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kSnappyCompression supported: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e305bedc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559e305a62d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e305bedc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559e305a62d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e305bedc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559e305a62d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e305bedc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559e305a62d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e305bedc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559e305a62d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e305bedc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559e305a62d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e305bedc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559e305a62d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e305bed80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559e305a6850
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e305bed80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559e305a6850
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e305bed80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559e305a6850
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1f3739cf-4fd0-49db-8fb2-17240a029969
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405602119454, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405602119718, "job": 1, "event": "recovery_finished"}
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: freelist init
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: freelist _read_cfg
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluefs umount
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bdev(0x559e31552400 /var/lib/ceph/osd/ceph-2/block) close
Oct  2 07:46:42 np0005466031 ceph-mgr[76697]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  2 07:46:42 np0005466031 ceph-mgr[76697]: mgr[py] Loading python module 'zabbix'
Oct  2 07:46:42 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:42.245+0000 7f18c002f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bdev(0x559e31552400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bdev(0x559e31552400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bdev(0x559e31552400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluefs mount
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluefs mount shared_bdev_used = 4718592
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: RocksDB version: 7.9.2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Git sha 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: DB SUMMARY
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: DB Session ID:  U2JB9WH78DMIVUP7OKX6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: CURRENT file:  CURRENT
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                         Options.error_if_exists: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.create_if_missing: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                                     Options.env: 0x559e313d1d50
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                                Options.info_log: 0x559e305bf2c0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                              Options.statistics: (nil)
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.use_fsync: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                              Options.db_log_dir: 
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                                 Options.wal_dir: db.wal
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.write_buffer_manager: 0x559e314ca8c0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.unordered_write: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.row_cache: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                              Options.wal_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.two_write_queues: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.wal_compression: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.atomic_flush: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.max_background_jobs: 4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.max_background_compactions: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.max_subcompactions: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.max_open_files: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Compression algorithms supported:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kZSTD supported: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kXpressCompression supported: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kBZip2Compression supported: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kLZ4Compression supported: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kZlibCompression supported: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: 	kSnappyCompression supported: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e3063d1c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559e305a74b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e3063d1c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559e305a74b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e3063d1c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559e305a74b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e3063d1c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559e305a74b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e3063d1c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559e305a74b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e3063d1c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559e305a74b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e3063d1c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559e305a74b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e313c9700)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559e305a7770#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e313c9700)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559e305a7770#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:           Options.merge_operator: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e313c9700)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559e305a7770#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.compression: LZ4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1f3739cf-4fd0-49db-8fb2-17240a029969
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405602418664, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405602435044, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405602, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f3739cf-4fd0-49db-8fb2-17240a029969", "db_session_id": "U2JB9WH78DMIVUP7OKX6", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405602439354, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405602, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f3739cf-4fd0-49db-8fb2-17240a029969", "db_session_id": "U2JB9WH78DMIVUP7OKX6", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405602444734, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405602, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1f3739cf-4fd0-49db-8fb2-17240a029969", "db_session_id": "U2JB9WH78DMIVUP7OKX6", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405602447103, "job": 1, "event": "recovery_finished"}
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559e315aa700
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: DB pointer 0x559e314bfa00
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559e305a74b0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559e305a74b0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: _get_class not permitted to load lua
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: _get_class not permitted to load sdk
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: _get_class not permitted to load test_remote_reads
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: osd.2 0 load_pgs
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: osd.2 0 load_pgs opened 0 pgs
Oct  2 07:46:42 np0005466031 ceph-osd[79023]: osd.2 0 log_to_monitors true
Oct  2 07:46:42 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2[79019]: 2025-10-02T11:46:42.479+0000 7f5dc82fc740 -1 osd.2 0 log_to_monitors true
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  2 07:46:42 np0005466031 ceph-mgr[76697]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  2 07:46:42 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-2-kvxdhw[76693]: 2025-10-02T11:46:42.547+0000 7f18c002f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  2 07:46:42 np0005466031 ceph-mgr[76697]: ms_deliver_dispatch: unhandled message 0x5593356f51e0 mon_map magic: 0 v1 from mon.1 v2:192.168.122.102:3300/0
Oct  2 07:46:42 np0005466031 ceph-mgr[76697]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1049687618' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: from='osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  2 07:46:42 np0005466031 ceph-mon[76340]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  2 07:46:42 np0005466031 tender_joliot[79247]: {
Oct  2 07:46:42 np0005466031 tender_joliot[79247]:    "7e9b39ac-5928-4949-8bce-29a1be4f628f": {
Oct  2 07:46:42 np0005466031 tender_joliot[79247]:        "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct  2 07:46:42 np0005466031 tender_joliot[79247]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 07:46:42 np0005466031 tender_joliot[79247]:        "osd_id": 2,
Oct  2 07:46:42 np0005466031 tender_joliot[79247]:        "osd_uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f",
Oct  2 07:46:42 np0005466031 tender_joliot[79247]:        "type": "bluestore"
Oct  2 07:46:42 np0005466031 tender_joliot[79247]:    }
Oct  2 07:46:42 np0005466031 tender_joliot[79247]: }
Oct  2 07:46:42 np0005466031 systemd[1]: libpod-9781df45d323255ba48c8ff249438c82e2ee8b9271477b6a35e5133b235f54c1.scope: Deactivated successfully.
Oct  2 07:46:42 np0005466031 podman[79231]: 2025-10-02 11:46:42.994747988 +0000 UTC m=+1.079894240 container died 9781df45d323255ba48c8ff249438c82e2ee8b9271477b6a35e5133b235f54c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_joliot, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:46:43 np0005466031 systemd[1]: var-lib-containers-storage-overlay-96a0ca5bb7df717f1ec6b7ae0d7d5fbc121e5f5476884432be0bacdb8df2a272-merged.mount: Deactivated successfully.
Oct  2 07:46:43 np0005466031 podman[79231]: 2025-10-02 11:46:43.063192896 +0000 UTC m=+1.148339148 container remove 9781df45d323255ba48c8ff249438c82e2ee8b9271477b6a35e5133b235f54c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 07:46:43 np0005466031 systemd[1]: libpod-conmon-9781df45d323255ba48c8ff249438c82e2ee8b9271477b6a35e5133b235f54c1.scope: Deactivated successfully.
Oct  2 07:46:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e31 e31: 3 total, 2 up, 3 in
Oct  2 07:46:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Oct  2 07:46:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  2 07:46:43 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  2 07:46:43 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  2 07:46:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:44 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1049687618' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  2 07:46:44 np0005466031 ceph-mon[76340]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  2 07:46:44 np0005466031 ceph-mon[76340]: from='osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  2 07:46:44 np0005466031 ceph-mon[76340]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  2 07:46:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e32 e32: 3 total, 2 up, 3 in
Oct  2 07:46:44 np0005466031 ceph-osd[79023]: osd.2 0 done with init, starting boot process
Oct  2 07:46:44 np0005466031 ceph-osd[79023]: osd.2 0 start_boot
Oct  2 07:46:44 np0005466031 ceph-osd[79023]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  2 07:46:44 np0005466031 ceph-osd[79023]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  2 07:46:44 np0005466031 ceph-osd[79023]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  2 07:46:44 np0005466031 ceph-osd[79023]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  2 07:46:44 np0005466031 ceph-osd[79023]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct  2 07:46:44 np0005466031 podman[79910]: 2025-10-02 11:46:44.523626238 +0000 UTC m=+0.056481253 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:46:44 np0005466031 podman[79910]: 2025-10-02 11:46:44.836282351 +0000 UTC m=+0.369137336 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 07:46:45 np0005466031 ceph-mon[76340]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Oct  2 07:46:45 np0005466031 ceph-mon[76340]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  2 07:46:45 np0005466031 ceph-mon[76340]: Cluster is now healthy
Oct  2 07:46:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:46 np0005466031 podman[80266]: 2025-10-02 11:46:46.664754238 +0000 UTC m=+0.047191074 container create 29ca04bb5a13a82e74a8ed8101aa7fdc2d2227686bf71e74baff8cf46648397a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 07:46:46 np0005466031 podman[80266]: 2025-10-02 11:46:46.641459945 +0000 UTC m=+0.023896801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:46 np0005466031 systemd[1]: Started libpod-conmon-29ca04bb5a13a82e74a8ed8101aa7fdc2d2227686bf71e74baff8cf46648397a.scope.
Oct  2 07:46:46 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:46 np0005466031 podman[80266]: 2025-10-02 11:46:46.787794053 +0000 UTC m=+0.170230909 container init 29ca04bb5a13a82e74a8ed8101aa7fdc2d2227686bf71e74baff8cf46648397a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 07:46:46 np0005466031 podman[80266]: 2025-10-02 11:46:46.797603996 +0000 UTC m=+0.180040832 container start 29ca04bb5a13a82e74a8ed8101aa7fdc2d2227686bf71e74baff8cf46648397a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_faraday, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 07:46:46 np0005466031 gallant_faraday[80282]: 167 167
Oct  2 07:46:46 np0005466031 systemd[1]: libpod-29ca04bb5a13a82e74a8ed8101aa7fdc2d2227686bf71e74baff8cf46648397a.scope: Deactivated successfully.
Oct  2 07:46:46 np0005466031 podman[80266]: 2025-10-02 11:46:46.81399261 +0000 UTC m=+0.196429476 container attach 29ca04bb5a13a82e74a8ed8101aa7fdc2d2227686bf71e74baff8cf46648397a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_faraday, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 07:46:46 np0005466031 podman[80266]: 2025-10-02 11:46:46.81574979 +0000 UTC m=+0.198186626 container died 29ca04bb5a13a82e74a8ed8101aa7fdc2d2227686bf71e74baff8cf46648397a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_faraday, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:46:46 np0005466031 systemd[1]: var-lib-containers-storage-overlay-a66854645c4d10a36b14a9e15dedede86754f5bf962dc20901d3cef783d58c6f-merged.mount: Deactivated successfully.
Oct  2 07:46:46 np0005466031 podman[80266]: 2025-10-02 11:46:46.920432455 +0000 UTC m=+0.302869291 container remove 29ca04bb5a13a82e74a8ed8101aa7fdc2d2227686bf71e74baff8cf46648397a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:46:46 np0005466031 systemd[1]: libpod-conmon-29ca04bb5a13a82e74a8ed8101aa7fdc2d2227686bf71e74baff8cf46648397a.scope: Deactivated successfully.
Oct  2 07:46:47 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:47 np0005466031 podman[80308]: 2025-10-02 11:46:47.1182534 +0000 UTC m=+0.049957444 container create 10713274d8a75a57ecdb83373f673f1b4a7de5833f1b0259e64bd39fcd6cf111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 07:46:47 np0005466031 systemd[1]: Started libpod-conmon-10713274d8a75a57ecdb83373f673f1b4a7de5833f1b0259e64bd39fcd6cf111.scope.
Oct  2 07:46:47 np0005466031 podman[80308]: 2025-10-02 11:46:47.100306061 +0000 UTC m=+0.032010135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:47 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:47 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/524b904f47208f2f34d3f9ad97ff6fd3e688be4fb79cc0a712c73179d996b12c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:47 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/524b904f47208f2f34d3f9ad97ff6fd3e688be4fb79cc0a712c73179d996b12c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:47 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/524b904f47208f2f34d3f9ad97ff6fd3e688be4fb79cc0a712c73179d996b12c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:47 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/524b904f47208f2f34d3f9ad97ff6fd3e688be4fb79cc0a712c73179d996b12c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:47 np0005466031 podman[80308]: 2025-10-02 11:46:47.255391581 +0000 UTC m=+0.187095635 container init 10713274d8a75a57ecdb83373f673f1b4a7de5833f1b0259e64bd39fcd6cf111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 07:46:47 np0005466031 podman[80308]: 2025-10-02 11:46:47.265608536 +0000 UTC m=+0.197312580 container start 10713274d8a75a57ecdb83373f673f1b4a7de5833f1b0259e64bd39fcd6cf111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:46:47 np0005466031 podman[80308]: 2025-10-02 11:46:47.283792551 +0000 UTC m=+0.215496595 container attach 10713274d8a75a57ecdb83373f673f1b4a7de5833f1b0259e64bd39fcd6cf111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 07:46:47 np0005466031 ceph-osd[79023]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.200 iops: 8243.166 elapsed_sec: 0.364
Oct  2 07:46:47 np0005466031 ceph-osd[79023]: log_channel(cluster) log [WRN] : OSD bench result of 8243.165808 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 07:46:47 np0005466031 ceph-osd[79023]: osd.2 0 waiting for initial osdmap
Oct  2 07:46:47 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2[79019]: 2025-10-02T11:46:47.821+0000 7f5dc4a93640 -1 osd.2 0 waiting for initial osdmap
Oct  2 07:46:47 np0005466031 ceph-osd[79023]: osd.2 32 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  2 07:46:47 np0005466031 ceph-osd[79023]: osd.2 32 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct  2 07:46:47 np0005466031 ceph-osd[79023]: osd.2 32 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  2 07:46:47 np0005466031 ceph-osd[79023]: osd.2 32 check_osdmap_features require_osd_release unknown -> reef
Oct  2 07:46:47 np0005466031 ceph-osd[79023]: osd.2 32 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  2 07:46:47 np0005466031 ceph-osd[79023]: osd.2 32 set_numa_affinity not setting numa affinity
Oct  2 07:46:47 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-2[79019]: 2025-10-02T11:46:47.850+0000 7f5dbf8a4640 -1 osd.2 32 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  2 07:46:47 np0005466031 ceph-osd[79023]: osd.2 32 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct  2 07:46:48 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/3455455273' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  2 07:46:48 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/3455455273' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]: [
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:    {
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:        "available": false,
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:        "ceph_device": false,
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:        "lsm_data": {},
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:        "lvs": [],
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:        "path": "/dev/sr0",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:        "rejected_reasons": [
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "Insufficient space (<5GB)",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "Has a FileSystem"
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:        ],
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:        "sys_api": {
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "actuators": null,
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "device_nodes": "sr0",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "devname": "sr0",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "human_readable_size": "482.00 KB",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "id_bus": "ata",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "model": "QEMU DVD-ROM",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "nr_requests": "2",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "parent": "/dev/sr0",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "partitions": {},
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "path": "/dev/sr0",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "removable": "1",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "rev": "2.5+",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "ro": "0",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "rotational": "0",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "sas_address": "",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "sas_device_handle": "",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "scheduler_mode": "mq-deadline",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "sectors": 0,
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "sectorsize": "2048",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "size": 493568.0,
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "support_discard": "2048",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "type": "disk",
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:            "vendor": "QEMU"
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:        }
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]:    }
Oct  2 07:46:48 np0005466031 wizardly_curran[80325]: ]
Oct  2 07:46:48 np0005466031 systemd[1]: libpod-10713274d8a75a57ecdb83373f673f1b4a7de5833f1b0259e64bd39fcd6cf111.scope: Deactivated successfully.
Oct  2 07:46:48 np0005466031 systemd[1]: libpod-10713274d8a75a57ecdb83373f673f1b4a7de5833f1b0259e64bd39fcd6cf111.scope: Consumed 1.258s CPU time.
Oct  2 07:46:48 np0005466031 podman[80308]: 2025-10-02 11:46:48.516741513 +0000 UTC m=+1.448445627 container died 10713274d8a75a57ecdb83373f673f1b4a7de5833f1b0259e64bd39fcd6cf111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:46:48 np0005466031 systemd[1]: var-lib-containers-storage-overlay-524b904f47208f2f34d3f9ad97ff6fd3e688be4fb79cc0a712c73179d996b12c-merged.mount: Deactivated successfully.
Oct  2 07:46:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:48 np0005466031 podman[80308]: 2025-10-02 11:46:48.594918051 +0000 UTC m=+1.526622095 container remove 10713274d8a75a57ecdb83373f673f1b4a7de5833f1b0259e64bd39fcd6cf111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct  2 07:46:48 np0005466031 systemd[1]: libpod-conmon-10713274d8a75a57ecdb83373f673f1b4a7de5833f1b0259e64bd39fcd6cf111.scope: Deactivated successfully.
Oct  2 07:46:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e33 e33: 3 total, 3 up, 3 in
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 33 state: booting -> active
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[4.1f( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[3.15( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[4.15( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[2.12( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[3.11( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[2.f( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[3.e( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[4.9( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[2.b( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[2.5( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[5.4( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[4.1( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[3.9( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[5.e( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[3.1d( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[3.1a( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[2.1c( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[5.1a( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[2.1d( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[2.18( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 33 pg[4.8( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-mon[76340]: OSD bench result of 8243.165808 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 07:46:49 np0005466031 ceph-mon[76340]: osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295] boot
Oct  2 07:46:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  2 07:46:49 np0005466031 ceph-mon[76340]: Adjusting osd_memory_target on compute-2 to 127.8M
Oct  2 07:46:49 np0005466031 ceph-mon[76340]: Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct  2 07:46:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:46:49 np0005466031 ceph-mon[76340]: Updating compute-0:/etc/ceph/ceph.conf
Oct  2 07:46:49 np0005466031 ceph-mon[76340]: Updating compute-1:/etc/ceph/ceph.conf
Oct  2 07:46:49 np0005466031 ceph-mon[76340]: Updating compute-2:/etc/ceph/ceph.conf
Oct  2 07:46:49 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1687359328' entity='client.admin' 
Oct  2 07:46:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e34 e34: 3 total, 3 up, 3 in
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.9( empty local-lis/les=33/34 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[3.e( empty local-lis/les=33/34 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.15( empty local-lis/les=33/34 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[3.15( empty local-lis/les=33/34 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.1f( empty local-lis/les=33/34 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.4( empty local-lis/les=33/34 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.e( empty local-lis/les=33/34 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.8( empty local-lis/les=33/34 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[3.1a( empty local-lis/les=33/34 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.1a( empty local-lis/les=33/34 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33) [2] r=0 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[3.11( empty local-lis/les=33/34 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[3.1d( empty local-lis/les=33/34 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.1( empty local-lis/les=33/34 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[3.9( empty local-lis/les=33/34 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.14( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.12( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.8( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.b( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[3.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=24/24 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.d( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.6( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.3( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.1d( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.13( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.2( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.1c( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[3.1b( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.19( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[3.8( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.14( empty local-lis/les=33/34 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.12( empty local-lis/les=33/34 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.8( empty local-lis/les=33/34 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.b( empty local-lis/les=33/34 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[3.0( empty local-lis/les=33/34 n=0 ec=15/15 lis/c=24/24 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.d( empty local-lis/les=33/34 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.0( empty local-lis/les=33/34 n=0 ec=19/19 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.3( empty local-lis/les=33/34 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.6( empty local-lis/les=33/34 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[5.13( empty local-lis/les=33/34 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.2( empty local-lis/les=33/34 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[3.1b( empty local-lis/les=33/34 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.1c( empty local-lis/les=33/34 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=23/13 lis/c=30/30 les/c/f=31/31/0 sis=33) [2] r=0 lpr=34 pi=[30,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.19( empty local-lis/les=33/34 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[3.8( empty local-lis/les=33/34 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[24,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 34 pg[4.1d( empty local-lis/les=33/34 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=33) [2] r=0 lpr=34 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:50 np0005466031 ceph-mon[76340]: Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct  2 07:46:50 np0005466031 ceph-mon[76340]: Updating compute-0:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct  2 07:46:50 np0005466031 ceph-mon[76340]: Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct  2 07:46:51 np0005466031 ceph-mon[76340]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  2 07:46:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466031 ceph-mon[76340]: Saving service ingress.rgw.default spec with placement count:2
Oct  2 07:46:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:52 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct  2 07:46:52 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct  2 07:46:52 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:52 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:46:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e2 new map
Oct  2 07:46:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:46:53.022725+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Oct  2 07:46:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e35 e35: 3 total, 3 up, 3 in
Oct  2 07:46:53 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Oct  2 07:46:53 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Oct  2 07:46:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  2 07:46:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  2 07:46:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  2 07:46:53 np0005466031 ceph-mon[76340]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  2 07:46:53 np0005466031 ceph-mon[76340]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  2 07:46:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  2 07:46:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:54 np0005466031 ceph-mon[76340]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  2 07:46:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:55 np0005466031 ceph-mon[76340]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  2 07:46:57 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct  2 07:46:57 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct  2 07:46:57 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/3621566955' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  2 07:46:57 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/3621566955' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  2 07:46:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tsbazp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  2 07:46:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tsbazp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  2 07:46:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:58 np0005466031 podman[82284]: 2025-10-02 11:46:58.117492986 +0000 UTC m=+0.076081638 container create eabe75e9c120e63c67c6f95795f0d99c0130a32f6b4e155c4b888cbf0fdc63f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:46:58 np0005466031 podman[82284]: 2025-10-02 11:46:58.06429611 +0000 UTC m=+0.022884752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:58 np0005466031 systemd[1]: Started libpod-conmon-eabe75e9c120e63c67c6f95795f0d99c0130a32f6b4e155c4b888cbf0fdc63f6.scope.
Oct  2 07:46:58 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:46:58 np0005466031 podman[82284]: 2025-10-02 11:46:58.4858608 +0000 UTC m=+0.444449442 container init eabe75e9c120e63c67c6f95795f0d99c0130a32f6b4e155c4b888cbf0fdc63f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:46:58 np0005466031 podman[82284]: 2025-10-02 11:46:58.493112349 +0000 UTC m=+0.451700971 container start eabe75e9c120e63c67c6f95795f0d99c0130a32f6b4e155c4b888cbf0fdc63f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:46:58 np0005466031 practical_satoshi[82301]: 167 167
Oct  2 07:46:58 np0005466031 systemd[1]: libpod-eabe75e9c120e63c67c6f95795f0d99c0130a32f6b4e155c4b888cbf0fdc63f6.scope: Deactivated successfully.
Oct  2 07:46:58 np0005466031 podman[82284]: 2025-10-02 11:46:58.572890864 +0000 UTC m=+0.531479486 container attach eabe75e9c120e63c67c6f95795f0d99c0130a32f6b4e155c4b888cbf0fdc63f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_satoshi, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 07:46:58 np0005466031 podman[82284]: 2025-10-02 11:46:58.573255684 +0000 UTC m=+0.531844306 container died eabe75e9c120e63c67c6f95795f0d99c0130a32f6b4e155c4b888cbf0fdc63f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:46:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:58 np0005466031 systemd[1]: var-lib-containers-storage-overlay-b946a3deb8469f0188f164bfc7084a9c901143580707c3668a58ddb870eaaf21-merged.mount: Deactivated successfully.
Oct  2 07:46:58 np0005466031 ceph-mon[76340]: Deploying daemon rgw.rgw.compute-2.tsbazp on compute-2
Oct  2 07:46:59 np0005466031 podman[82284]: 2025-10-02 11:46:59.026312654 +0000 UTC m=+0.984901286 container remove eabe75e9c120e63c67c6f95795f0d99c0130a32f6b4e155c4b888cbf0fdc63f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_satoshi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:46:59 np0005466031 systemd[1]: libpod-conmon-eabe75e9c120e63c67c6f95795f0d99c0130a32f6b4e155c4b888cbf0fdc63f6.scope: Deactivated successfully.
Oct  2 07:47:00 np0005466031 systemd[1]: Reloading.
Oct  2 07:47:00 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:00 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:00 np0005466031 systemd[1]: Reloading.
Oct  2 07:47:00 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:00 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:00 np0005466031 systemd[1]: Starting Ceph rgw.rgw.compute-2.tsbazp for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:47:01 np0005466031 podman[82445]: 2025-10-02 11:47:01.042519074 +0000 UTC m=+0.026077194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:47:01 np0005466031 podman[82445]: 2025-10-02 11:47:01.167390512 +0000 UTC m=+0.150948602 container create c179296945475505f1b006af534454e6e38a290accf0fdbe6edfea3d30c2493c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-2-tsbazp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 07:47:01 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae426d1510a5d01b19f4c365184ca0266208b291fe47c34e89d6ef36e0a198d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:01 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae426d1510a5d01b19f4c365184ca0266208b291fe47c34e89d6ef36e0a198d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:01 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae426d1510a5d01b19f4c365184ca0266208b291fe47c34e89d6ef36e0a198d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:01 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae426d1510a5d01b19f4c365184ca0266208b291fe47c34e89d6ef36e0a198d5/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-2.tsbazp supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:01 np0005466031 podman[82445]: 2025-10-02 11:47:01.520374489 +0000 UTC m=+0.503932629 container init c179296945475505f1b006af534454e6e38a290accf0fdbe6edfea3d30c2493c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-2-tsbazp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:47:01 np0005466031 podman[82445]: 2025-10-02 11:47:01.526315611 +0000 UTC m=+0.509873711 container start c179296945475505f1b006af534454e6e38a290accf0fdbe6edfea3d30c2493c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-2-tsbazp, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 07:47:01 np0005466031 radosgw[82465]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:47:01 np0005466031 radosgw[82465]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct  2 07:47:01 np0005466031 radosgw[82465]: framework: beast
Oct  2 07:47:01 np0005466031 radosgw[82465]: framework conf key: endpoint, val: 192.168.122.102:8082
Oct  2 07:47:01 np0005466031 radosgw[82465]: init_numa not setting numa affinity
Oct  2 07:47:01 np0005466031 bash[82445]: c179296945475505f1b006af534454e6e38a290accf0fdbe6edfea3d30c2493c
Oct  2 07:47:01 np0005466031 systemd[1]: Started Ceph rgw.rgw.compute-2.tsbazp for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:47:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e36 e36: 3 total, 3 up, 3 in
Oct  2 07:47:02 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/2424414405' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  2 07:47:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct  2 07:47:02 np0005466031 ceph-mon[76340]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  2 07:47:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:03 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:03 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vuotmz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  2 07:47:03 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  2 07:47:03 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vuotmz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  2 07:47:03 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  2 07:47:03 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e37 e37: 3 total, 3 up, 3 in
Oct  2 07:47:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e38 e38: 3 total, 3 up, 3 in
Oct  2 07:47:04 np0005466031 ceph-mon[76340]: Deploying daemon rgw.rgw.compute-1.vuotmz on compute-1
Oct  2 07:47:04 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  2 07:47:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct  2 07:47:04 np0005466031 ceph-mon[76340]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:47:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e39 e39: 3 total, 3 up, 3 in
Oct  2 07:47:05 np0005466031 ceph-mon[76340]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:47:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:05 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:47:05 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:47:05 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.101:0/1098657432' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:47:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:05 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:47:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hlkvzi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  2 07:47:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hlkvzi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  2 07:47:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:06 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct  2 07:47:06 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct  2 07:47:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e40 e40: 3 total, 3 up, 3 in
Oct  2 07:47:07 np0005466031 ceph-mon[76340]: Deploying daemon rgw.rgw.compute-0.hlkvzi on compute-0
Oct  2 07:47:07 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  2 07:47:07 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  2 07:47:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct  2 07:47:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:47:08 np0005466031 ceph-mon[76340]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  2 07:47:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:08 np0005466031 ceph-mon[76340]: Deploying daemon haproxy.rgw.default.compute-0.zhecum on compute-0
Oct  2 07:47:08 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/3375865598' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:47:08 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:47:08 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:47:08 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.101:0/1098657432' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:47:08 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:47:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e41 e41: 3 total, 3 up, 3 in
Oct  2 07:47:08 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.9 deep-scrub starts
Oct  2 07:47:08 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.9 deep-scrub ok
Oct  2 07:47:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:09 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct  2 07:47:09 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct  2 07:47:09 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/3375865598' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  2 07:47:09 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  2 07:47:09 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  2 07:47:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e42 e42: 3 total, 3 up, 3 in
Oct  2 07:47:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct  2 07:47:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/3234087284' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:47:10 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct  2 07:47:10 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct  2 07:47:10 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:47:10 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.101:0/2318512383' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:47:10 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:47:10 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.102:0/3234087284' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:47:10 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:47:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e43 e43: 3 total, 3 up, 3 in
Oct  2 07:47:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct  2 07:47:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/3234087284' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:47:11 np0005466031 ceph-mon[76340]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  2 07:47:11 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:11 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  2 07:47:11 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  2 07:47:11 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  2 07:47:11 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:47:11 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.101:0/2318512383' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:47:11 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:47:11 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.102:0/3234087284' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:47:11 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:47:11 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e44 e44: 3 total, 3 up, 3 in
Oct  2 07:47:12 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct  2 07:47:12 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct  2 07:47:12 np0005466031 radosgw[82465]: LDAP not started since no server URIs were provided in the configuration.
Oct  2 07:47:12 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-2-tsbazp[82461]: 2025-10-02T11:47:12.701+0000 7f1d54f43940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct  2 07:47:12 np0005466031 radosgw[82465]: framework: beast
Oct  2 07:47:12 np0005466031 radosgw[82465]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct  2 07:47:12 np0005466031 radosgw[82465]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct  2 07:47:12 np0005466031 radosgw[82465]: starting handler: beast
Oct  2 07:47:12 np0005466031 radosgw[82465]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:47:12 np0005466031 radosgw[82465]: mgrc service_daemon_register rgw.24145 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.102:8082,frontend_type#0=beast,hostname=compute-2,id=rgw.compute-2.tsbazp,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=16ba9875-e611-4c67-897a-e19079014af6,zone_name=default,zonegroup_id=407d395c-624c-4136-be08-de285eb61d42,zonegroup_name=default}
Oct  2 07:47:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 07:47:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:12.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 07:47:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:13 np0005466031 ceph-mon[76340]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  2 07:47:13 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  2 07:47:13 np0005466031 ceph-mon[76340]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  2 07:47:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:14 np0005466031 ceph-mon[76340]: Deploying daemon haproxy.rgw.default.compute-2.zptkij on compute-2
Oct  2 07:47:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:14.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:15 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Oct  2 07:47:15 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Oct  2 07:47:16 np0005466031 podman[82677]: 2025-10-02 11:47:16.422571039 +0000 UTC m=+3.759755854 container create 2f8b6ca382751f0720d0fedd93d231abfe71a04e4050f0565611fecfc5669e8f (image=quay.io/ceph/haproxy:2.3, name=affectionate_wilson)
Oct  2 07:47:16 np0005466031 systemd[71759]: Starting Mark boot as successful...
Oct  2 07:47:16 np0005466031 systemd[1]: Started libpod-conmon-2f8b6ca382751f0720d0fedd93d231abfe71a04e4050f0565611fecfc5669e8f.scope.
Oct  2 07:47:16 np0005466031 systemd[71759]: Finished Mark boot as successful.
Oct  2 07:47:16 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:47:16 np0005466031 podman[82677]: 2025-10-02 11:47:16.480585405 +0000 UTC m=+3.817770250 container init 2f8b6ca382751f0720d0fedd93d231abfe71a04e4050f0565611fecfc5669e8f (image=quay.io/ceph/haproxy:2.3, name=affectionate_wilson)
Oct  2 07:47:16 np0005466031 podman[82677]: 2025-10-02 11:47:16.488815523 +0000 UTC m=+3.826000338 container start 2f8b6ca382751f0720d0fedd93d231abfe71a04e4050f0565611fecfc5669e8f (image=quay.io/ceph/haproxy:2.3, name=affectionate_wilson)
Oct  2 07:47:16 np0005466031 podman[82677]: 2025-10-02 11:47:16.49182375 +0000 UTC m=+3.829008565 container attach 2f8b6ca382751f0720d0fedd93d231abfe71a04e4050f0565611fecfc5669e8f (image=quay.io/ceph/haproxy:2.3, name=affectionate_wilson)
Oct  2 07:47:16 np0005466031 podman[82677]: 2025-10-02 11:47:16.405031992 +0000 UTC m=+3.742216827 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  2 07:47:16 np0005466031 systemd[1]: libpod-2f8b6ca382751f0720d0fedd93d231abfe71a04e4050f0565611fecfc5669e8f.scope: Deactivated successfully.
Oct  2 07:47:16 np0005466031 affectionate_wilson[83337]: 0 0
Oct  2 07:47:16 np0005466031 conmon[83337]: conmon 2f8b6ca382751f0720d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f8b6ca382751f0720d0fedd93d231abfe71a04e4050f0565611fecfc5669e8f.scope/container/memory.events
Oct  2 07:47:16 np0005466031 podman[82677]: 2025-10-02 11:47:16.497118363 +0000 UTC m=+3.834303178 container died 2f8b6ca382751f0720d0fedd93d231abfe71a04e4050f0565611fecfc5669e8f (image=quay.io/ceph/haproxy:2.3, name=affectionate_wilson)
Oct  2 07:47:16 np0005466031 systemd[1]: var-lib-containers-storage-overlay-eb510a6d843e562e25032fcc43df057fb068eba0282f0cd1c0c23ade63976c27-merged.mount: Deactivated successfully.
Oct  2 07:47:16 np0005466031 podman[82677]: 2025-10-02 11:47:16.531418864 +0000 UTC m=+3.868603679 container remove 2f8b6ca382751f0720d0fedd93d231abfe71a04e4050f0565611fecfc5669e8f (image=quay.io/ceph/haproxy:2.3, name=affectionate_wilson)
Oct  2 07:47:16 np0005466031 systemd[1]: libpod-conmon-2f8b6ca382751f0720d0fedd93d231abfe71a04e4050f0565611fecfc5669e8f.scope: Deactivated successfully.
Oct  2 07:47:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:16 np0005466031 systemd[1]: Reloading.
Oct  2 07:47:16 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:16 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:16.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:16 np0005466031 systemd[1]: Reloading.
Oct  2 07:47:16 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:16 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:17 np0005466031 systemd[1]: Starting Ceph haproxy.rgw.default.compute-2.zptkij for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:47:17 np0005466031 podman[83479]: 2025-10-02 11:47:17.408252436 +0000 UTC m=+0.031920163 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  2 07:47:17 np0005466031 podman[83479]: 2025-10-02 11:47:17.53230018 +0000 UTC m=+0.155967917 container create f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:47:17 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e9cb17d7c8f0ff1eab6311c0bc256dadd255635049a2aebf99306e9666562b/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:17 np0005466031 podman[83479]: 2025-10-02 11:47:17.792114626 +0000 UTC m=+0.415782353 container init f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:47:17 np0005466031 podman[83479]: 2025-10-02 11:47:17.799388567 +0000 UTC m=+0.423056274 container start f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:47:17 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij[83494]: [NOTICE] 274/114717 (2) : New worker #1 (4) forked
Oct  2 07:47:17 np0005466031 bash[83479]: f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d
Oct  2 07:47:17 np0005466031 systemd[1]: Started Ceph haproxy.rgw.default.compute-2.zptkij for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:47:18 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.4 deep-scrub starts
Oct  2 07:47:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:18 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.4 deep-scrub ok
Oct  2 07:47:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:18.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:47:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:19.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:47:20 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:20 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:20 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:20.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:47:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:21.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:47:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:21 np0005466031 ceph-mon[76340]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  2 07:47:21 np0005466031 ceph-mon[76340]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  2 07:47:21 np0005466031 ceph-mon[76340]: Deploying daemon keepalived.rgw.default.compute-2.emwnjv on compute-2
Oct  2 07:47:22 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct  2 07:47:22 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct  2 07:47:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:22.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:23.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:23 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct  2 07:47:23 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct  2 07:47:24 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct  2 07:47:24 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct  2 07:47:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:24.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:25.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:25 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct  2 07:47:25 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct  2 07:47:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:26 np0005466031 podman[83651]: 2025-10-02 11:47:26.203806419 +0000 UTC m=+6.875912423 container create 2026013d2482ab96acdaad4a5a29519727f73a29b8cfcd8031db7b55646d9272 (image=quay.io/ceph/keepalived:2.2.4, name=pensive_galois, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Oct  2 07:47:26 np0005466031 podman[83651]: 2025-10-02 11:47:26.1349652 +0000 UTC m=+6.807071194 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  2 07:47:26 np0005466031 systemd[1]: Started libpod-conmon-2026013d2482ab96acdaad4a5a29519727f73a29b8cfcd8031db7b55646d9272.scope.
Oct  2 07:47:26 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:47:26 np0005466031 podman[83651]: 2025-10-02 11:47:26.380517424 +0000 UTC m=+7.052623408 container init 2026013d2482ab96acdaad4a5a29519727f73a29b8cfcd8031db7b55646d9272 (image=quay.io/ceph/keepalived:2.2.4, name=pensive_galois, release=1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Oct  2 07:47:26 np0005466031 podman[83651]: 2025-10-02 11:47:26.387613449 +0000 UTC m=+7.059719403 container start 2026013d2482ab96acdaad4a5a29519727f73a29b8cfcd8031db7b55646d9272 (image=quay.io/ceph/keepalived:2.2.4, name=pensive_galois, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=keepalived, version=2.2.4, architecture=x86_64, release=1793, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.buildah.version=1.28.2, com.redhat.component=keepalived-container)
Oct  2 07:47:26 np0005466031 pensive_galois[83750]: 0 0
Oct  2 07:47:26 np0005466031 systemd[1]: libpod-2026013d2482ab96acdaad4a5a29519727f73a29b8cfcd8031db7b55646d9272.scope: Deactivated successfully.
Oct  2 07:47:26 np0005466031 podman[83651]: 2025-10-02 11:47:26.528618583 +0000 UTC m=+7.200724567 container attach 2026013d2482ab96acdaad4a5a29519727f73a29b8cfcd8031db7b55646d9272 (image=quay.io/ceph/keepalived:2.2.4, name=pensive_galois, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, build-date=2023-02-22T09:23:20, version=2.2.4, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2)
Oct  2 07:47:26 np0005466031 podman[83651]: 2025-10-02 11:47:26.529007464 +0000 UTC m=+7.201113428 container died 2026013d2482ab96acdaad4a5a29519727f73a29b8cfcd8031db7b55646d9272 (image=quay.io/ceph/keepalived:2.2.4, name=pensive_galois, io.openshift.expose-services=, distribution-scope=public, release=1793, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, description=keepalived for Ceph, com.redhat.component=keepalived-container, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Oct  2 07:47:26 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct  2 07:47:26 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct  2 07:47:26 np0005466031 systemd[1]: var-lib-containers-storage-overlay-a639a1f796cefc7d6383a0b142192ddf927c79f1d7a6655bdf277fac6c67538f-merged.mount: Deactivated successfully.
Oct  2 07:47:26 np0005466031 podman[83651]: 2025-10-02 11:47:26.712671379 +0000 UTC m=+7.384777343 container remove 2026013d2482ab96acdaad4a5a29519727f73a29b8cfcd8031db7b55646d9272 (image=quay.io/ceph/keepalived:2.2.4, name=pensive_galois, build-date=2023-02-22T09:23:20, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, release=1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct  2 07:47:26 np0005466031 systemd[1]: Reloading.
Oct  2 07:47:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:26 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:26 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:27 np0005466031 systemd[1]: libpod-conmon-2026013d2482ab96acdaad4a5a29519727f73a29b8cfcd8031db7b55646d9272.scope: Deactivated successfully.
Oct  2 07:47:27 np0005466031 systemd[1]: Reloading.
Oct  2 07:47:27 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:27 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:27.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:27 np0005466031 systemd[1]: Starting Ceph keepalived.rgw.default.compute-2.emwnjv for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:47:27 np0005466031 podman[83896]: 2025-10-02 11:47:27.548706603 +0000 UTC m=+0.049304115 container create ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, release=1793, vendor=Red Hat, Inc.)
Oct  2 07:47:27 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304518e496ab564e6300c4553f2458a45171a7b33197b288528a316b5a88538a/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:27 np0005466031 podman[83896]: 2025-10-02 11:47:27.610475448 +0000 UTC m=+0.111072990 container init ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, version=2.2.4, name=keepalived, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2)
Oct  2 07:47:27 np0005466031 podman[83896]: 2025-10-02 11:47:27.616586225 +0000 UTC m=+0.117183747 container start ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, vcs-type=git, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, architecture=x86_64, build-date=2023-02-22T09:23:20, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  2 07:47:27 np0005466031 podman[83896]: 2025-10-02 11:47:27.524772282 +0000 UTC m=+0.025369814 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  2 07:47:27 np0005466031 bash[83896]: ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f
Oct  2 07:47:27 np0005466031 systemd[1]: Started Ceph keepalived.rgw.default.compute-2.emwnjv for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:47:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv[83911]: Thu Oct  2 11:47:27 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct  2 07:47:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv[83911]: Thu Oct  2 11:47:27 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct  2 07:47:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv[83911]: Thu Oct  2 11:47:27 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct  2 07:47:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv[83911]: Thu Oct  2 11:47:27 2025: Configuration file /etc/keepalived/keepalived.conf
Oct  2 07:47:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv[83911]: Thu Oct  2 11:47:27 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct  2 07:47:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv[83911]: Thu Oct  2 11:47:27 2025: Starting VRRP child process, pid=4
Oct  2 07:47:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv[83911]: Thu Oct  2 11:47:27 2025: Startup complete
Oct  2 07:47:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv[83911]: Thu Oct  2 11:47:27 2025: (VI_0) Entering BACKUP STATE (init)
Oct  2 07:47:27 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv[83911]: Thu Oct  2 11:47:27 2025: VRRP_Script(check_backend) succeeded
Oct  2 07:47:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:28 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.18 deep-scrub starts
Oct  2 07:47:28 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.18 deep-scrub ok
Oct  2 07:47:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:28.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:29 np0005466031 ceph-mon[76340]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  2 07:47:29 np0005466031 ceph-mon[76340]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  2 07:47:29 np0005466031 ceph-mon[76340]: Deploying daemon keepalived.rgw.default.compute-0.nghmbz on compute-0
Oct  2 07:47:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:29.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:30.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:47:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:31.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:47:31 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv[83911]: Thu Oct  2 11:47:31 2025: (VI_0) Entering MASTER STATE
Oct  2 07:47:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:32.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:32 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:32 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:32 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:32 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:47:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:33.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:47:33 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct  2 07:47:33 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct  2 07:47:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct  2 07:47:34 np0005466031 podman[84198]: 2025-10-02 11:47:34.317052135 +0000 UTC m=+0.452178468 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 07:47:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e45 e45: 3 total, 3 up, 3 in
Oct  2 07:47:34 np0005466031 podman[84198]: 2025-10-02 11:47:34.434089283 +0000 UTC m=+0.569215586 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:47:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:34.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:34 np0005466031 podman[84336]: 2025-10-02 11:47:34.893749219 +0000 UTC m=+0.041412481 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:47:34 np0005466031 podman[84336]: 2025-10-02 11:47:34.903801181 +0000 UTC m=+0.051464423 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:47:35 np0005466031 podman[84402]: 2025-10-02 11:47:35.086422986 +0000 UTC m=+0.049770405 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, vcs-type=git, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20)
Oct  2 07:47:35 np0005466031 podman[84402]: 2025-10-02 11:47:35.095891431 +0000 UTC m=+0.059238820 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vcs-type=git, name=keepalived, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, description=keepalived for Ceph, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  2 07:47:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:35.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e46 e46: 3 total, 3 up, 3 in
Oct  2 07:47:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct  2 07:47:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:47:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:47:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:47:35 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv[83911]: Thu Oct  2 11:47:35 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Oct  2 07:47:35 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv[83911]: Thu Oct  2 11:47:35 2025: (VI_0) Entering BACKUP STATE
Oct  2 07:47:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e47 e47: 3 total, 3 up, 3 in
Oct  2 07:47:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:47:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:47:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct  2 07:47:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:47:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:47:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct  2 07:47:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:47:36 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct  2 07:47:36 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct  2 07:47:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:36.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:37.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e48 e48: 3 total, 3 up, 3 in
Oct  2 07:47:38 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct  2 07:47:38 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct  2 07:47:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e49 e49: 3 total, 3 up, 3 in
Oct  2 07:47:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:38 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:47:38 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:47:38 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:38 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:38 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:47:38 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:47:38 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:47:38 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:47:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:47:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:47:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:39.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e50 e50: 3 total, 3 up, 3 in
Oct  2 07:47:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:47:40 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct  2 07:47:40 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct  2 07:47:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e51 e51: 3 total, 3 up, 3 in
Oct  2 07:47:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:40.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:41.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e52 e52: 3 total, 3 up, 3 in
Oct  2 07:47:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:47:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:47:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:42 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct  2 07:47:42 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct  2 07:47:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dtavud", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  2 07:47:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dtavud", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  2 07:47:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:42.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:43 np0005466031 podman[84580]: 2025-10-02 11:47:43.046772367 +0000 UTC m=+0.051065422 container create 9be6553c53c21d3521e124ba4eb173870fc8054c4afdaebebbc5233db6712963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:47:43 np0005466031 systemd[1]: Started libpod-conmon-9be6553c53c21d3521e124ba4eb173870fc8054c4afdaebebbc5233db6712963.scope.
Oct  2 07:47:43 np0005466031 podman[84580]: 2025-10-02 11:47:43.024060931 +0000 UTC m=+0.028353976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:47:43 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:47:43 np0005466031 podman[84580]: 2025-10-02 11:47:43.153484436 +0000 UTC m=+0.157777541 container init 9be6553c53c21d3521e124ba4eb173870fc8054c4afdaebebbc5233db6712963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct  2 07:47:43 np0005466031 podman[84580]: 2025-10-02 11:47:43.164136585 +0000 UTC m=+0.168429620 container start 9be6553c53c21d3521e124ba4eb173870fc8054c4afdaebebbc5233db6712963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclean, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:47:43 np0005466031 podman[84580]: 2025-10-02 11:47:43.16827436 +0000 UTC m=+0.172567405 container attach 9be6553c53c21d3521e124ba4eb173870fc8054c4afdaebebbc5233db6712963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclean, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:47:43 np0005466031 confident_mclean[84597]: 167 167
Oct  2 07:47:43 np0005466031 systemd[1]: libpod-9be6553c53c21d3521e124ba4eb173870fc8054c4afdaebebbc5233db6712963.scope: Deactivated successfully.
Oct  2 07:47:43 np0005466031 conmon[84597]: conmon 9be6553c53c21d3521e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9be6553c53c21d3521e124ba4eb173870fc8054c4afdaebebbc5233db6712963.scope/container/memory.events
Oct  2 07:47:43 np0005466031 podman[84580]: 2025-10-02 11:47:43.173107476 +0000 UTC m=+0.177400491 container died 9be6553c53c21d3521e124ba4eb173870fc8054c4afdaebebbc5233db6712963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclean, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 07:47:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:47:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:43.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:47:43 np0005466031 systemd[1]: var-lib-containers-storage-overlay-985e8eb57ef18b3abeeff6a74bbddc5dc69fcc985147f799339e2a1d7699c348-merged.mount: Deactivated successfully.
Oct  2 07:47:43 np0005466031 podman[84580]: 2025-10-02 11:47:43.214607218 +0000 UTC m=+0.218900233 container remove 9be6553c53c21d3521e124ba4eb173870fc8054c4afdaebebbc5233db6712963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mclean, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:47:43 np0005466031 systemd[1]: libpod-conmon-9be6553c53c21d3521e124ba4eb173870fc8054c4afdaebebbc5233db6712963.scope: Deactivated successfully.
Oct  2 07:47:43 np0005466031 systemd[1]: Reloading.
Oct  2 07:47:43 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:43 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:43 np0005466031 systemd[1]: Reloading.
Oct  2 07:47:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:43 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:43 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:43 np0005466031 ceph-mon[76340]: Deploying daemon mds.cephfs.compute-2.dtavud on compute-2
Oct  2 07:47:43 np0005466031 systemd[1]: Starting Ceph mds.cephfs.compute-2.dtavud for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:47:44 np0005466031 podman[84742]: 2025-10-02 11:47:44.124356012 +0000 UTC m=+0.037686497 container create 731b55d89019c498d09fa4a23911243ab9319f58034c21d402656dc06d0b8154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-2-dtavud, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 07:47:44 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1548fce252e6a5210050ebb9f6d72928c24954dffe8be7dde2d399ba9f558521/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:44 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1548fce252e6a5210050ebb9f6d72928c24954dffe8be7dde2d399ba9f558521/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:44 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1548fce252e6a5210050ebb9f6d72928c24954dffe8be7dde2d399ba9f558521/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:44 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1548fce252e6a5210050ebb9f6d72928c24954dffe8be7dde2d399ba9f558521/merged/var/lib/ceph/mds/ceph-cephfs.compute-2.dtavud supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:44 np0005466031 podman[84742]: 2025-10-02 11:47:44.107977833 +0000 UTC m=+0.021308338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:47:44 np0005466031 podman[84742]: 2025-10-02 11:47:44.247003307 +0000 UTC m=+0.160333842 container init 731b55d89019c498d09fa4a23911243ab9319f58034c21d402656dc06d0b8154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-2-dtavud, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 07:47:44 np0005466031 podman[84742]: 2025-10-02 11:47:44.251842943 +0000 UTC m=+0.165173438 container start 731b55d89019c498d09fa4a23911243ab9319f58034c21d402656dc06d0b8154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-2-dtavud, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 07:47:44 np0005466031 bash[84742]: 731b55d89019c498d09fa4a23911243ab9319f58034c21d402656dc06d0b8154
Oct  2 07:47:44 np0005466031 systemd[1]: Started Ceph mds.cephfs.compute-2.dtavud for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: main not setting numa affinity
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: pidfile_write: ignore empty --pid-file
Oct  2 07:47:44 np0005466031 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-2-dtavud[84758]: starting mds.cephfs.compute-2.dtavud at 
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud Updating MDS map to version 2 from mon.1
Oct  2 07:47:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e3 new map
Oct  2 07:47:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0113#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:44.341938+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.dtavud{0:24154} state up:creating seq 1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud Updating MDS map to version 3 from mon.1
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.3 handle_mds_map i am now mds.0.3
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.cache creating system inode with ino:0x1
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.cache creating system inode with ino:0x100
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.cache creating system inode with ino:0x600
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.cache creating system inode with ino:0x601
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.cache creating system inode with ino:0x602
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.cache creating system inode with ino:0x603
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.cache creating system inode with ino:0x604
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.cache creating system inode with ino:0x605
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.cache creating system inode with ino:0x606
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.cache creating system inode with ino:0x607
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.cache creating system inode with ino:0x608
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.cache creating system inode with ino:0x609
Oct  2 07:47:44 np0005466031 ceph-mds[84762]: mds.0.3 creating_done
Oct  2 07:47:44 np0005466031 ceph-mon[76340]: daemon mds.cephfs.compute-2.dtavud assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  2 07:47:44 np0005466031 ceph-mon[76340]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  2 07:47:44 np0005466031 ceph-mon[76340]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  2 07:47:44 np0005466031 ceph-mon[76340]: Cluster is now healthy
Oct  2 07:47:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:44 np0005466031 ceph-mon[76340]: daemon mds.cephfs.compute-2.dtavud is now active in filesystem cephfs as rank 0
Oct  2 07:47:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yqiqns", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  2 07:47:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yqiqns", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  2 07:47:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:47:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:44.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:47:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:45.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e4 new map
Oct  2 07:47:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:45.438767+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct  2 07:47:45 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud Updating MDS map to version 4 from mon.1
Oct  2 07:47:45 np0005466031 ceph-mds[84762]: mds.0.3 handle_mds_map i am now mds.0.3
Oct  2 07:47:45 np0005466031 ceph-mds[84762]: mds.0.3 handle_mds_map state change up:creating --> up:active
Oct  2 07:47:45 np0005466031 ceph-mds[84762]: mds.0.3 recovery_done -- successful recovery!
Oct  2 07:47:45 np0005466031 ceph-mds[84762]: mds.0.3 active_start
Oct  2 07:47:45 np0005466031 ceph-mon[76340]: Deploying daemon mds.cephfs.compute-0.yqiqns on compute-0
Oct  2 07:47:46 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Oct  2 07:47:46 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Oct  2 07:47:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e5 new map
Oct  2 07:47:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:45.438767+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:47:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:46.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e6 new map
Oct  2 07:47:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e6 print_map#012e6#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:45.438767+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:47:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bhscyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  2 07:47:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bhscyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  2 07:47:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:47.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:48 np0005466031 ceph-mon[76340]: Deploying daemon mds.cephfs.compute-1.bhscyq on compute-1
Oct  2 07:47:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e53 e53: 3 total, 3 up, 3 in
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[11.17( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[11.16( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.15( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[11.13( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[7.1f( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[10.12( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.a( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[6.5( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.b( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.2( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[6.1( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.f( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.5( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[6.3( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[11.e( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.d( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[10.f( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[11.19( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[7.16( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[10.3( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.9( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[6.7( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[11.a( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.16( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[10.11( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.3( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[10.1( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.11( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[10.10( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[7.1d( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.1f( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.6( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[10.4( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.c( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[8.1c( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[10.1e( empty local-lis/les=0/0 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 53 pg[7.14( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct  2 07:47:48 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct  2 07:47:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:47:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:48.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:47:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:47:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:49.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e7 new map
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e7 print_map#012e7#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:49.237571+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.bhscyq{-1:24155} state up:standby seq 1 addr [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:47:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e54 e54: 3 total, 3 up, 3 in
Oct  2 07:47:49 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud Updating MDS map to version 7 from mon.1
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[7.14( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.1c( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.c( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.1f( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[10.1e( v 41'48 (0'0,41'48] local-lis/les=53/54 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[10.10( v 41'48 (0'0,41'48] local-lis/les=53/54 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[10.4( v 41'48 (0'0,41'48] local-lis/les=53/54 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[7.1d( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[10.1( v 41'48 (0'0,41'48] local-lis/les=53/54 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.11( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[10.11( v 41'48 (0'0,41'48] local-lis/les=53/54 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.3( v 37'4 (0'0,37'4] local-lis/les=53/54 n=1 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.6( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.16( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.9( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[7.16( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[6.f( v 52'5 lc 52'1 (0'0,52'5] local-lis/les=53/54 n=3 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=52'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[10.3( v 52'51 lc 41'43 (0'0,52'51] local-lis/les=53/54 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=52'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[6.d( v 52'3 lc 52'1 (0'0,52'3] local-lis/les=53/54 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=52'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[10.f( v 41'48 (0'0,41'48] local-lis/les=53/54 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[11.a( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[6.7( v 52'2 lc 52'1 (0'0,52'2] local-lis/les=53/54 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=52'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.d( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[6.3( v 52'2 lc 0'0 (0'0,52'2] local-lis/les=53/54 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=52'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[6.b( v 52'3 lc 0'0 (0'0,52'3] local-lis/les=53/54 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=52'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[11.e( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[6.1( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.2( v 37'4 (0'0,37'4] local-lis/les=53/54 n=1 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.f( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.5( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.b( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[6.5( v 52'3 lc 52'1 (0'0,52'3] local-lis/les=53/54 n=2 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=52'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[10.12( v 41'48 (0'0,41'48] local-lis/les=53/54 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[7.1f( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[6.9( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[11.13( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.a( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[8.15( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [2] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[11.16( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466031 podman[85018]: 2025-10-02 11:47:49.568489869 +0000 UTC m=+0.117234185 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:47:49 np0005466031 podman[85018]: 2025-10-02 11:47:49.670921609 +0000 UTC m=+0.219665915 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:47:50 np0005466031 podman[85158]: 2025-10-02 11:47:50.245488623 +0000 UTC m=+0.084433016 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:47:50 np0005466031 podman[85158]: 2025-10-02 11:47:50.283060426 +0000 UTC m=+0.122004789 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:47:50 np0005466031 podman[85223]: 2025-10-02 11:47:50.521620398 +0000 UTC m=+0.052710607 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, version=2.2.4, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, distribution-scope=public, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, name=keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct  2 07:47:50 np0005466031 podman[85223]: 2025-10-02 11:47:50.541140255 +0000 UTC m=+0.072230464 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, name=keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4)
Oct  2 07:47:50 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:50 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:50.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e8 new map
Oct  2 07:47:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e8 print_map#012e8#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:49.237571+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.bhscyq{-1:24155} state up:standby seq 1 addr [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:47:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:51.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:47:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:47:52 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.10 deep-scrub starts
Oct  2 07:47:52 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.10 deep-scrub ok
Oct  2 07:47:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e9 new map
Oct  2 07:47:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:49.237571+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.bhscyq{-1:24155} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:47:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:47:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:52.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:47:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:53.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:54 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.c scrub starts
Oct  2 07:47:54 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.c scrub ok
Oct  2 07:47:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:54.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:55.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:55 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.b scrub starts
Oct  2 07:47:55 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.b scrub ok
Oct  2 07:47:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  2 07:47:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  2 07:47:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e55 e55: 3 total, 3 up, 3 in
Oct  2 07:47:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:56.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:56 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  2 07:47:56 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  2 07:47:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:47:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:57.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:47:58 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  2 07:47:58 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  2 07:47:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e56 e56: 3 total, 3 up, 3 in
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56) [2] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56) [2] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56) [2] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56) [2] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56) [2] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56) [2] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56) [2] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=56) [2] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[6.7( v 52'2 (0'0,52'2] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56 pruub=15.245525360s) [0] r=-1 lpr=56 pi=[53,56)/1 crt=52'2 mlcod 52'2 active pruub 91.092933655s@ mbc={255={}}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[6.f( v 52'5 (0'0,52'5] local-lis/les=53/54 n=3 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56 pruub=15.245415688s) [0] r=-1 lpr=56 pi=[53,56)/1 crt=52'5 mlcod 52'5 active pruub 91.092849731s@ mbc={255={}}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[6.7( v 52'2 (0'0,52'2] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56 pruub=15.245448112s) [0] r=-1 lpr=56 pi=[53,56)/1 crt=52'2 mlcod 0'0 unknown NOTIFY pruub 91.092933655s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[6.f( v 52'5 (0'0,52'5] local-lis/les=53/54 n=3 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56 pruub=15.245210648s) [0] r=-1 lpr=56 pi=[53,56)/1 crt=52'5 mlcod 0'0 unknown NOTIFY pruub 91.092849731s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[6.3( v 52'2 (0'0,52'2] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56 pruub=15.245177269s) [0] r=-1 lpr=56 pi=[53,56)/1 crt=52'2 mlcod 52'2 active pruub 91.092964172s@ mbc={255={}}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[6.3( v 52'2 (0'0,52'2] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56 pruub=15.245119095s) [0] r=-1 lpr=56 pi=[53,56)/1 crt=52'2 mlcod 0'0 unknown NOTIFY pruub 91.092964172s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[6.b( v 52'3 (0'0,52'3] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56 pruub=15.244920731s) [0] r=-1 lpr=56 pi=[53,56)/1 crt=52'3 mlcod 52'3 active pruub 91.093093872s@ mbc={255={}}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:58 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 56 pg[6.b( v 52'3 (0'0,52'3] local-lis/les=53/54 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56 pruub=15.244877815s) [0] r=-1 lpr=56 pi=[53,56)/1 crt=52'3 mlcod 0'0 unknown NOTIFY pruub 91.093093872s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:58.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:47:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:47:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:59.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:47:59 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  2 07:47:59 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  2 07:47:59 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:59 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:59 np0005466031 ceph-mon[76340]: Reconfiguring mon.compute-0 (monmap changed)...
Oct  2 07:47:59 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 07:47:59 np0005466031 ceph-mon[76340]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  2 07:47:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e57 e57: 3 total, 3 up, 3 in
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=57) [2]/[1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Oct  2 07:47:59 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Oct  2 07:48:00 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:00 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:00 np0005466031 ceph-mon[76340]: Reconfiguring mgr.compute-0.unmtoh (monmap changed)...
Oct  2 07:48:00 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.unmtoh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  2 07:48:00 np0005466031 ceph-mon[76340]: Reconfiguring daemon mgr.compute-0.unmtoh on compute-0
Oct  2 07:48:00 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:00 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:00 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 07:48:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e58 e58: 3 total, 3 up, 3 in
Oct  2 07:48:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:00.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:01.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:01 np0005466031 ceph-mon[76340]: Reconfiguring crash.compute-0 (monmap changed)...
Oct  2 07:48:01 np0005466031 ceph-mon[76340]: Reconfiguring daemon crash.compute-0 on compute-0
Oct  2 07:48:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  2 07:48:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e59 e59: 3 total, 3 up, 3 in
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.3( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.b( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.3( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.b( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.17( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.17( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.13( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.13( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.7( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.7( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:01 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 59 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e60 e60: 3 total, 3 up, 3 in
Oct  2 07:48:02 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 60 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:02 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 60 pg[9.b( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:02 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 60 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:02 np0005466031 ceph-mon[76340]: Reconfiguring osd.1 (monmap changed)...
Oct  2 07:48:02 np0005466031 ceph-mon[76340]: Reconfiguring daemon osd.1 on compute-0
Oct  2 07:48:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:02 np0005466031 ceph-mon[76340]: Reconfiguring crash.compute-1 (monmap changed)...
Oct  2 07:48:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 07:48:02 np0005466031 ceph-mon[76340]: Reconfiguring daemon crash.compute-1 on compute-1
Oct  2 07:48:02 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 60 pg[9.17( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:02 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 60 pg[9.13( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:02 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 60 pg[9.7( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:02 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 60 pg[9.3( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=6 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:02 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 60 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=5 ec=49/38 lis/c=57/49 les/c/f=58/50/0 sis=59) [2] r=0 lpr=59 pi=[49,59)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:02.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:48:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:03.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:48:03 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:03 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:03 np0005466031 ceph-mon[76340]: Reconfiguring osd.0 (monmap changed)...
Oct  2 07:48:03 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  2 07:48:03 np0005466031 ceph-mon[76340]: Reconfiguring daemon osd.0 on compute-1
Oct  2 07:48:03 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:03 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:03 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 07:48:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:03 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct  2 07:48:03 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct  2 07:48:04 np0005466031 ceph-mon[76340]: Reconfiguring mon.compute-1 (monmap changed)...
Oct  2 07:48:04 np0005466031 ceph-mon[76340]: Reconfiguring daemon mon.compute-1 on compute-1
Oct  2 07:48:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 07:48:04 np0005466031 podman[85629]: 2025-10-02 11:48:04.542134763 +0000 UTC m=+0.039627381 container create fa20a5168042a5cc02489c975cf1d89dc98497d2e269405212924f0b894de9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 07:48:04 np0005466031 systemd[1]: Started libpod-conmon-fa20a5168042a5cc02489c975cf1d89dc98497d2e269405212924f0b894de9a9.scope.
Oct  2 07:48:04 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:48:04 np0005466031 podman[85629]: 2025-10-02 11:48:04.524459178 +0000 UTC m=+0.021951816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:48:04 np0005466031 podman[85629]: 2025-10-02 11:48:04.629607813 +0000 UTC m=+0.127100461 container init fa20a5168042a5cc02489c975cf1d89dc98497d2e269405212924f0b894de9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:48:04 np0005466031 podman[85629]: 2025-10-02 11:48:04.637691199 +0000 UTC m=+0.135183817 container start fa20a5168042a5cc02489c975cf1d89dc98497d2e269405212924f0b894de9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 07:48:04 np0005466031 podman[85629]: 2025-10-02 11:48:04.641024543 +0000 UTC m=+0.138517181 container attach fa20a5168042a5cc02489c975cf1d89dc98497d2e269405212924f0b894de9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:48:04 np0005466031 romantic_blackwell[85646]: 167 167
Oct  2 07:48:04 np0005466031 systemd[1]: libpod-fa20a5168042a5cc02489c975cf1d89dc98497d2e269405212924f0b894de9a9.scope: Deactivated successfully.
Oct  2 07:48:04 np0005466031 podman[85629]: 2025-10-02 11:48:04.643555994 +0000 UTC m=+0.141048622 container died fa20a5168042a5cc02489c975cf1d89dc98497d2e269405212924f0b894de9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:48:04 np0005466031 systemd[1]: var-lib-containers-storage-overlay-35189a518e11c139ff7d8ded3ece8c11ca1d8f45a57351c67d130aec8ece671d-merged.mount: Deactivated successfully.
Oct  2 07:48:04 np0005466031 podman[85629]: 2025-10-02 11:48:04.683922194 +0000 UTC m=+0.181414822 container remove fa20a5168042a5cc02489c975cf1d89dc98497d2e269405212924f0b894de9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_blackwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:48:04 np0005466031 systemd[1]: libpod-conmon-fa20a5168042a5cc02489c975cf1d89dc98497d2e269405212924f0b894de9a9.scope: Deactivated successfully.
Oct  2 07:48:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:04.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:05.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:05 np0005466031 podman[85838]: 2025-10-02 11:48:05.421512755 +0000 UTC m=+0.049278381 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 07:48:05 np0005466031 ceph-mon[76340]: Reconfiguring mon.compute-2 (monmap changed)...
Oct  2 07:48:05 np0005466031 ceph-mon[76340]: Reconfiguring daemon mon.compute-2 on compute-2
Oct  2 07:48:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:05 np0005466031 podman[85838]: 2025-10-02 11:48:05.542954677 +0000 UTC m=+0.170720323 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 07:48:06 np0005466031 podman[85976]: 2025-10-02 11:48:06.000736549 +0000 UTC m=+0.048025207 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:48:06 np0005466031 podman[85998]: 2025-10-02 11:48:06.063706573 +0000 UTC m=+0.050643879 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:48:06 np0005466031 podman[85976]: 2025-10-02 11:48:06.068065966 +0000 UTC m=+0.115354623 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:48:06 np0005466031 podman[86043]: 2025-10-02 11:48:06.261207986 +0000 UTC m=+0.061722360 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, release=1793)
Oct  2 07:48:06 np0005466031 podman[86043]: 2025-10-02 11:48:06.275171867 +0000 UTC m=+0.075686251 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, description=keepalived for Ceph, name=keepalived, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.buildah.version=1.28.2, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container)
Oct  2 07:48:06 np0005466031 systemd-logind[786]: Session 20 logged out. Waiting for processes to exit.
Oct  2 07:48:06 np0005466031 systemd[1]: session-20.scope: Deactivated successfully.
Oct  2 07:48:06 np0005466031 systemd[1]: session-20.scope: Consumed 7.943s CPU time.
Oct  2 07:48:06 np0005466031 systemd-logind[786]: Removed session 20.
Oct  2 07:48:06 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.d deep-scrub starts
Oct  2 07:48:06 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.d deep-scrub ok
Oct  2 07:48:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:06.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  2 07:48:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  2 07:48:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e61 e61: 3 total, 3 up, 3 in
Oct  2 07:48:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:07.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:07 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Oct  2 07:48:07 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Oct  2 07:48:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  2 07:48:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  2 07:48:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:48:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:48:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  2 07:48:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  2 07:48:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e62 e62: 3 total, 3 up, 3 in
Oct  2 07:48:08 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 62 pg[6.d( v 52'3 (0'0,52'3] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=12.920793533s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=52'3 mlcod 52'3 active pruub 99.093002319s@ mbc={255={}}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:08 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 62 pg[6.d( v 52'3 (0'0,52'3] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=12.920672417s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=52'3 mlcod 0'0 unknown NOTIFY pruub 99.093002319s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:08 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 62 pg[6.5( v 52'3 (0'0,52'3] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=12.920436859s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=52'3 mlcod 52'3 active pruub 99.093597412s@ mbc={255={}}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:08 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 62 pg[6.5( v 52'3 (0'0,52'3] local-lis/les=53/54 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=12.920343399s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=52'3 mlcod 0'0 unknown NOTIFY pruub 99.093597412s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:08.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:09.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:09 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  2 07:48:09 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  2 07:48:09 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 62 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=62) [2] r=0 lpr=62 pi=[49,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:09 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 62 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=62) [2] r=0 lpr=62 pi=[49,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:09 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 62 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=62) [2] r=0 lpr=62 pi=[49,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:09 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 62 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=62) [2] r=0 lpr=62 pi=[49,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e63 e63: 3 total, 3 up, 3 in
Oct  2 07:48:09 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 63 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:09 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 63 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:09 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 63 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:09 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 63 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:09 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 63 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:09 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 63 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:09 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 63 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:09 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 63 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e64 e64: 3 total, 3 up, 3 in
Oct  2 07:48:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:10.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:11.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e65 e65: 3 total, 3 up, 3 in
Oct  2 07:48:11 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 65 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65) [2] r=0 lpr=65 pi=[49,65)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:11 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 65 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65) [2] r=0 lpr=65 pi=[49,65)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:11 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 65 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65) [2] r=0 lpr=65 pi=[49,65)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:11 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 65 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65) [2] r=0 lpr=65 pi=[49,65)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:11 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 65 pg[9.5( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65) [2] r=0 lpr=65 pi=[49,65)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:11 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 65 pg[9.5( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65) [2] r=0 lpr=65 pi=[49,65)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:11 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 65 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65) [2] r=0 lpr=65 pi=[49,65)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:11 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 65 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65) [2] r=0 lpr=65 pi=[49,65)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e66 e66: 3 total, 3 up, 3 in
Oct  2 07:48:12 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 66 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=6 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65) [2] r=0 lpr=65 pi=[49,65)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:12 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 66 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=5 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65) [2] r=0 lpr=65 pi=[49,65)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:12 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 66 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=5 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65) [2] r=0 lpr=65 pi=[49,65)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:12 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 66 pg[9.5( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=6 ec=49/38 lis/c=63/49 les/c/f=64/50/0 sis=65) [2] r=0 lpr=65 pi=[49,65)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:12.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:13.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:14.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:14 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:14 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:15.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:15 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Oct  2 07:48:15 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Oct  2 07:48:15 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  2 07:48:15 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  2 07:48:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e67 e67: 3 total, 3 up, 3 in
Oct  2 07:48:16 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct  2 07:48:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:16.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:16 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct  2 07:48:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  2 07:48:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  2 07:48:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e68 e68: 3 total, 3 up, 3 in
Oct  2 07:48:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:17.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e69 e69: 3 total, 3 up, 3 in
Oct  2 07:48:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:18.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:18 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Oct  2 07:48:18 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Oct  2 07:48:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e70 e70: 3 total, 3 up, 3 in
Oct  2 07:48:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:19.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e71 e71: 3 total, 3 up, 3 in
Oct  2 07:48:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:20.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:20 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct  2 07:48:20 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct  2 07:48:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:21.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e72 e72: 3 total, 3 up, 3 in
Oct  2 07:48:22 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  2 07:48:22 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  2 07:48:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:22.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:22 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct  2 07:48:23 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct  2 07:48:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:23.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  2 07:48:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  2 07:48:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e73 e73: 3 total, 3 up, 3 in
Oct  2 07:48:24 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 73 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=73) [2] r=0 lpr=73 pi=[49,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:24 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 73 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=73) [2] r=0 lpr=73 pi=[49,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:24 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  2 07:48:24 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  2 07:48:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:24.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:25 np0005466031 systemd-logind[786]: New session 34 of user zuul.
Oct  2 07:48:25 np0005466031 systemd[1]: Started Session 34 of User zuul.
Oct  2 07:48:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:25.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e74 e74: 3 total, 3 up, 3 in
Oct  2 07:48:25 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[49,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:25 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[49,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:25 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[49,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:25 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[49,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:25 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  2 07:48:25 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  2 07:48:26 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct  2 07:48:26 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct  2 07:48:26 np0005466031 python3.9[86356]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:48:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e75 e75: 3 total, 3 up, 3 in
Oct  2 07:48:26 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 75 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=75) [2] r=0 lpr=75 pi=[49,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:26 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 75 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=75) [2] r=0 lpr=75 pi=[49,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:26 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 75 pg[6.9( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=75 pruub=11.033081055s) [1] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 active pruub 115.093978882s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:26 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 75 pg[6.9( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=75 pruub=11.032951355s) [1] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 115.093978882s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  2 07:48:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  2 07:48:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:26.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:27.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e76 e76: 3 total, 3 up, 3 in
Oct  2 07:48:27 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 76 pg[9.8( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=74/49 les/c/f=75/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:27 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 76 pg[9.8( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=74/49 les/c/f=75/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:27 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 76 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:27 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 76 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:27 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 76 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:27 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 76 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[49,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:27 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 76 pg[9.18( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=74/49 les/c/f=75/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:27 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 76 pg[9.18( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=74/49 les/c/f=75/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  2 07:48:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  2 07:48:27 np0005466031 python3.9[86571]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:48:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e77 e77: 3 total, 3 up, 3 in
Oct  2 07:48:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e77 crush map has features 3314933000854323200, adjusting msgr requires
Oct  2 07:48:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e77 crush map has features 432629239337189376, adjusting msgr requires
Oct  2 07:48:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e77 crush map has features 432629239337189376, adjusting msgr requires
Oct  2 07:48:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e77 crush map has features 432629239337189376, adjusting msgr requires
Oct  2 07:48:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:28 np0005466031 ceph-osd[79023]: osd.2 77 crush map has features 432629239337189376, adjusting msgr requires for clients
Oct  2 07:48:28 np0005466031 ceph-osd[79023]: osd.2 77 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Oct  2 07:48:28 np0005466031 ceph-osd[79023]: osd.2 77 crush map has features 3314933000854323200, adjusting msgr requires for osds
Oct  2 07:48:28 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 77 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=5 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=13.813757896s) [0] r=-1 lpr=77 pi=[59,77)/1 crt=44'1012 mlcod 0'0 active pruub 119.966606140s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:28 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 77 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=5 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=13.813622475s) [0] r=-1 lpr=77 pi=[59,77)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 119.966606140s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:28 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 77 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=6 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=13.808356285s) [0] r=-1 lpr=77 pi=[59,77)/1 crt=44'1012 mlcod 0'0 active pruub 119.961921692s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:28 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 77 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=6 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=13.808159828s) [0] r=-1 lpr=77 pi=[59,77)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 119.961921692s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:28 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 77 pg[9.18( v 44'1012 (0'0,44'1012] local-lis/les=76/77 n=5 ec=49/38 lis/c=74/49 les/c/f=75/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:28 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 77 pg[9.8( v 44'1012 (0'0,44'1012] local-lis/les=76/77 n=6 ec=49/38 lis/c=74/49 les/c/f=75/50/0 sis=76) [2] r=0 lpr=76 pi=[49,76)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  2 07:48:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  2 07:48:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.f", "id": [2, 0]}]: dispatch
Oct  2 07:48:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [2, 0]}]: dispatch
Oct  2 07:48:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:28.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:29.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e78 e78: 3 total, 3 up, 3 in
Oct  2 07:48:29 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 78 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=6 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] r=0 lpr=78 pi=[59,78)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:29 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 78 pg[9.9( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=76/49 les/c/f=77/50/0 sis=78) [2] r=0 lpr=78 pi=[49,78)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:29 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 78 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=6 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] r=0 lpr=78 pi=[59,78)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:29 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 78 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=5 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] r=0 lpr=78 pi=[59,78)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:29 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 78 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=5 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] r=0 lpr=78 pi=[59,78)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:29 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 78 pg[9.9( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=76/49 les/c/f=77/50/0 sis=78) [2] r=0 lpr=78 pi=[49,78)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:29 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 78 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=76/49 les/c/f=77/50/0 sis=78) [2] r=0 lpr=78 pi=[49,78)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:29 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 78 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=76/49 les/c/f=77/50/0 sis=78) [2] r=0 lpr=78 pi=[49,78)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  2 07:48:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  2 07:48:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.f", "id": [2, 0]}]': finished
Oct  2 07:48:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [2, 0]}]': finished
Oct  2 07:48:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e79 e79: 3 total, 3 up, 3 in
Oct  2 07:48:30 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 79 pg[9.9( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=6 ec=49/38 lis/c=76/49 les/c/f=77/50/0 sis=78) [2] r=0 lpr=78 pi=[49,78)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:30 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 79 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=5 ec=49/38 lis/c=76/49 les/c/f=77/50/0 sis=78) [2] r=0 lpr=78 pi=[49,78)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:30 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 79 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=5 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] async=[0] r=0 lpr=78 pi=[59,78)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:30 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 79 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=6 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] async=[0] r=0 lpr=78 pi=[59,78)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  2 07:48:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  2 07:48:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  2 07:48:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  2 07:48:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:30.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:31 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct  2 07:48:31 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct  2 07:48:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:31.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e80 e80: 3 total, 3 up, 3 in
Oct  2 07:48:31 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 80 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=5 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80 pruub=15.179935455s) [0] async=[0] r=-1 lpr=80 pi=[59,80)/1 crt=44'1012 mlcod 44'1012 active pruub 124.181076050s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:31 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 80 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=5 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80 pruub=15.179801941s) [0] r=-1 lpr=80 pi=[59,80)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 124.181076050s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:31 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 80 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=6 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80 pruub=15.179363251s) [0] async=[0] r=-1 lpr=80 pi=[59,80)/1 crt=44'1012 mlcod 44'1012 active pruub 124.181060791s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:31 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 80 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=6 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80 pruub=15.179241180s) [0] r=-1 lpr=80 pi=[59,80)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 124.181060791s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e81 e81: 3 total, 3 up, 3 in
Oct  2 07:48:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:32.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:33 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct  2 07:48:33 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct  2 07:48:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:33.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:34.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:35.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:35 np0005466031 systemd[1]: session-34.scope: Deactivated successfully.
Oct  2 07:48:35 np0005466031 systemd[1]: session-34.scope: Consumed 8.199s CPU time.
Oct  2 07:48:35 np0005466031 systemd-logind[786]: Session 34 logged out. Waiting for processes to exit.
Oct  2 07:48:35 np0005466031 systemd-logind[786]: Removed session 34.
Oct  2 07:48:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:36.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:37.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e82 e82: 3 total, 3 up, 3 in
Oct  2 07:48:37 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  2 07:48:37 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  2 07:48:38 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct  2 07:48:38 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct  2 07:48:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:38.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:38 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  2 07:48:38 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  2 07:48:39 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Oct  2 07:48:39 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Oct  2 07:48:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:39.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e83 e83: 3 total, 3 up, 3 in
Oct  2 07:48:40 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 83 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=6 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=83 pruub=12.471842766s) [0] r=-1 lpr=83 pi=[65,83)/1 crt=44'1012 mlcod 0'0 active pruub 130.027023315s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:40 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 83 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=5 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=83 pruub=12.471803665s) [0] r=-1 lpr=83 pi=[65,83)/1 crt=44'1012 mlcod 0'0 active pruub 130.027023315s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:40 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 83 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=6 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=83 pruub=12.471773148s) [0] r=-1 lpr=83 pi=[65,83)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 130.027023315s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:40 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 83 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=5 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=83 pruub=12.471718788s) [0] r=-1 lpr=83 pi=[65,83)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 130.027023315s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  2 07:48:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  2 07:48:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:40.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e84 e84: 3 total, 3 up, 3 in
Oct  2 07:48:41 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 84 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=5 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] r=0 lpr=84 pi=[65,84)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:41 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 84 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=5 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] r=0 lpr=84 pi=[65,84)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:41 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 84 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=6 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] r=0 lpr=84 pi=[65,84)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:41 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 84 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=6 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] r=0 lpr=84 pi=[65,84)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  2 07:48:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  2 07:48:41 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Oct  2 07:48:41 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Oct  2 07:48:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:41.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e85 e85: 3 total, 3 up, 3 in
Oct  2 07:48:42 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct  2 07:48:42 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct  2 07:48:42 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 85 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=84/85 n=5 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] async=[0] r=0 lpr=84 pi=[65,84)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:42 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 85 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=84/85 n=6 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] async=[0] r=0 lpr=84 pi=[65,84)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  2 07:48:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  2 07:48:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  2 07:48:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  2 07:48:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:42.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:43.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e86 e86: 3 total, 3 up, 3 in
Oct  2 07:48:43 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 86 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=84/85 n=5 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86 pruub=14.965180397s) [0] async=[0] r=-1 lpr=86 pi=[65,86)/1 crt=44'1012 mlcod 44'1012 active pruub 135.762588501s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:43 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 86 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=84/85 n=6 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86 pruub=14.965164185s) [0] async=[0] r=-1 lpr=86 pi=[65,86)/1 crt=44'1012 mlcod 44'1012 active pruub 135.762588501s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:43 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 86 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=84/85 n=5 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86 pruub=14.965044022s) [0] r=-1 lpr=86 pi=[65,86)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 135.762588501s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:43 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 86 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=84/85 n=6 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86 pruub=14.964843750s) [0] r=-1 lpr=86 pi=[65,86)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 135.762588501s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e87 e87: 3 total, 3 up, 3 in
Oct  2 07:48:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e87 crush map has features 3314933000852226048, adjusting msgr requires
Oct  2 07:48:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:48:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:48:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:48:44 np0005466031 ceph-osd[79023]: osd.2 87 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  2 07:48:44 np0005466031 ceph-osd[79023]: osd.2 87 crush map has features 288514051259236352 was 432629239337198081, adjusting msgr requires for mons
Oct  2 07:48:44 np0005466031 ceph-osd[79023]: osd.2 87 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  2 07:48:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  2 07:48:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  2 07:48:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:44.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:45.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e88 e88: 3 total, 3 up, 3 in
Oct  2 07:48:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  2 07:48:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  2 07:48:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e89 e89: 3 total, 3 up, 3 in
Oct  2 07:48:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  2 07:48:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:46.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:47.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:47 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  2 07:48:48 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  2 07:48:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e90 e90: 3 total, 3 up, 3 in
Oct  2 07:48:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:48.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:48:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:49.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:48:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e91 e91: 3 total, 3 up, 3 in
Oct  2 07:48:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  2 07:48:50 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct  2 07:48:50 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct  2 07:48:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e92 e92: 3 total, 3 up, 3 in
Oct  2 07:48:50 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  2 07:48:50 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  2 07:48:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:50.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:48:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:51.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:48:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e93 e93: 3 total, 3 up, 3 in
Oct  2 07:48:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e94 e94: 3 total, 3 up, 3 in
Oct  2 07:48:52 np0005466031 systemd-logind[786]: New session 35 of user zuul.
Oct  2 07:48:52 np0005466031 systemd[1]: Started Session 35 of User zuul.
Oct  2 07:48:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:52.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:53 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct  2 07:48:53 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct  2 07:48:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:48:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:53.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:48:53 np0005466031 python3.9[86843]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  2 07:48:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e95 e95: 3 total, 3 up, 3 in
Oct  2 07:48:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e96 e96: 3 total, 3 up, 3 in
Oct  2 07:48:54 np0005466031 python3.9[87068]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:48:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:54.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:55 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct  2 07:48:55 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct  2 07:48:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:55.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:55 np0005466031 python3.9[87224]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:48:56 np0005466031 python3.9[87378]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:48:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:48:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:56.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:48:57 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct  2 07:48:57 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct  2 07:48:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:48:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:57.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:48:57 np0005466031 python3.9[87532]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:48:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e97 e97: 3 total, 3 up, 3 in
Oct  2 07:48:58 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  2 07:48:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:58 np0005466031 python3.9[87683]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:48:58 np0005466031 network[87700]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:48:58 np0005466031 network[87701]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:48:58 np0005466031 network[87702]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:48:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:58.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:48:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:59.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:59 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Oct  2 07:48:59 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Oct  2 07:48:59 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  2 07:49:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e98 e98: 3 total, 3 up, 3 in
Oct  2 07:49:00 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  2 07:49:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:00.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:49:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:01.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:49:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  2 07:49:02 np0005466031 python3.9[87967]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  2 07:49:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e99 e99: 3 total, 3 up, 3 in
Oct  2 07:49:02 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 99 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=5 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=99 pruub=13.680448532s) [0] r=-1 lpr=99 pi=[65,99)/1 crt=44'1012 mlcod 0'0 active pruub 154.029510498s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:02 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 99 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=5 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=99 pruub=13.680362701s) [0] r=-1 lpr=99 pi=[65,99)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 154.029510498s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:49:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:02.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:49:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:03.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:03 np0005466031 python3.9[88117]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:49:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:03 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  2 07:49:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e100 e100: 3 total, 3 up, 3 in
Oct  2 07:49:03 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 100 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=5 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=100) [0]/[2] r=0 lpr=100 pi=[65,100)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:03 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 100 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=65/66 n=5 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=100) [0]/[2] r=0 lpr=100 pi=[65,100)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:04 np0005466031 python3.9[88272]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:49:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  2 07:49:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e101 e101: 3 total, 3 up, 3 in
Oct  2 07:49:04 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=101) [2] r=0 lpr=101 pi=[70,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:04 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 101 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=100/101 n=5 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[65,100)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:49:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:04.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:49:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:05.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:49:05 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct  2 07:49:05 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct  2 07:49:05 np0005466031 python3.9[88430]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:49:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  2 07:49:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e102 e102: 3 total, 3 up, 3 in
Oct  2 07:49:05 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[70,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:05 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[70,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:05 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 102 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=100/101 n=5 ec=49/38 lis/c=100/65 les/c/f=101/66/0 sis=102 pruub=14.833071709s) [0] async=[0] r=-1 lpr=102 pi=[65,102)/1 crt=44'1012 mlcod 44'1012 active pruub 158.335021973s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:05 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 102 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=100/101 n=5 ec=49/38 lis/c=100/65 les/c/f=101/66/0 sis=102 pruub=14.832939148s) [0] r=-1 lpr=102 pi=[65,102)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 158.335021973s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:06 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.5 deep-scrub starts
Oct  2 07:49:06 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.5 deep-scrub ok
Oct  2 07:49:06 np0005466031 python3.9[88515]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:49:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:06.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e103 e103: 3 total, 3 up, 3 in
Oct  2 07:49:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:07.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e104 e104: 3 total, 3 up, 3 in
Oct  2 07:49:08 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 104 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=102/70 les/c/f=103/71/0 sis=104) [2] r=0 lpr=104 pi=[70,104)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:08 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 104 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=102/70 les/c/f=103/71/0 sis=104) [2] r=0 lpr=104 pi=[70,104)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:08.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e105 e105: 3 total, 3 up, 3 in
Oct  2 07:49:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:09.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:09 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 105 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=104/105 n=5 ec=49/38 lis/c=102/70 les/c/f=103/71/0 sis=104) [2] r=0 lpr=104 pi=[70,104)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:49:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:10.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:11.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e106 e106: 3 total, 3 up, 3 in
Oct  2 07:49:12 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  2 07:49:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:12.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:13.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  2 07:49:13 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct  2 07:49:13 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct  2 07:49:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e107 e107: 3 total, 3 up, 3 in
Oct  2 07:49:14 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  2 07:49:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:14.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:15 np0005466031 podman[88811]: 2025-10-02 11:49:15.041269567 +0000 UTC m=+0.073636873 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:49:15 np0005466031 podman[88811]: 2025-10-02 11:49:15.142857951 +0000 UTC m=+0.175225257 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:49:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:15.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:15 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  2 07:49:15 np0005466031 podman[88947]: 2025-10-02 11:49:15.698977972 +0000 UTC m=+0.069758402 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:49:15 np0005466031 podman[88970]: 2025-10-02 11:49:15.763835293 +0000 UTC m=+0.049666266 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:49:15 np0005466031 podman[88947]: 2025-10-02 11:49:15.816057801 +0000 UTC m=+0.186838211 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:49:16 np0005466031 podman[89015]: 2025-10-02 11:49:16.242867163 +0000 UTC m=+0.148417408 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2023-02-22T09:23:20, distribution-scope=public, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, name=keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container)
Oct  2 07:49:16 np0005466031 podman[89034]: 2025-10-02 11:49:16.35296162 +0000 UTC m=+0.091305889 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vcs-type=git, io.openshift.tags=Ceph keepalived, architecture=x86_64, version=2.2.4, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct  2 07:49:16 np0005466031 podman[89015]: 2025-10-02 11:49:16.400684029 +0000 UTC m=+0.306234184 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, name=keepalived, vcs-type=git, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9)
Oct  2 07:49:16 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct  2 07:49:16 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct  2 07:49:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  2 07:49:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e108 e108: 3 total, 3 up, 3 in
Oct  2 07:49:16 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 108 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=5 ec=49/38 lis/c=78/78 les/c/f=79/79/0 sis=108 pruub=9.884797096s) [1] r=-1 lpr=108 pi=[78,108)/1 crt=44'1012 mlcod 0'0 active pruub 164.182571411s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:16 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 108 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=5 ec=49/38 lis/c=78/78 les/c/f=79/79/0 sis=108 pruub=9.884727478s) [1] r=-1 lpr=108 pi=[78,108)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 164.182571411s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:16.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:17.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  2 07:49:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:49:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:49:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e109 e109: 3 total, 3 up, 3 in
Oct  2 07:49:17 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 109 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=5 ec=49/38 lis/c=78/78 les/c/f=79/79/0 sis=109) [1]/[2] r=0 lpr=109 pi=[78,109)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:17 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 109 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=78/79 n=5 ec=49/38 lis/c=78/78 les/c/f=79/79/0 sis=109) [1]/[2] r=0 lpr=109 pi=[78,109)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:18 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Oct  2 07:49:18 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Oct  2 07:49:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  2 07:49:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e110 e110: 3 total, 3 up, 3 in
Oct  2 07:49:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:19.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:19 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 110 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=109/110 n=5 ec=49/38 lis/c=78/78 les/c/f=79/79/0 sis=109) [1]/[2] async=[1] r=0 lpr=109 pi=[78,109)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:49:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:19.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e111 e111: 3 total, 3 up, 3 in
Oct  2 07:49:19 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 111 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=109/110 n=5 ec=49/38 lis/c=109/78 les/c/f=110/79/0 sis=111 pruub=15.704358101s) [1] async=[1] r=-1 lpr=111 pi=[78,111)/1 crt=44'1012 mlcod 44'1012 active pruub 172.683242798s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:19 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 111 pg[9.19( v 44'1012 (0'0,44'1012] local-lis/les=109/110 n=5 ec=49/38 lis/c=109/78 les/c/f=110/79/0 sis=111 pruub=15.704183578s) [1] r=-1 lpr=111 pi=[78,111)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 172.683242798s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:19 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct  2 07:49:19 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct  2 07:49:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  2 07:49:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e112 e112: 3 total, 3 up, 3 in
Oct  2 07:49:20 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 112 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=5 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=112 pruub=9.964494705s) [1] r=-1 lpr=112 pi=[59,112)/1 crt=44'1012 mlcod 0'0 active pruub 167.963363647s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:20 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 112 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=5 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=112 pruub=9.963596344s) [1] r=-1 lpr=112 pi=[59,112)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 167.963363647s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:21.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  2 07:49:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  2 07:49:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:21.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e113 e113: 3 total, 3 up, 3 in
Oct  2 07:49:21 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 113 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=5 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=113) [1]/[2] r=0 lpr=113 pi=[59,113)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:21 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 113 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=59/60 n=5 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=113) [1]/[2] r=0 lpr=113 pi=[59,113)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e114 e114: 3 total, 3 up, 3 in
Oct  2 07:49:22 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 114 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=113/114 n=5 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=113) [1]/[2] async=[1] r=0 lpr=113 pi=[59,113)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:49:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:49:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:23.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:49:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:23.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e115 e115: 3 total, 3 up, 3 in
Oct  2 07:49:24 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 115 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=113/114 n=5 ec=49/38 lis/c=113/59 les/c/f=114/60/0 sis=115 pruub=14.577094078s) [1] async=[1] r=-1 lpr=115 pi=[59,115)/1 crt=44'1012 mlcod 44'1012 active pruub 176.394226074s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:24 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 115 pg[9.1b( v 44'1012 (0'0,44'1012] local-lis/les=113/114 n=5 ec=49/38 lis/c=113/59 les/c/f=114/60/0 sis=115 pruub=14.577008247s) [1] r=-1 lpr=115 pi=[59,115)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 176.394226074s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:24 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct  2 07:49:24 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct  2 07:49:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:25.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:25.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e116 e116: 3 total, 3 up, 3 in
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:49:25.333914) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765333999, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7229, "num_deletes": 256, "total_data_size": 13643892, "memory_usage": 13839904, "flush_reason": "Manual Compaction"}
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765385930, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 7980881, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 237, "largest_seqno": 7234, "table_properties": {"data_size": 7953180, "index_size": 18114, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 79936, "raw_average_key_size": 23, "raw_value_size": 7887090, "raw_average_value_size": 2319, "num_data_blocks": 802, "num_entries": 3400, "num_filter_entries": 3400, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 1759405568, "file_creation_time": 1759405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 52090 microseconds, and 16050 cpu microseconds.
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:49:25.386006) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 7980881 bytes OK
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:49:25.386030) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:49:25.388920) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:49:25.388950) EVENT_LOG_v1 {"time_micros": 1759405765388942, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:49:25.388971) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13606473, prev total WAL file size 13606473, number of live WAL files 2.
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:49:25.391662) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(7793KB) 8(1648B)]
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765391776, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 7982529, "oldest_snapshot_seqno": -1}
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3147 keys, 7977100 bytes, temperature: kUnknown
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765442189, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 7977100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7950089, "index_size": 18069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 75720, "raw_average_key_size": 24, "raw_value_size": 7887148, "raw_average_value_size": 2506, "num_data_blocks": 802, "num_entries": 3147, "num_filter_entries": 3147, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:49:25.442503) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 7977100 bytes
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:49:25.444361) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.0 rd, 157.9 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(7.6, 0.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3405, records dropped: 258 output_compression: NoCompression
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:49:25.444384) EVENT_LOG_v1 {"time_micros": 1759405765444372, "job": 4, "event": "compaction_finished", "compaction_time_micros": 50508, "compaction_time_cpu_micros": 20602, "output_level": 6, "num_output_files": 1, "total_output_size": 7977100, "num_input_records": 3405, "num_output_records": 3147, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765446442, "job": 4, "event": "table_file_deletion", "file_number": 14}
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765446515, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct  2 07:49:25 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:49:25.391561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:49:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  2 07:49:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e117 e117: 3 total, 3 up, 3 in
Oct  2 07:49:26 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct  2 07:49:26 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct  2 07:49:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:49:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:27.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:49:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:27.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  2 07:49:27 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct  2 07:49:27 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct  2 07:49:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e118 e118: 3 total, 3 up, 3 in
Oct  2 07:49:28 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 118 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=86/86 les/c/f=87/87/0 sis=118) [2] r=0 lpr=118 pi=[86,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  2 07:49:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:29.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:29.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e119 e119: 3 total, 3 up, 3 in
Oct  2 07:49:29 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 119 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=86/86 les/c/f=87/87/0 sis=119) [2]/[0] r=-1 lpr=119 pi=[86,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:29 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 119 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=86/86 les/c/f=87/87/0 sis=119) [2]/[0] r=-1 lpr=119 pi=[86,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  2 07:49:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e120 e120: 3 total, 3 up, 3 in
Oct  2 07:49:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:31.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:31.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e121 e121: 3 total, 3 up, 3 in
Oct  2 07:49:31 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 121 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=119/86 les/c/f=120/87/0 sis=121) [2] r=0 lpr=121 pi=[86,121)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:31 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 121 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=119/86 les/c/f=120/87/0 sis=121) [2] r=0 lpr=121 pi=[86,121)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e122 e122: 3 total, 3 up, 3 in
Oct  2 07:49:32 np0005466031 ceph-osd[79023]: osd.2 pg_epoch: 122 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=121/122 n=5 ec=49/38 lis/c=119/86 les/c/f=120/87/0 sis=121) [2] r=0 lpr=121 pi=[86,121)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:49:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:33.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  2 07:49:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  2 07:49:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:49:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:33.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:49:33 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct  2 07:49:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:33 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct  2 07:49:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e123 e123: 3 total, 3 up, 3 in
Oct  2 07:49:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:35.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e124 e124: 3 total, 3 up, 3 in
Oct  2 07:49:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:49:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:49:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:35.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:49:35 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct  2 07:49:35 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct  2 07:49:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:49:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e125 e125: 3 total, 3 up, 3 in
Oct  2 07:49:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:37.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:49:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:37.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:49:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e126 e126: 3 total, 3 up, 3 in
Oct  2 07:49:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e127 e127: 3 total, 3 up, 3 in
Oct  2 07:49:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:39.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:49:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:39.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:49:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 e128: 3 total, 3 up, 3 in
Oct  2 07:49:39 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct  2 07:49:39 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct  2 07:49:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:41.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:49:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:41.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:49:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:49:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:43.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:49:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:49:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:43.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:49:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:43 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct  2 07:49:43 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct  2 07:49:44 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.3 deep-scrub starts
Oct  2 07:49:44 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.3 deep-scrub ok
Oct  2 07:49:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:45.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:45.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:45 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.e deep-scrub starts
Oct  2 07:49:45 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.e deep-scrub ok
Oct  2 07:49:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:47.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:47.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:49.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:49:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:49.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:49:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:51.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:51.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:51 np0005466031 python3.9[89541]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:49:52 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.11 deep-scrub starts
Oct  2 07:49:52 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.11 deep-scrub ok
Oct  2 07:49:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:53.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:53.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:53 np0005466031 python3.9[89828]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  2 07:49:54 np0005466031 python3.9[90031]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  2 07:49:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:49:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:55.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:49:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:49:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:55.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:49:55 np0005466031 python3.9[90183]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:56 np0005466031 python3.9[90336]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  2 07:49:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:49:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:57.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:49:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:57.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:57 np0005466031 python3.9[90488]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:49:57 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct  2 07:49:57 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct  2 07:49:58 np0005466031 python3.9[90641]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:58 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct  2 07:49:58 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct  2 07:49:58 np0005466031 python3.9[90719]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:59.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:49:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:59.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:59 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct  2 07:49:59 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct  2 07:50:00 np0005466031 python3.9[90872]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  2 07:50:00 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 07:50:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:01.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:01.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:01 np0005466031 python3.9[91025]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  2 07:50:01 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct  2 07:50:01 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct  2 07:50:02 np0005466031 python3.9[91179]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:50:02 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct  2 07:50:02 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct  2 07:50:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:50:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:03.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:50:03 np0005466031 python3.9[91331]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  2 07:50:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:03 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct  2 07:50:03 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct  2 07:50:04 np0005466031 python3.9[91484]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:50:04 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct  2 07:50:04 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct  2 07:50:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:05.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:05.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:06 np0005466031 python3.9[91638]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:06 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct  2 07:50:06 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct  2 07:50:07 np0005466031 python3.9[91790]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:50:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:07.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:50:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:07.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:07 np0005466031 python3.9[91868]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:08 np0005466031 python3.9[92021]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:08 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct  2 07:50:08 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct  2 07:50:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:08 np0005466031 python3.9[92099]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:50:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:09.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:50:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:09.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:09 np0005466031 python3.9[92251]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:50:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:11.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:11.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:12 np0005466031 python3.9[92404]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:12 np0005466031 python3.9[92556]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  2 07:50:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:13.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:13.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:13 np0005466031 python3.9[92706]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:13 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct  2 07:50:13 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct  2 07:50:15 np0005466031 python3.9[92909]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:50:15 np0005466031 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  2 07:50:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:15.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:15 np0005466031 systemd[1]: tuned.service: Deactivated successfully.
Oct  2 07:50:15 np0005466031 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  2 07:50:15 np0005466031 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  2 07:50:15 np0005466031 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  2 07:50:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:50:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:15.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:50:16 np0005466031 python3.9[93071]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  2 07:50:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:50:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:17.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:50:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:50:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:17.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:50:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:19.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:19 np0005466031 python3.9[93224]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:50:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:19.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:20 np0005466031 python3.9[93379]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:50:20 np0005466031 systemd[1]: session-35.scope: Deactivated successfully.
Oct  2 07:50:20 np0005466031 systemd[1]: session-35.scope: Consumed 1min 4.295s CPU time.
Oct  2 07:50:20 np0005466031 systemd-logind[786]: Session 35 logged out. Waiting for processes to exit.
Oct  2 07:50:20 np0005466031 systemd-logind[786]: Removed session 35.
Oct  2 07:50:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:21.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:50:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:21.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:50:22 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct  2 07:50:22 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct  2 07:50:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:50:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:23.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:50:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:23.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:25.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:25.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:25 np0005466031 systemd-logind[786]: New session 36 of user zuul.
Oct  2 07:50:25 np0005466031 systemd[1]: Started Session 36 of User zuul.
Oct  2 07:50:25 np0005466031 podman[93577]: 2025-10-02 11:50:25.938766161 +0000 UTC m=+0.497643581 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:50:26 np0005466031 podman[93577]: 2025-10-02 11:50:26.031987888 +0000 UTC m=+0.590865288 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 07:50:26 np0005466031 python3.9[93795]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:50:26 np0005466031 podman[93872]: 2025-10-02 11:50:26.673158274 +0000 UTC m=+0.054057589 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:50:26 np0005466031 podman[93872]: 2025-10-02 11:50:26.686065672 +0000 UTC m=+0.066964927 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:50:26 np0005466031 podman[93944]: 2025-10-02 11:50:26.886292441 +0000 UTC m=+0.051284586 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-type=git, version=2.2.4, architecture=x86_64, distribution-scope=public)
Oct  2 07:50:26 np0005466031 podman[93944]: 2025-10-02 11:50:26.899023925 +0000 UTC m=+0.064016040 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, name=keepalived, com.redhat.component=keepalived-container, version=2.2.4)
Oct  2 07:50:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:50:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:27.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:50:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:27.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:27 np0005466031 python3.9[94243]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  2 07:50:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:50:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:50:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:50:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:50:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:50:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:28 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct  2 07:50:28 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct  2 07:50:28 np0005466031 python3.9[94427]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:50:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:50:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:29.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:50:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:29.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:29 np0005466031 python3.9[94511]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 07:50:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:31.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:50:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:31.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:50:31 np0005466031 python3.9[94665]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:50:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:33.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:33.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:34 np0005466031 python3.9[94870]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:50:34 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Oct  2 07:50:34 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Oct  2 07:50:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:50:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:50:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:35.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:35 np0005466031 python3.9[95073]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:50:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:35.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:36 np0005466031 python3.9[95226]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  2 07:50:36 np0005466031 systemd[71759]: Created slice User Background Tasks Slice.
Oct  2 07:50:36 np0005466031 systemd[71759]: Starting Cleanup of User's Temporary Files and Directories...
Oct  2 07:50:36 np0005466031 systemd[71759]: Finished Cleanup of User's Temporary Files and Directories.
Oct  2 07:50:36 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.11 deep-scrub starts
Oct  2 07:50:36 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.11 deep-scrub ok
Oct  2 07:50:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:37.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:37.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:37 np0005466031 python3.9[95377]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:50:38 np0005466031 python3.9[95536]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:50:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:39.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:39.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:40 np0005466031 python3.9[95690]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:50:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:41.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:41.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:41 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct  2 07:50:41 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct  2 07:50:42 np0005466031 python3.9[95978]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 07:50:42 np0005466031 python3.9[96128]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:43.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:43.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:43 np0005466031 python3.9[96282]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:50:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:43 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.6 deep-scrub starts
Oct  2 07:50:43 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.6 deep-scrub ok
Oct  2 07:50:44 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct  2 07:50:44 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct  2 07:50:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:45.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:45.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:45 np0005466031 python3.9[96436]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:50:46 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct  2 07:50:46 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct  2 07:50:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:50:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:47.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:50:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:50:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:47.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:50:47 np0005466031 python3.9[96591]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:48 np0005466031 python3.9[96745]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct  2 07:50:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:49.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:49.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:49 np0005466031 systemd[1]: session-36.scope: Deactivated successfully.
Oct  2 07:50:49 np0005466031 systemd[1]: session-36.scope: Consumed 17.172s CPU time.
Oct  2 07:50:49 np0005466031 systemd-logind[786]: Session 36 logged out. Waiting for processes to exit.
Oct  2 07:50:49 np0005466031 systemd-logind[786]: Removed session 36.
Oct  2 07:50:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:51.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:51.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:52 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Oct  2 07:50:52 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Oct  2 07:50:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:50:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:53.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:50:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:53.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:54 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct  2 07:50:54 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct  2 07:50:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:50:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:55.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:50:55 np0005466031 systemd-logind[786]: New session 37 of user zuul.
Oct  2 07:50:55 np0005466031 systemd[1]: Started Session 37 of User zuul.
Oct  2 07:50:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:50:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:55.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:50:56 np0005466031 python3.9[96977]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:50:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:57.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:57 np0005466031 python3.9[97131]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:50:57 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct  2 07:50:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:57.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:57 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct  2 07:50:58 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct  2 07:50:58 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct  2 07:50:58 np0005466031 python3.9[97325]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:50:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:58 np0005466031 systemd[1]: session-37.scope: Deactivated successfully.
Oct  2 07:50:58 np0005466031 systemd[1]: session-37.scope: Consumed 2.362s CPU time.
Oct  2 07:50:58 np0005466031 systemd-logind[786]: Session 37 logged out. Waiting for processes to exit.
Oct  2 07:50:58 np0005466031 systemd-logind[786]: Removed session 37.
Oct  2 07:50:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:59.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:50:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:50:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:59.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:01.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:01.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:03.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:03.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:04 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct  2 07:51:04 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct  2 07:51:04 np0005466031 systemd-logind[786]: New session 38 of user zuul.
Oct  2 07:51:04 np0005466031 systemd[1]: Started Session 38 of User zuul.
Oct  2 07:51:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:05.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:05 np0005466031 python3.9[97508]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:51:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:05.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:06 np0005466031 python3.9[97663]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:51:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:07.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:07 np0005466031 python3.9[97819]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:51:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:07.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:08 np0005466031 python3.9[97904]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:51:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:09.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:09.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:10 np0005466031 python3.9[98058]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:51:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:11.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:11.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:11 np0005466031 python3.9[98253]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:12 np0005466031 python3.9[98406]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:51:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:13.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:13.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:13 np0005466031 python3.9[98571]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:13 np0005466031 python3.9[98650]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:14 np0005466031 python3.9[98802]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:15 np0005466031 python3.9[98930]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:51:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:15.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:15.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:15 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct  2 07:51:15 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct  2 07:51:16 np0005466031 python3.9[99083]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:51:16 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct  2 07:51:16 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct  2 07:51:16 np0005466031 python3.9[99235]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:51:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:17.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:17 np0005466031 python3.9[99387]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:51:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:17.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:18 np0005466031 python3.9[99540]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:51:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:18 np0005466031 python3.9[99692]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:51:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:19.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:19.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:20 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct  2 07:51:20 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct  2 07:51:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:21.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:21.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:21 np0005466031 python3.9[99846]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:51:22 np0005466031 python3.9[100001]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:51:22 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct  2 07:51:22 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct  2 07:51:22 np0005466031 python3.9[100153]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:51:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:23.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:23.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:23 np0005466031 python3.9[100306]: ansible-service_facts Invoked
Oct  2 07:51:23 np0005466031 network[100323]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:51:24 np0005466031 network[100324]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:51:24 np0005466031 network[100325]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:51:24 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct  2 07:51:24 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct  2 07:51:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:25.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:51:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:25.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:51:25 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct  2 07:51:25 np0005466031 ceph-osd[79023]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct  2 07:51:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:27.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:27.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:28 np0005466031 python3.9[100782]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:51:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:29.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:29.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:31.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:31.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:31 np0005466031 python3.9[100936]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  2 07:51:33 np0005466031 python3.9[101089]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:33.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:33.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:33 np0005466031 python3.9[101167]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:34 np0005466031 python3.9[101395]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:34 np0005466031 podman[101570]: 2025-10-02 11:51:34.761398432 +0000 UTC m=+0.054884214 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 07:51:34 np0005466031 podman[101570]: 2025-10-02 11:51:34.86190633 +0000 UTC m=+0.155392082 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 07:51:34 np0005466031 python3.9[101568]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:35.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:35.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:35 np0005466031 podman[101779]: 2025-10-02 11:51:35.448536568 +0000 UTC m=+0.086665530 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:51:35 np0005466031 podman[101801]: 2025-10-02 11:51:35.522782579 +0000 UTC m=+0.051959089 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:51:35 np0005466031 podman[101779]: 2025-10-02 11:51:35.529666798 +0000 UTC m=+0.167795750 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:51:35 np0005466031 podman[101847]: 2025-10-02 11:51:35.786528705 +0000 UTC m=+0.055512412 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., distribution-scope=public, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vcs-type=git, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct  2 07:51:35 np0005466031 podman[101847]: 2025-10-02 11:51:35.798865951 +0000 UTC m=+0.067849608 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, io.openshift.expose-services=, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, name=keepalived, release=1793, distribution-scope=public, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  2 07:51:36 np0005466031 python3.9[102100]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:51:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:51:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:51:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:51:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:51:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:37.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:37.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:37 np0005466031 python3.9[102307]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:51:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:39 np0005466031 python3.9[102391]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:51:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:39.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:39.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:40 np0005466031 systemd[1]: session-38.scope: Deactivated successfully.
Oct  2 07:51:40 np0005466031 systemd[1]: session-38.scope: Consumed 23.472s CPU time.
Oct  2 07:51:40 np0005466031 systemd-logind[786]: Session 38 logged out. Waiting for processes to exit.
Oct  2 07:51:40 np0005466031 systemd-logind[786]: Removed session 38.
Oct  2 07:51:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:41.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:41.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:43.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:43.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:51:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:51:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:45.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:45 np0005466031 systemd-logind[786]: New session 39 of user zuul.
Oct  2 07:51:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:45.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:45 np0005466031 systemd[1]: Started Session 39 of User zuul.
Oct  2 07:51:46 np0005466031 python3.9[102628]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:47 np0005466031 python3.9[102780]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:47.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:47.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:47 np0005466031 python3.9[102858]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:48 np0005466031 systemd[1]: session-39.scope: Deactivated successfully.
Oct  2 07:51:48 np0005466031 systemd[1]: session-39.scope: Consumed 1.510s CPU time.
Oct  2 07:51:48 np0005466031 systemd-logind[786]: Session 39 logged out. Waiting for processes to exit.
Oct  2 07:51:48 np0005466031 systemd-logind[786]: Removed session 39.
Oct  2 07:51:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:49.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:49.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:51.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:51.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:53.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:53 np0005466031 systemd-logind[786]: New session 40 of user zuul.
Oct  2 07:51:53 np0005466031 systemd[1]: Started Session 40 of User zuul.
Oct  2 07:51:54 np0005466031 python3.9[103041]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:51:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:51:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:55.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:51:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:55.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:55 np0005466031 python3.9[103248]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:56 np0005466031 python3.9[103423]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:57 np0005466031 python3.9[103501]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.m3v084l1 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:57.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:51:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:57.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:51:58 np0005466031 python3.9[103654]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:59 np0005466031 python3.9[103732]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.mnh9hwwl recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:59.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:51:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:59.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:59 np0005466031 python3.9[103884]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:00 np0005466031 python3.9[104037]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:01 np0005466031 python3.9[104115]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:01.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:01.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:01 np0005466031 python3.9[104267]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:02 np0005466031 python3.9[104346]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:02 np0005466031 python3.9[104498]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:03.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:03.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:03 np0005466031 python3.9[104650]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:04 np0005466031 python3.9[104729]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:04 np0005466031 python3.9[104881]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:05 np0005466031 python3.9[104959]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:05.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:05.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:06 np0005466031 python3.9[105112]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:52:06 np0005466031 systemd[1]: Reloading.
Oct  2 07:52:06 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:52:06 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:52:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:07.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:07 np0005466031 python3.9[105302]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:07.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:07 np0005466031 python3.9[105381]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:08 np0005466031 python3.9[105533]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:09 np0005466031 python3.9[105611]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:09.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:09.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:09 np0005466031 python3.9[105763]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:52:09 np0005466031 systemd[1]: Reloading.
Oct  2 07:52:10 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:52:10 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:52:10 np0005466031 systemd[1]: Starting Create netns directory...
Oct  2 07:52:10 np0005466031 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:52:10 np0005466031 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:52:10 np0005466031 systemd[1]: Finished Create netns directory.
Oct  2 07:52:11 np0005466031 python3.9[105956]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:52:11 np0005466031 network[105973]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:52:11 np0005466031 network[105974]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:52:11 np0005466031 network[105975]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:52:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:11.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:11.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:13.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:52:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:13.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:52:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:15.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:15.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:17.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:17.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:18 np0005466031 python3.9[106294]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:18 np0005466031 python3.9[106372]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:19.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:19.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:19 np0005466031 python3.9[106524]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:20 np0005466031 python3.9[106677]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:20 np0005466031 python3.9[106755]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:21.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:21.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:21 np0005466031 python3.9[106907]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  2 07:52:21 np0005466031 systemd[1]: Starting Time & Date Service...
Oct  2 07:52:21 np0005466031 systemd[1]: Started Time & Date Service.
Oct  2 07:52:22 np0005466031 python3.9[107064]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:23.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:23.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:23 np0005466031 python3.9[107216]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.095155) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944095428, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2597, "num_deletes": 251, "total_data_size": 5245926, "memory_usage": 5317784, "flush_reason": "Manual Compaction"}
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944111777, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3433301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7239, "largest_seqno": 9831, "table_properties": {"data_size": 3423416, "index_size": 5803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25468, "raw_average_key_size": 21, "raw_value_size": 3401260, "raw_average_value_size": 2853, "num_data_blocks": 259, "num_entries": 1192, "num_filter_entries": 1192, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405766, "oldest_key_time": 1759405766, "file_creation_time": 1759405944, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 16688 microseconds, and 8025 cpu microseconds.
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.111868) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3433301 bytes OK
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.111922) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.113677) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.113695) EVENT_LOG_v1 {"time_micros": 1759405944113688, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.113716) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5233988, prev total WAL file size 5233988, number of live WAL files 2.
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.114855) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3352KB)], [15(7790KB)]
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944114920, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11410401, "oldest_snapshot_seqno": -1}
Oct  2 07:52:24 np0005466031 python3.9[107295]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3818 keys, 9730284 bytes, temperature: kUnknown
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944173605, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 9730284, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9698966, "index_size": 20648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 92012, "raw_average_key_size": 24, "raw_value_size": 9624304, "raw_average_value_size": 2520, "num_data_blocks": 903, "num_entries": 3818, "num_filter_entries": 3818, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759405944, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.173842) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9730284 bytes
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.175315) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.2 rd, 165.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.6 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 4339, records dropped: 521 output_compression: NoCompression
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.175333) EVENT_LOG_v1 {"time_micros": 1759405944175324, "job": 6, "event": "compaction_finished", "compaction_time_micros": 58752, "compaction_time_cpu_micros": 19778, "output_level": 6, "num_output_files": 1, "total_output_size": 9730284, "num_input_records": 4339, "num_output_records": 3818, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944175950, "job": 6, "event": "table_file_deletion", "file_number": 17}
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944177116, "job": 6, "event": "table_file_deletion", "file_number": 15}
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.114754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.177168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.177175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.177176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.177178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:52:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:52:24.177180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:52:24 np0005466031 python3.9[107447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:25.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:25 np0005466031 python3.9[107525]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.vu01irnx recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:25.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:26 np0005466031 python3.9[107678]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:26 np0005466031 python3.9[107756]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:27.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:27 np0005466031 python3.9[107908]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:27.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:28 np0005466031 python3[108062]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 07:52:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:29.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:29 np0005466031 python3.9[108214]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:29.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:29 np0005466031 python3.9[108292]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:30 np0005466031 python3.9[108446]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:31.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:31 np0005466031 python3.9[108524]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:31.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:32 np0005466031 python3.9[108677]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:32 np0005466031 python3.9[108755]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:33.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:33 np0005466031 python3.9[108907]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:33.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:33 np0005466031 python3.9[108985]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:34 np0005466031 python3.9[109138]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:35 np0005466031 python3.9[109216]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:35.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:35.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:36 np0005466031 python3.9[109419]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:36 np0005466031 python3.9[109574]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:37.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:37.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:37 np0005466031 python3.9[109726]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:38 np0005466031 python3.9[109879]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:39.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:39 np0005466031 python3.9[110031]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  2 07:52:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:39.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:40 np0005466031 python3.9[110184]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  2 07:52:40 np0005466031 systemd[1]: session-40.scope: Deactivated successfully.
Oct  2 07:52:40 np0005466031 systemd[1]: session-40.scope: Consumed 30.489s CPU time.
Oct  2 07:52:40 np0005466031 systemd-logind[786]: Session 40 logged out. Waiting for processes to exit.
Oct  2 07:52:40 np0005466031 systemd-logind[786]: Removed session 40.
Oct  2 07:52:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:41.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:41.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:43.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:43.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:44 np0005466031 podman[110379]: 2025-10-02 11:52:44.716249953 +0000 UTC m=+0.063101810 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  2 07:52:44 np0005466031 podman[110379]: 2025-10-02 11:52:44.828088486 +0000 UTC m=+0.174940343 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:52:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:45.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:45 np0005466031 podman[110512]: 2025-10-02 11:52:45.419352681 +0000 UTC m=+0.071351022 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:52:45 np0005466031 podman[110512]: 2025-10-02 11:52:45.430075574 +0000 UTC m=+0.082073915 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 07:52:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:45.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:45 np0005466031 podman[110577]: 2025-10-02 11:52:45.681302445 +0000 UTC m=+0.066804344 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2023-02-22T09:23:20, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, release=1793, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.buildah.version=1.28.2, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9)
Oct  2 07:52:45 np0005466031 podman[110577]: 2025-10-02 11:52:45.718414571 +0000 UTC m=+0.103916490 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph)
Oct  2 07:52:45 np0005466031 systemd-logind[786]: New session 41 of user zuul.
Oct  2 07:52:45 np0005466031 systemd[1]: Started Session 41 of User zuul.
Oct  2 07:52:46 np0005466031 python3.9[110916]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  2 07:52:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:52:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:52:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:52:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:52:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:52:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:47.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:47.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:47 np0005466031 python3.9[111068]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:52:48 np0005466031 python3.9[111223]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct  2 07:52:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:49.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:49 np0005466031 python3.9[111375]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.f15bgvwn follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:49.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:49 np0005466031 python3.9[111501]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.f15bgvwn mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405968.827887-109-163824483506098/.source.f15bgvwn _original_basename=.vfh68pys follow=False checksum=d30a0a751a32c4c6fb89a77f8bd3d66e091396ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:51 np0005466031 python3.9[111653]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:52:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:51.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:52:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:51.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:52:52 np0005466031 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 07:52:52 np0005466031 python3.9[111806]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfikJfuUE7Xs2lF9Qh9l0WUdl+Tct7ff0gJQZVpPwLHlAwFnY1lIlqF2IQ3J7LtFcsjYF5RcofKcj+ARkMTobXFoygI/H3Yl5EGDehZbaNONLkDXT20bcYtosTZBjJTZWMJaDGUobRPnKWEbt7P8G/CVwj+LKBYxYcl65Bs0m8Ii2JZObV/41E/44oNBbTT6VnLqrH1BjRfNgToFyoYZToIU6gJw+lDGgt/afrHnDeR8fo6fgHkoHZKHxctrFraqhPOEX+SW/RD5ra4/WxZTBDAcOelVyZhpZ0V6HTQuS0IuD/sy9RD9W59TrF0oFH8kP6H1F3EbhrMfM/wkGJqxcBEMPIlGjUgoOCOY4tgCsAuyKcqelTUJIoL5uTuk06fd+1+B0t8j//vY7eWDCGwHAYrOCbL954GsjqhEOd/SL8vW6cT4Eh+DaWzKpvnl+bEN+G7wkI9etJ4B8NugtDyE25Ikfn9nsBLIcPcuepnlcBQkTN4sC+w0I1AEm3Uo8MFOM=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPxo/cGygmGP55Hjd3RI5yFpLqrtrtdd2PGw/FbMnxJJ#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLbUwjRfNWPOWmPM9kXykw3bNz7sYSt7DYbalJhzh+E3yGMACUO+HxFuSQ4lHBBXquZltdOcmR202cRP+4s05oI=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2+zJSXp4XBwGccVvswqz0/27MxV0mWhHJ9EKngmPOQ2Et2f+QArNFJsEaUEJankaYSrISVt8m0QscyZhZUgrxp07g0OV9pVQ2pkqF/CSC7RnN96odOHOeQjRmSOj9vF8Q3EeyRZ7MS1CWH6TT+jYOD77TFol6cQhi7o5bzgAdL6yB/ili/PG3bBxtbYtNwSqCSpiGaN8z8j/REszkW2GM6wvDGXk9NgNfBZT4goP4O3qz/wVeMM/OQFGQa/34tMNX3QEE/XOdAUIRXXLw0vmVj7oRDzGVMc12TDalGOqphS+LkUS4PB+ns/IaplTUzc8zlwhycQQPxnzEcm+z3QP8Bo+iBGw+aKpc5UTMMtZocXrjHCv0Q6irXug6N6b7aaANiHMmveZua/Gjp6Ef//Q/+thKtkvcvvhUDZknHLDrHGT5QbVQYjN23MyFdWCu6MgpBw8NNyeI5sO605lOrxk2oXwX19ah7Qt7iAU7KRijLzQBjnMjNb6bcSOCFXVzpl0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxmfzZIbNhcux/tJpdvzaDW/iX/PRMqNcEGpeyKOTEV#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBANBfiBul8lZFa5T9kjEYk719DZo4CtW2bTDn+SPcbu/2U71Ms3Qc1tvqiM9B/ciT9t/uzxk25klpGuFqieJFkk=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDb8D90laelhslbtmfz72Mp6Q7iCMu+KiPRuBFH59nBtb1LmjrIFjvU1qZnJ+wipHW+bRcdDzNWNM8KJ4IImBqFxbrg17RhHeunE84nnR8leX3OYiMZumpygvXYCykppXcKbe6pfxYUtyTc8Tz3bNoayi7uGoKgN/iaUeADLuyJUDDVyusj2q7uIj7gZ6PbtorR5cUUn0wBZTo3Jx84NmdiJr/xDGrtfawsV6ATz+Rpx3vzz4EE4dq4wN3eTUJiPCpc4jbTvHpp0GdJTK1BkZ4IANgw3a+loOO2MHq2JgMRjKJrH7sqrw7s9XgzHSh/ufOmEKAtgw75tWExEcy/05QGGbR2jnIKde4vVIS5JheT1z4gYASjKEEidjisDxig5nigPddxe3nSxKRQczKXPV+KUOB14AljRbnyqgbw4Dv9wtnkFL/QLMXFA0/NaOAZxhI+fOoAcg+No2ZsB95IgQ49ay/LN011x9o1vfwVPfReOtkjpVxQB8oCXhA53BfrG3M=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAtzqd+HKKUdtdjsFK/O61rbaIfH2/ANnbsFBvd1WLXA#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOyw0g2rIQxTWmEkqBGUUvYwuDopCg/ppyBGUh5LatbQKlwO7AkEzPUhEeFZv2/qzobLbOH4kVCTAQVjiQm//WM=#012 create=True mode=0644 path=/tmp/ansible.f15bgvwn state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:53 np0005466031 python3.9[111960]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.f15bgvwn' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:53.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:53.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:54 np0005466031 python3.9[112115]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.f15bgvwn state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:54 np0005466031 systemd[1]: session-41.scope: Deactivated successfully.
Oct  2 07:52:54 np0005466031 systemd[1]: session-41.scope: Consumed 5.182s CPU time.
Oct  2 07:52:54 np0005466031 systemd-logind[786]: Session 41 logged out. Waiting for processes to exit.
Oct  2 07:52:54 np0005466031 systemd-logind[786]: Removed session 41.
Oct  2 07:52:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:55.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:55.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:52:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:52:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:57.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:52:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:57.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:52:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:59.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:52:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:59.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:00 np0005466031 systemd-logind[786]: New session 42 of user zuul.
Oct  2 07:53:00 np0005466031 systemd[1]: Started Session 42 of User zuul.
Oct  2 07:53:01 np0005466031 python3.9[112396]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:53:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:01.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:01.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:02 np0005466031 python3.9[112553]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  2 07:53:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:53:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:03.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:53:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:03.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:03 np0005466031 python3.9[112707]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:53:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:04 np0005466031 python3.9[112861]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:53:05 np0005466031 python3.9[113015]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:53:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:05.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:05.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:06 np0005466031 python3.9[113168]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:06 np0005466031 systemd[1]: session-42.scope: Deactivated successfully.
Oct  2 07:53:06 np0005466031 systemd[1]: session-42.scope: Consumed 3.773s CPU time.
Oct  2 07:53:06 np0005466031 systemd-logind[786]: Session 42 logged out. Waiting for processes to exit.
Oct  2 07:53:06 np0005466031 systemd-logind[786]: Removed session 42.
Oct  2 07:53:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:07.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:07.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:09.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:09.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:11.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:53:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:11.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:53:12 np0005466031 systemd-logind[786]: New session 43 of user zuul.
Oct  2 07:53:12 np0005466031 systemd[1]: Started Session 43 of User zuul.
Oct  2 07:53:13 np0005466031 python3.9[113349]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:53:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:13.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:13.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:14 np0005466031 python3.9[113506]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:53:14 np0005466031 python3.9[113590]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 07:53:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:15.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:15.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:17 np0005466031 python3.9[113792]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:53:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:17.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:17.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:18 np0005466031 python3.9[113944]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 07:53:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:19 np0005466031 python3.9[114094]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:53:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:19.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:19.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:19 np0005466031 python3.9[114245]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:53:20 np0005466031 systemd[1]: session-43.scope: Deactivated successfully.
Oct  2 07:53:20 np0005466031 systemd[1]: session-43.scope: Consumed 5.785s CPU time.
Oct  2 07:53:20 np0005466031 systemd-logind[786]: Session 43 logged out. Waiting for processes to exit.
Oct  2 07:53:20 np0005466031 systemd-logind[786]: Removed session 43.
Oct  2 07:53:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:21.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:21.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:23.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:23.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.511689) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004511769, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 812, "num_deletes": 250, "total_data_size": 1617118, "memory_usage": 1640568, "flush_reason": "Manual Compaction"}
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004517484, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 701185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9836, "largest_seqno": 10643, "table_properties": {"data_size": 697888, "index_size": 1141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8288, "raw_average_key_size": 19, "raw_value_size": 691034, "raw_average_value_size": 1653, "num_data_blocks": 50, "num_entries": 418, "num_filter_entries": 418, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405944, "oldest_key_time": 1759405944, "file_creation_time": 1759406004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5818 microseconds, and 2631 cpu microseconds.
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.517523) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 701185 bytes OK
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.517560) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518742) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518755) EVENT_LOG_v1 {"time_micros": 1759406004518751, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518774) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1612924, prev total WAL file size 1612924, number of live WAL files 2.
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.519401) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(684KB)], [18(9502KB)]
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004519485, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10431469, "oldest_snapshot_seqno": -1}
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3745 keys, 7707492 bytes, temperature: kUnknown
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004563244, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 7707492, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7679702, "index_size": 17312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 90936, "raw_average_key_size": 24, "raw_value_size": 7609259, "raw_average_value_size": 2031, "num_data_blocks": 757, "num_entries": 3745, "num_filter_entries": 3745, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759406004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.563571) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7707492 bytes
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.564985) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 237.9 rd, 175.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.3 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(25.9) write-amplify(11.0) OK, records in: 4236, records dropped: 491 output_compression: NoCompression
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.565008) EVENT_LOG_v1 {"time_micros": 1759406004564996, "job": 8, "event": "compaction_finished", "compaction_time_micros": 43844, "compaction_time_cpu_micros": 25588, "output_level": 6, "num_output_files": 1, "total_output_size": 7707492, "num_input_records": 4236, "num_output_records": 3745, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004565267, "job": 8, "event": "table_file_deletion", "file_number": 20}
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004567254, "job": 8, "event": "table_file_deletion", "file_number": 18}
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.519265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.567326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.567333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.567335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.567337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:53:24 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:53:24.567339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:53:25 np0005466031 systemd-logind[786]: New session 44 of user zuul.
Oct  2 07:53:25 np0005466031 systemd[1]: Started Session 44 of User zuul.
Oct  2 07:53:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:25.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:25.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:26 np0005466031 python3.9[114427]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:53:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:27.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:27.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:27 np0005466031 python3.9[114583]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:28 np0005466031 python3.9[114736]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:29 np0005466031 python3.9[114888]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:29.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:29.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:30 np0005466031 python3.9[115012]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406008.761382-164-41034999642440/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=edfe03f0a14457bb0a07577792bb3a4d77a483bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:30 np0005466031 python3.9[115164]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:31.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:31 np0005466031 python3.9[115287]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406010.3874934-164-105640720652171/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=3ecf5e4f77066a77590b5118d192b2b931dec8bf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:53:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:31.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:53:32 np0005466031 python3.9[115440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:32 np0005466031 python3.9[115563]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406011.623324-164-151305919232767/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=5f1469499e712325b861dd160ba4d2ddac09ea4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:33 np0005466031 python3.9[115715]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:33.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:33.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:33 np0005466031 python3.9[115868]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:34 np0005466031 python3.9[116020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:35 np0005466031 python3.9[116143]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406014.107172-341-168498063509010/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=7db2afed4b13296c45a2cf00abadabd50a0a1682 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:35.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:35.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:35 np0005466031 python3.9[116295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:36 np0005466031 python3.9[116469]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406015.2573953-341-128003747098142/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=900083829e6d3cf8d122351d6d42abd08dd175ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:36 np0005466031 python3.9[116621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:37.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:37 np0005466031 python3.9[116744]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406016.4996092-341-47607726157650/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=95708530dd194e5cb489ae940b927577eaddb2d5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:37.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:38 np0005466031 python3.9[116897]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:38 np0005466031 python3.9[117049]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:39.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:39 np0005466031 python3.9[117201]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:39.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:39 np0005466031 python3.9[117325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406018.9839025-518-62240685556827/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=cbf22d5b9260d60aa4c92bfa5daee4958f111007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:40 np0005466031 python3.9[117477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:41 np0005466031 python3.9[117600]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406020.0897114-518-176467304188767/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=900083829e6d3cf8d122351d6d42abd08dd175ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:41.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:41.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:41 np0005466031 python3.9[117752]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:42 np0005466031 python3.9[117876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406021.2082782-518-254180939181804/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=7fe6a0ffc045068bea955eeb4268d5a99c1a632e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:43.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:43 np0005466031 python3.9[118028]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:43.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:44 np0005466031 python3.9[118181]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:44 np0005466031 python3.9[118304]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406023.634067-722-80004006399799/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:45.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:45 np0005466031 python3.9[118456]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:45.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:46 np0005466031 python3.9[118609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:46 np0005466031 python3.9[118732]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406025.66928-795-214872541741639/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:47 np0005466031 python3.9[118884]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:47.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:47.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:47 np0005466031 python3.9[119037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:48 np0005466031 python3.9[119160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406027.4634893-864-35448260207812/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:49 np0005466031 python3.9[119312]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:49.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:49.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:49 np0005466031 python3.9[119465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:50 np0005466031 python3.9[119588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406029.4254656-936-143988641270472/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:51 np0005466031 python3.9[119740]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:51.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:51.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:51 np0005466031 python3.9[119892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:52 np0005466031 python3.9[120016]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406031.3565211-1009-36164624599144/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:53 np0005466031 python3.9[120168]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:53.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:53.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:53 np0005466031 python3.9[120320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:54 np0005466031 python3.9[120444]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406033.2463481-1084-73015920666386/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:55 np0005466031 systemd[1]: session-44.scope: Deactivated successfully.
Oct  2 07:53:55 np0005466031 systemd[1]: session-44.scope: Consumed 23.679s CPU time.
Oct  2 07:53:55 np0005466031 systemd-logind[786]: Session 44 logged out. Waiting for processes to exit.
Oct  2 07:53:55 np0005466031 systemd-logind[786]: Removed session 44.
Oct  2 07:53:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:55.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:55.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:56 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:53:56 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:53:56 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:53:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:57.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:57.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:53:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:59.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:53:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:53:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:59.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:00 np0005466031 systemd-logind[786]: New session 45 of user zuul.
Oct  2 07:54:00 np0005466031 systemd[1]: Started Session 45 of User zuul.
Oct  2 07:54:01 np0005466031 python3.9[120809]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:54:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:01.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:54:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:01.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:02 np0005466031 python3.9[120962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:02 np0005466031 python3.9[121135]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406041.4814367-69-65435599754720/.source.conf _original_basename=ceph.conf follow=False checksum=bc6368cedc2ad3c8a4bd89508113374e22439583 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:54:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:54:03 np0005466031 python3.9[121287]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:54:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:03.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:54:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:03.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:03 np0005466031 python3.9[121411]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406042.9391356-69-252458083960468/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=75f34a13e5eafe465b3328865c9fc53d2eab5578 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:04 np0005466031 systemd[1]: session-45.scope: Deactivated successfully.
Oct  2 07:54:04 np0005466031 systemd[1]: session-45.scope: Consumed 2.625s CPU time.
Oct  2 07:54:04 np0005466031 systemd-logind[786]: Session 45 logged out. Waiting for processes to exit.
Oct  2 07:54:04 np0005466031 systemd-logind[786]: Removed session 45.
Oct  2 07:54:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:05.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:05.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:07.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:07.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:09.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:09.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:09 np0005466031 systemd-logind[786]: New session 46 of user zuul.
Oct  2 07:54:09 np0005466031 systemd[1]: Started Session 46 of User zuul.
Oct  2 07:54:10 np0005466031 python3.9[121592]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:54:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:11.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:11.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:12 np0005466031 python3.9[121749]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:12 np0005466031 python3.9[121901]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:13.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:13.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:13 np0005466031 python3.9[122051]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:54:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:14 np0005466031 python3.9[122204]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  2 07:54:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:15.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:15.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:16 np0005466031 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct  2 07:54:16 np0005466031 python3.9[122412]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:54:17 np0005466031 python3.9[122496]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:54:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:17.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:17.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:19.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:19.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:20 np0005466031 python3.9[122651]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:54:20 np0005466031 python3[122806]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct  2 07:54:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:54:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:21.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:54:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:21.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:21 np0005466031 python3.9[122958]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:22 np0005466031 python3.9[123111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:22 np0005466031 python3.9[123189]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:54:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:23.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:54:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:54:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:23.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:54:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:23 np0005466031 python3.9[123341]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:24 np0005466031 python3.9[123420]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.fbc37p_m recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:24 np0005466031 python3.9[123572]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:25 np0005466031 python3.9[123650]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:25.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:25.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:26 np0005466031 python3.9[123803]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:26 np0005466031 python3[123956]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 07:54:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:27.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:27.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:27 np0005466031 python3.9[124108]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:28 np0005466031 python3.9[124234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406067.16034-438-280832182681988/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:29 np0005466031 python3.9[124386]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:29.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:29.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:29 np0005466031 python3.9[124511]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406068.6904101-483-35819370454116/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:30 np0005466031 python3.9[124664]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:30 np0005466031 python3.9[124789]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406069.8942273-528-215551149429464/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:31.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:31 np0005466031 python3.9[124941]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:31.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:32 np0005466031 python3.9[125067]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406071.1332443-573-233772206337064/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:32 np0005466031 python3.9[125219]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:33 np0005466031 python3.9[125344]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406072.3650477-619-98457723956131/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:33.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:33.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:34 np0005466031 python3.9[125497]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:34 np0005466031 python3.9[125649]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:35.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:35 np0005466031 python3.9[125804]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:35.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:36 np0005466031 python3.9[125977]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:37 np0005466031 python3.9[126160]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:54:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:54:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:37.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:54:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:37.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:37 np0005466031 python3.9[126314]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:38 np0005466031 python3.9[126470]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:39.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:39 np0005466031 python3.9[126620]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:54:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:39.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:40 np0005466031 python3.9[126774]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:40 np0005466031 ovs-vsctl[126775]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct  2 07:54:41 np0005466031 python3.9[126927]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:41.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:54:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:41.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:54:42 np0005466031 python3.9[127083]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:42 np0005466031 ovs-vsctl[127084]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct  2 07:54:42 np0005466031 python3.9[127234]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:54:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:43.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:43 np0005466031 python3.9[127388]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:54:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:43.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:54:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:44 np0005466031 python3.9[127541]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:44 np0005466031 python3.9[127619]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:45 np0005466031 python3.9[127771]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:54:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:45.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:54:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:45.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:45 np0005466031 python3.9[127849]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:46 np0005466031 python3.9[128002]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:47 np0005466031 python3.9[128154]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:54:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:47.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:54:47 np0005466031 python3.9[128232]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:47.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:48 np0005466031 python3.9[128385]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:48 np0005466031 python3.9[128463]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:49.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:49.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:49 np0005466031 python3.9[128615]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:54:49 np0005466031 systemd[1]: Reloading.
Oct  2 07:54:49 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:54:49 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:54:50 np0005466031 python3.9[128804]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:51 np0005466031 python3.9[128882]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:51.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:51.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:51 np0005466031 python3.9[129035]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:52 np0005466031 python3.9[129113]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:53 np0005466031 python3.9[129265]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:54:53 np0005466031 systemd[1]: Reloading.
Oct  2 07:54:53 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:54:53 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:54:53 np0005466031 systemd[1]: Starting Create netns directory...
Oct  2 07:54:53 np0005466031 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:54:53 np0005466031 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:54:53 np0005466031 systemd[1]: Finished Create netns directory.
Oct  2 07:54:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:53.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:53.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:54 np0005466031 python3.9[129460]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:55 np0005466031 python3.9[129612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:55.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:55 np0005466031 python3.9[129735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406094.6869273-1371-132200628468073/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:55.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:56 np0005466031 python3.9[129938]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:57 np0005466031 python3.9[130090]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:57.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:57.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:58 np0005466031 python3.9[130214]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406097.0537708-1446-16981051868431/.source.json _original_basename=.m_fpqra4 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:58 np0005466031 python3.9[130366]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:59.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:54:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:59.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:01 np0005466031 python3.9[130794]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct  2 07:55:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:01.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:01.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:02 np0005466031 python3.9[130949]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 07:55:03 np0005466031 python3.9[131213]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 07:55:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:03.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:03.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:55:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:55:04 np0005466031 python3[131409]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 07:55:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:05.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:05.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:07.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:07.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:09 np0005466031 podman[131422]: 2025-10-02 11:55:09.341229612 +0000 UTC m=+4.409379850 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  2 07:55:09 np0005466031 podman[131544]: 2025-10-02 11:55:09.471855664 +0000 UTC m=+0.046281768 container create f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 07:55:09 np0005466031 podman[131544]: 2025-10-02 11:55:09.446387528 +0000 UTC m=+0.020813612 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  2 07:55:09 np0005466031 python3[131409]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  2 07:55:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:09.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:09.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:10 np0005466031 python3.9[131736]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:55:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:11.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:11.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:11 np0005466031 python3.9[131890]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:12 np0005466031 python3.9[131967]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:55:13 np0005466031 python3.9[132118]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406112.3898365-1710-135197248565760/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:13.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:13 np0005466031 python3.9[132194]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:55:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:13.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:13 np0005466031 systemd[1]: Reloading.
Oct  2 07:55:13 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:55:13 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:55:14 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:14 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:14 np0005466031 python3.9[132356]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:55:14 np0005466031 systemd[1]: Reloading.
Oct  2 07:55:14 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:55:14 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:55:14 np0005466031 systemd[1]: Starting ovn_controller container...
Oct  2 07:55:15 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:55:15 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f469e7501e355e8d8a46f1cdb8609f50b2add68685fd35b66d8c8aa4fd4ac15/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  2 07:55:15 np0005466031 systemd[1]: Started /usr/bin/podman healthcheck run f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097.
Oct  2 07:55:15 np0005466031 podman[132397]: 2025-10-02 11:55:15.098433838 +0000 UTC m=+0.149917210 container init f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: + sudo -E kolla_set_configs
Oct  2 07:55:15 np0005466031 podman[132397]: 2025-10-02 11:55:15.1320987 +0000 UTC m=+0.183582072 container start f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 07:55:15 np0005466031 edpm-start-podman-container[132397]: ovn_controller
Oct  2 07:55:15 np0005466031 systemd[1]: Created slice User Slice of UID 0.
Oct  2 07:55:15 np0005466031 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  2 07:55:15 np0005466031 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  2 07:55:15 np0005466031 systemd[1]: Starting User Manager for UID 0...
Oct  2 07:55:15 np0005466031 edpm-start-podman-container[132396]: Creating additional drop-in dependency for "ovn_controller" (f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097)
Oct  2 07:55:15 np0005466031 podman[132419]: 2025-10-02 11:55:15.242623791 +0000 UTC m=+0.100435671 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 07:55:15 np0005466031 systemd[1]: f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097-3ee979442e525a09.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 07:55:15 np0005466031 systemd[1]: f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097-3ee979442e525a09.service: Failed with result 'exit-code'.
Oct  2 07:55:15 np0005466031 systemd[1]: Reloading.
Oct  2 07:55:15 np0005466031 systemd[132446]: Queued start job for default target Main User Target.
Oct  2 07:55:15 np0005466031 systemd[132446]: Created slice User Application Slice.
Oct  2 07:55:15 np0005466031 systemd[132446]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  2 07:55:15 np0005466031 systemd[132446]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 07:55:15 np0005466031 systemd[132446]: Reached target Paths.
Oct  2 07:55:15 np0005466031 systemd[132446]: Reached target Timers.
Oct  2 07:55:15 np0005466031 systemd[132446]: Starting D-Bus User Message Bus Socket...
Oct  2 07:55:15 np0005466031 systemd[132446]: Starting Create User's Volatile Files and Directories...
Oct  2 07:55:15 np0005466031 systemd[132446]: Finished Create User's Volatile Files and Directories.
Oct  2 07:55:15 np0005466031 systemd[132446]: Listening on D-Bus User Message Bus Socket.
Oct  2 07:55:15 np0005466031 systemd[132446]: Reached target Sockets.
Oct  2 07:55:15 np0005466031 systemd[132446]: Reached target Basic System.
Oct  2 07:55:15 np0005466031 systemd[132446]: Reached target Main User Target.
Oct  2 07:55:15 np0005466031 systemd[132446]: Startup finished in 135ms.
Oct  2 07:55:15 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:55:15 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:55:15 np0005466031 systemd[1]: Started User Manager for UID 0.
Oct  2 07:55:15 np0005466031 systemd[1]: Started ovn_controller container.
Oct  2 07:55:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:15.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:15 np0005466031 systemd[1]: Started Session c1 of User root.
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: INFO:__main__:Validating config file
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: INFO:__main__:Writing out command to execute
Oct  2 07:55:15 np0005466031 systemd[1]: session-c1.scope: Deactivated successfully.
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: ++ cat /run_command
Oct  2 07:55:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:15.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: + ARGS=
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: + sudo kolla_copy_cacerts
Oct  2 07:55:15 np0005466031 systemd[1]: Started Session c2 of User root.
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: + [[ ! -n '' ]]
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: + . kolla_extend_start
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: + umask 0022
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct  2 07:55:15 np0005466031 systemd[1]: session-c2.scope: Deactivated successfully.
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct  2 07:55:15 np0005466031 NetworkManager[44907]: <info>  [1759406115.7626] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct  2 07:55:15 np0005466031 NetworkManager[44907]: <info>  [1759406115.7638] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:55:15 np0005466031 NetworkManager[44907]: <info>  [1759406115.7655] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct  2 07:55:15 np0005466031 NetworkManager[44907]: <info>  [1759406115.7661] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct  2 07:55:15 np0005466031 NetworkManager[44907]: <info>  [1759406115.7665] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  2 07:55:15 np0005466031 kernel: br-int: entered promiscuous mode
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 07:55:15 np0005466031 NetworkManager[44907]: <info>  [1759406115.7843] manager: (ovn-db2221-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct  2 07:55:15 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:15Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 07:55:15 np0005466031 NetworkManager[44907]: <info>  [1759406115.7849] manager: (ovn-bfdd72-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct  2 07:55:15 np0005466031 systemd-udevd[132547]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:55:15 np0005466031 kernel: genev_sys_6081: entered promiscuous mode
Oct  2 07:55:15 np0005466031 NetworkManager[44907]: <info>  [1759406115.8154] device (genev_sys_6081): carrier: link connected
Oct  2 07:55:15 np0005466031 NetworkManager[44907]: <info>  [1759406115.8156] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Oct  2 07:55:15 np0005466031 systemd-udevd[132548]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:55:16 np0005466031 NetworkManager[44907]: <info>  [1759406116.0917] manager: (ovn-17f118-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Oct  2 07:55:16 np0005466031 python3.9[132682]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:55:16 np0005466031 ovs-vsctl[132730]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct  2 07:55:17 np0005466031 python3.9[132882]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:55:17 np0005466031 ovs-vsctl[132884]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct  2 07:55:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:17.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:17.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:18 np0005466031 python3.9[133038]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:55:18 np0005466031 ovs-vsctl[133039]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct  2 07:55:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:18 np0005466031 systemd[1]: session-46.scope: Deactivated successfully.
Oct  2 07:55:18 np0005466031 systemd[1]: session-46.scope: Consumed 55.310s CPU time.
Oct  2 07:55:18 np0005466031 systemd-logind[786]: Session 46 logged out. Waiting for processes to exit.
Oct  2 07:55:18 np0005466031 systemd-logind[786]: Removed session 46.
Oct  2 07:55:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:19.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:19.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:21.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:21.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:23.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:23.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:24 np0005466031 systemd-logind[786]: New session 48 of user zuul.
Oct  2 07:55:24 np0005466031 systemd[1]: Started Session 48 of User zuul.
Oct  2 07:55:25 np0005466031 python3.9[133220]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:55:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:25.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:25.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:25 np0005466031 systemd[1]: Stopping User Manager for UID 0...
Oct  2 07:55:25 np0005466031 systemd[132446]: Activating special unit Exit the Session...
Oct  2 07:55:25 np0005466031 systemd[132446]: Stopped target Main User Target.
Oct  2 07:55:25 np0005466031 systemd[132446]: Stopped target Basic System.
Oct  2 07:55:25 np0005466031 systemd[132446]: Stopped target Paths.
Oct  2 07:55:25 np0005466031 systemd[132446]: Stopped target Sockets.
Oct  2 07:55:25 np0005466031 systemd[132446]: Stopped target Timers.
Oct  2 07:55:25 np0005466031 systemd[132446]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 07:55:25 np0005466031 systemd[132446]: Closed D-Bus User Message Bus Socket.
Oct  2 07:55:25 np0005466031 systemd[132446]: Stopped Create User's Volatile Files and Directories.
Oct  2 07:55:25 np0005466031 systemd[132446]: Removed slice User Application Slice.
Oct  2 07:55:25 np0005466031 systemd[132446]: Reached target Shutdown.
Oct  2 07:55:25 np0005466031 systemd[132446]: Finished Exit the Session.
Oct  2 07:55:25 np0005466031 systemd[132446]: Reached target Exit the Session.
Oct  2 07:55:25 np0005466031 systemd[1]: user@0.service: Deactivated successfully.
Oct  2 07:55:25 np0005466031 systemd[1]: Stopped User Manager for UID 0.
Oct  2 07:55:25 np0005466031 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  2 07:55:25 np0005466031 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  2 07:55:25 np0005466031 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  2 07:55:25 np0005466031 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  2 07:55:25 np0005466031 systemd[1]: Removed slice User Slice of UID 0.
Oct  2 07:55:26 np0005466031 python3.9[133380]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:27 np0005466031 python3.9[133532]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:27.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:27.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:27 np0005466031 python3.9[133684]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:28 np0005466031 python3.9[133837]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:29 np0005466031 python3.9[133989]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:29.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:29.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:29 np0005466031 python3.9[134139]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:55:30 np0005466031 python3.9[134292]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  2 07:55:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:31.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:31.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:32 np0005466031 python3.9[134443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:33.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:33.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:34 np0005466031 python3.9[134564]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406131.5332434-225-200038415628166/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:34 np0005466031 python3.9[134716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:35 np0005466031 python3.9[134837]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406134.3977613-270-198305392811918/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:35.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:35.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:36 np0005466031 python3.9[134990]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:55:37 np0005466031 python3.9[135124]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:55:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:37.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:37.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:39.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:39 np0005466031 python3.9[135278]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:55:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:39.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:40 np0005466031 python3.9[135432]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:41 np0005466031 python3.9[135553]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406139.9864109-381-272449214545139/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:41 np0005466031 python3.9[135703]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:41.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:41.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:42 np0005466031 python3.9[135825]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406141.1643894-381-140620034867069/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:43.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:43.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:43 np0005466031 python3.9[135975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:44 np0005466031 python3.9[136097]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406143.374948-513-119655582828116/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:45 np0005466031 python3.9[136247]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:45 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:45Z|00025|memory|INFO|16256 kB peak resident set size after 29.7 seconds
Oct  2 07:55:45 np0005466031 ovn_controller[132413]: 2025-10-02T11:55:45Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct  2 07:55:45 np0005466031 podman[136342]: 2025-10-02 11:55:45.505067506 +0000 UTC m=+0.175088897 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:55:45 np0005466031 python3.9[136380]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406144.579625-513-271387656263617/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:45.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:45.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:46 np0005466031 python3.9[136545]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:55:46 np0005466031 python3.9[136699]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:47 np0005466031 python3.9[136851]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:47.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:47.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:47 np0005466031 python3.9[136930]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:48 np0005466031 python3.9[137082]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:49 np0005466031 python3.9[137160]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:49.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:49.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:49 np0005466031 python3.9[137312]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:50 np0005466031 python3.9[137465]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:51 np0005466031 python3.9[137543]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:51.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:51 np0005466031 python3.9[137695]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:51.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:52 np0005466031 python3.9[137774]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:52 np0005466031 python3.9[137926]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:55:52 np0005466031 systemd[1]: Reloading.
Oct  2 07:55:52 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:55:53 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:55:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:53.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:53.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:53 np0005466031 python3.9[138115]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:54 np0005466031 python3.9[138193]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:55 np0005466031 python3.9[138345]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:55 np0005466031 python3.9[138423]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:55.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:55.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:56 np0005466031 python3.9[138576]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:55:56 np0005466031 systemd[1]: Reloading.
Oct  2 07:55:56 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:55:56 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:55:56 np0005466031 systemd[1]: Starting Create netns directory...
Oct  2 07:55:56 np0005466031 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:55:56 np0005466031 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:55:56 np0005466031 systemd[1]: Finished Create netns directory.
Oct  2 07:55:57 np0005466031 python3.9[138819]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:57.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:57.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:58 np0005466031 python3.9[138972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:58 np0005466031 python3.9[139095]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406157.8305962-966-116757220971219/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:55:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:59.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:55:59 np0005466031 python3.9[139247]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:55:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:59.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:00 np0005466031 python3.9[139400]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:56:01 np0005466031 python3.9[139523]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406159.9784913-1041-123847224807279/.source.json _original_basename=.99n6b3uz follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:01.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:01.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:01 np0005466031 python3.9[139675]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:03.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:03.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:56:04 np0005466031 python3.9[140104]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct  2 07:56:05 np0005466031 python3.9[140256]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 07:56:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:05.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:05.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:06 np0005466031 python3.9[140409]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 07:56:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:07.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:07.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:56:07 np0005466031 python3[140586]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 07:56:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:56:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2181 writes, 12K keys, 2181 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 2181 writes, 2181 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2181 writes, 12K keys, 2181 commit groups, 1.0 writes per commit group, ingest: 23.53 MB, 0.04 MB/s#012Interval WAL: 2181 writes, 2181 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    149.6      0.08              0.03         4    0.019       0      0       0.0       0.0#012  L6      1/0    7.35 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    185.8    158.3      0.15              0.07         3    0.051     11K   1270       0.0       0.0#012 Sum      1/0    7.35 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1    123.5    155.4      0.23              0.09         7    0.033     11K   1270       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1    124.9    157.2      0.23              0.09         6    0.038     11K   1270       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    185.8    158.3      0.15              0.07         3    0.051     11K   1270       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    154.9      0.07              0.03         3    0.025       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.011, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5570d4fad1f0#2 capacity: 308.00 MB usage: 984.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(49,851.05 KB,0.269838%) FilterBlock(7,41.42 KB,0.0131335%) IndexBlock(7,91.70 KB,0.0290759%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 07:56:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:09.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:09.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:56:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:11.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:11.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:13.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:13.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:15.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:56:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:15.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:16 np0005466031 podman[140788]: 2025-10-02 11:56:16.166457483 +0000 UTC m=+0.584816377 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.442112) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176442153, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1766, "num_deletes": 251, "total_data_size": 4443224, "memory_usage": 4497552, "flush_reason": "Manual Compaction"}
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176522143, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2913291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10648, "largest_seqno": 12409, "table_properties": {"data_size": 2905944, "index_size": 4418, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14364, "raw_average_key_size": 19, "raw_value_size": 2891373, "raw_average_value_size": 3901, "num_data_blocks": 199, "num_entries": 741, "num_filter_entries": 741, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406006, "oldest_key_time": 1759406006, "file_creation_time": 1759406176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 80638 microseconds, and 8991 cpu microseconds.
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.522743) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2913291 bytes OK
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.522908) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.721999) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.722072) EVENT_LOG_v1 {"time_micros": 1759406176722058, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.722108) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4435315, prev total WAL file size 4435315, number of live WAL files 2.
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.726675) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2845KB)], [21(7526KB)]
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176726760, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 10620783, "oldest_snapshot_seqno": -1}
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 3969 keys, 8437740 bytes, temperature: kUnknown
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176859012, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 8437740, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8408318, "index_size": 18368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 96232, "raw_average_key_size": 24, "raw_value_size": 8333712, "raw_average_value_size": 2099, "num_data_blocks": 794, "num_entries": 3969, "num_filter_entries": 3969, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759406176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.859281) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8437740 bytes
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.898183) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.3 rd, 63.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 7.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(6.5) write-amplify(2.9) OK, records in: 4486, records dropped: 517 output_compression: NoCompression
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.898228) EVENT_LOG_v1 {"time_micros": 1759406176898211, "job": 10, "event": "compaction_finished", "compaction_time_micros": 132325, "compaction_time_cpu_micros": 54031, "output_level": 6, "num_output_files": 1, "total_output_size": 8437740, "num_input_records": 4486, "num_output_records": 3969, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176898774, "job": 10, "event": "table_file_deletion", "file_number": 23}
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176900123, "job": 10, "event": "table_file_deletion", "file_number": 21}
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.726492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.900181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.900188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.900191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.900193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:56:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:56:16.900195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:56:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 07:56:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct  2 07:56:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:17.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:17.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:18 np0005466031 podman[140601]: 2025-10-02 11:56:18.666244756 +0000 UTC m=+10.739563098 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 07:56:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:56:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:56:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:56:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:18 np0005466031 podman[141076]: 2025-10-02 11:56:18.80794003 +0000 UTC m=+0.025773066 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 07:56:18 np0005466031 podman[141076]: 2025-10-02 11:56:18.969830077 +0000 UTC m=+0.187663093 container create c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 07:56:18 np0005466031 python3[140586]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 07:56:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:19.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:56:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:56:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:19.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:19 np0005466031 python3.9[141266]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:56:20 np0005466031 python3.9[141421]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:21 np0005466031 python3.9[141497]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:56:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:21.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:56:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:21.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:21 np0005466031 python3.9[141649]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406181.3081937-1305-32182373679238/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:22 np0005466031 python3.9[141725]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:56:22 np0005466031 systemd[1]: Reloading.
Oct  2 07:56:22 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:56:22 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:56:23 np0005466031 python3.9[141835]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:23 np0005466031 systemd[1]: Reloading.
Oct  2 07:56:23 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:56:23 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:56:23 np0005466031 systemd[1]: Starting ovn_metadata_agent container...
Oct  2 07:56:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:23.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:23 np0005466031 systemd[1]: Started libcrun container.
Oct  2 07:56:23 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f9ba8f975ef9d4fa76a73f6bcd638318acd73437e41980ee2773b62c664660c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct  2 07:56:23 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f9ba8f975ef9d4fa76a73f6bcd638318acd73437e41980ee2773b62c664660c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 07:56:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:23.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:23 np0005466031 systemd[1]: Started /usr/bin/podman healthcheck run c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019.
Oct  2 07:56:23 np0005466031 podman[141876]: 2025-10-02 11:56:23.782849162 +0000 UTC m=+0.149361496 container init c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: + sudo -E kolla_set_configs
Oct  2 07:56:23 np0005466031 podman[141876]: 2025-10-02 11:56:23.81703149 +0000 UTC m=+0.183543794 container start c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  2 07:56:23 np0005466031 edpm-start-podman-container[141876]: ovn_metadata_agent
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Validating config file
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Copying service configuration files
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Writing out command to execute
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Setting permission for /var/lib/neutron
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: ++ cat /run_command
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: + CMD=neutron-ovn-metadata-agent
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: + ARGS=
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: + sudo kolla_copy_cacerts
Oct  2 07:56:23 np0005466031 edpm-start-podman-container[141875]: Creating additional drop-in dependency for "ovn_metadata_agent" (c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019)
Oct  2 07:56:23 np0005466031 podman[141900]: 2025-10-02 11:56:23.915819244 +0000 UTC m=+0.082093883 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 07:56:23 np0005466031 systemd[1]: Reloading.
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: + [[ ! -n '' ]]
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: + . kolla_extend_start
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: Running command: 'neutron-ovn-metadata-agent'
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: + umask 0022
Oct  2 07:56:23 np0005466031 ovn_metadata_agent[141893]: + exec neutron-ovn-metadata-agent
Oct  2 07:56:24 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:56:24 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:56:24 np0005466031 systemd[1]: Started ovn_metadata_agent container.
Oct  2 07:56:25 np0005466031 systemd[1]: session-48.scope: Deactivated successfully.
Oct  2 07:56:25 np0005466031 systemd[1]: session-48.scope: Consumed 57.003s CPU time.
Oct  2 07:56:25 np0005466031 systemd-logind[786]: Session 48 logged out. Waiting for processes to exit.
Oct  2 07:56:25 np0005466031 systemd-logind[786]: Removed session 48.
Oct  2 07:56:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:25.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:25.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.764 141898 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.764 141898 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.764 141898 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.765 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.765 141898 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.765 141898 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.765 141898 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.765 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.765 141898 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.765 141898 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.765 141898 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.766 141898 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.766 141898 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.766 141898 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.766 141898 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.766 141898 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.766 141898 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.766 141898 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.766 141898 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.766 141898 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.767 141898 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.767 141898 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.767 141898 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.767 141898 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.767 141898 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.767 141898 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.767 141898 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.767 141898 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.767 141898 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.767 141898 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.767 141898 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.768 141898 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.768 141898 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.768 141898 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.768 141898 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.768 141898 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.768 141898 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.768 141898 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.768 141898 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.769 141898 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.769 141898 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.769 141898 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.769 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.769 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.769 141898 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.769 141898 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.769 141898 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.769 141898 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.769 141898 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.770 141898 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.770 141898 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.770 141898 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.770 141898 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.770 141898 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.770 141898 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.770 141898 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.770 141898 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.770 141898 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.770 141898 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.771 141898 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.771 141898 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.771 141898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.771 141898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.771 141898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.771 141898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.771 141898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.771 141898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.771 141898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.771 141898 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.772 141898 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.772 141898 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.772 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.772 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.772 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.772 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.772 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.772 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.772 141898 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.773 141898 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.773 141898 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.773 141898 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.773 141898 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.773 141898 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.773 141898 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.773 141898 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.773 141898 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.773 141898 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.773 141898 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.774 141898 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.774 141898 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.774 141898 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.774 141898 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.774 141898 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.774 141898 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.774 141898 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.774 141898 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.774 141898 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.774 141898 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.775 141898 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.775 141898 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.775 141898 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.775 141898 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.775 141898 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.775 141898 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.775 141898 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.775 141898 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.775 141898 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.775 141898 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.776 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.776 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.776 141898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.776 141898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.776 141898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.776 141898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.776 141898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.776 141898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.776 141898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.777 141898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.777 141898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.777 141898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.777 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.777 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.777 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.777 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.777 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.777 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.778 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.778 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.778 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.778 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.778 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.778 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.778 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.778 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.778 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.778 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.779 141898 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.779 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.779 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.779 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.779 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.779 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.779 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.779 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.779 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.780 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.780 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.780 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.780 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.780 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.780 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.780 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.780 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.780 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.780 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.781 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.781 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.781 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.781 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.781 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.781 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.781 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.781 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.781 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.781 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.782 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.782 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.782 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.782 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.782 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.782 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.782 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.782 141898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.782 141898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.783 141898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.783 141898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.783 141898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.783 141898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.783 141898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.783 141898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.783 141898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.783 141898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.783 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.783 141898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.784 141898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.784 141898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.784 141898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.784 141898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.784 141898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.784 141898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.784 141898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.784 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.784 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.785 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.785 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.785 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.785 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.785 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.785 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.785 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.785 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.785 141898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.786 141898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.786 141898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.786 141898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.786 141898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.786 141898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.786 141898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.786 141898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.786 141898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.786 141898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.786 141898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.787 141898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.787 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.787 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.787 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.787 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.787 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.787 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.787 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.787 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.788 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.788 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.788 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.788 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.788 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.788 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.788 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.788 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.788 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.788 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.789 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.789 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.789 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.789 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.789 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.789 141898 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.789 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.789 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.789 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.790 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.790 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.790 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.790 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.790 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.790 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.790 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.790 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.790 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.791 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.791 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.791 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.791 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.791 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.791 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.791 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.791 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.791 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.792 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.792 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.792 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.792 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.792 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.792 141898 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.792 141898 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.792 141898 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.792 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.793 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.793 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.793 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.793 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.793 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.793 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.793 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.793 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.793 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.793 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.794 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.794 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.794 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.794 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.794 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.794 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.794 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.794 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.794 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.795 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.795 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.795 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.795 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.795 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.795 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.795 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.795 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.795 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.796 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.796 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.796 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.796 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.796 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.796 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.796 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.796 141898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.796 141898 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.806 141898 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.806 141898 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.806 141898 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.807 141898 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.807 141898 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.822 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name b9588630-ee40-495c-89d2-4219f6b0f0b5 (UUID: b9588630-ee40-495c-89d2-4219f6b0f0b5) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.842 141898 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.842 141898 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.843 141898 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.843 141898 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.846 141898 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.851 141898 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.857 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'b9588630-ee40-495c-89d2-4219f6b0f0b5'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], external_ids={}, name=b9588630-ee40-495c-89d2-4219f6b0f0b5, nb_cfg_timestamp=1759406123780, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.858 141898 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fd2bc1abb20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.859 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.859 141898 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.860 141898 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.860 141898 INFO oslo_service.service [-] Starting 1 workers
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.864 141898 DEBUG oslo_service.service [-] Started child 142025 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.868 142025 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-372900'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.869 141898 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpshg502ls/privsep.sock']
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.893 142025 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.894 142025 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.894 142025 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.897 142025 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.903 142025 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  2 07:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:25.910 142025 INFO eventlet.wsgi.server [-] (142025) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct  2 07:56:26 np0005466031 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct  2 07:56:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:56:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:56:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:26.608 141898 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct  2 07:56:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:26.610 141898 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpshg502ls/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct  2 07:56:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:26.459 142062 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct  2 07:56:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:26.465 142062 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct  2 07:56:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:26.467 142062 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct  2 07:56:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:26.468 142062 INFO oslo.privsep.daemon [-] privsep daemon running as pid 142062
Oct  2 07:56:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:26.613 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe05783-d878-4697-bf38-d5689c1e8c76]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.176 142062 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.176 142062 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.176 142062 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 07:56:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:27.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:27.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.782 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[a2385538-3b45-4df4-a07f-29d075403d6f]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.785 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, column=external_ids, values=({'neutron:ovn-metadata-id': 'ceef3f5c-f5a2-535f-8465-3b528e2aee4d'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.797 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.805 141898 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.805 141898 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.805 141898 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.805 141898 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.806 141898 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.806 141898 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.806 141898 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.806 141898 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.807 141898 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.807 141898 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.807 141898 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.808 141898 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.808 141898 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.808 141898 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.808 141898 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.809 141898 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.809 141898 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.809 141898 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.809 141898 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.809 141898 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.810 141898 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.810 141898 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.810 141898 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.810 141898 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.811 141898 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.811 141898 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.812 141898 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.812 141898 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.812 141898 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.812 141898 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.812 141898 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.813 141898 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.813 141898 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.813 141898 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.814 141898 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.814 141898 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.814 141898 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.815 141898 DEBUG oslo_service.service [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.815 141898 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.815 141898 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.815 141898 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.815 141898 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.816 141898 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.816 141898 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.816 141898 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.816 141898 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.817 141898 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.817 141898 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.817 141898 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.817 141898 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.818 141898 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.818 141898 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.818 141898 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.818 141898 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.818 141898 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.819 141898 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.819 141898 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.819 141898 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.819 141898 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.820 141898 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.820 141898 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.820 141898 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.821 141898 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.821 141898 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.821 141898 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.822 141898 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.822 141898 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.822 141898 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.822 141898 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.822 141898 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.823 141898 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.823 141898 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.823 141898 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.823 141898 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.824 141898 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.824 141898 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.824 141898 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.824 141898 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.824 141898 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.825 141898 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.825 141898 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.825 141898 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.825 141898 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.825 141898 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.826 141898 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.826 141898 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.826 141898 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.826 141898 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.827 141898 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.827 141898 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.827 141898 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.827 141898 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.827 141898 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.828 141898 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.828 141898 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.828 141898 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.828 141898 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.828 141898 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.829 141898 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.829 141898 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.829 141898 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.829 141898 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.830 141898 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.830 141898 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.830 141898 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.830 141898 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.830 141898 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.831 141898 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.831 141898 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.831 141898 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.831 141898 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.832 141898 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.832 141898 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.832 141898 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.832 141898 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.833 141898 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.833 141898 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.834 141898 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.834 141898 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.834 141898 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.835 141898 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.835 141898 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.835 141898 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.835 141898 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.835 141898 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.836 141898 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.836 141898 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.836 141898 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.836 141898 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.837 141898 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.837 141898 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.837 141898 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.837 141898 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.838 141898 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.838 141898 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.838 141898 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.838 141898 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.838 141898 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.839 141898 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.839 141898 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.839 141898 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.839 141898 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.839 141898 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.840 141898 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.840 141898 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.840 141898 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.840 141898 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.840 141898 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.841 141898 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.841 141898 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.841 141898 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.841 141898 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.841 141898 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.842 141898 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.842 141898 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.842 141898 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.842 141898 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.842 141898 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.842 141898 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.843 141898 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.843 141898 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.843 141898 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.843 141898 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.843 141898 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.844 141898 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.844 141898 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.844 141898 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.844 141898 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.844 141898 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.845 141898 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.845 141898 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.845 141898 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.845 141898 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.845 141898 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.846 141898 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.846 141898 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.846 141898 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.846 141898 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.846 141898 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.847 141898 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.847 141898 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.847 141898 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.847 141898 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.847 141898 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.848 141898 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.848 141898 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.848 141898 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.848 141898 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.848 141898 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.849 141898 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.849 141898 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.849 141898 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.849 141898 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.849 141898 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.850 141898 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.850 141898 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.850 141898 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.850 141898 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.850 141898 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.851 141898 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.851 141898 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.851 141898 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.851 141898 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.851 141898 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.852 141898 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.852 141898 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.852 141898 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.852 141898 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.852 141898 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.852 141898 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.853 141898 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.853 141898 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.853 141898 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.853 141898 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.853 141898 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.854 141898 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.854 141898 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.854 141898 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.854 141898 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.854 141898 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.855 141898 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.855 141898 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.855 141898 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.855 141898 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.855 141898 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.856 141898 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.856 141898 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.856 141898 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.856 141898 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.856 141898 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.856 141898 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.857 141898 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.857 141898 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.857 141898 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.857 141898 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.857 141898 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.857 141898 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.858 141898 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.858 141898 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.858 141898 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.858 141898 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.858 141898 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.858 141898 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.858 141898 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.859 141898 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.859 141898 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.859 141898 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.859 141898 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.859 141898 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.859 141898 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.860 141898 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.860 141898 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.860 141898 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.860 141898 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.860 141898 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.860 141898 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.860 141898 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.861 141898 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.861 141898 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.861 141898 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.861 141898 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.861 141898 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.861 141898 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.861 141898 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.862 141898 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.862 141898 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.862 141898 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.862 141898 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.862 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.862 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.863 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.863 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.863 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.863 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.863 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.863 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.864 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.864 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.864 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.864 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.864 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.864 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.864 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.865 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.865 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.865 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.865 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.865 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.865 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.865 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.866 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.866 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.866 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.866 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.866 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.866 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.867 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.867 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.867 141898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.867 141898 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.867 141898 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.867 141898 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.868 141898 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:56:27.868 141898 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  2 07:56:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:29.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:56:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:29.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:30 np0005466031 systemd-logind[786]: New session 49 of user zuul.
Oct  2 07:56:30 np0005466031 systemd[1]: Started Session 49 of User zuul.
Oct  2 07:56:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:31.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:31.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:31 np0005466031 python3.9[142223]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:56:33 np0005466031 python3.9[142380]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:56:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:33.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:33.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:34 np0005466031 python3.9[142546]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:56:34 np0005466031 systemd[1]: Reloading.
Oct  2 07:56:34 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:56:34 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:56:35 np0005466031 python3.9[142732]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:56:35 np0005466031 network[142749]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:56:35 np0005466031 network[142750]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:56:35 np0005466031 network[142751]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:56:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:56:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:35.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:56:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:35.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:56:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:37.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:56:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:37.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:56:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:39.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:39.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:41.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:41.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:42 np0005466031 python3.9[143070]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:56:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5063 writes, 22K keys, 5063 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5063 writes, 741 syncs, 6.83 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5063 writes, 22K keys, 5063 commit groups, 1.0 writes per commit group, ingest: 18.21 MB, 0.03 MB/s#012Interval WAL: 5063 writes, 741 syncs, 6.83 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559e305a74b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559e305a74b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Oct  2 07:56:43 np0005466031 python3.9[143223]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:43.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:43.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:44 np0005466031 python3.9[143377]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:44 np0005466031 python3.9[143530]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:45.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:56:45 np0005466031 python3.9[143683]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:45.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:46 np0005466031 python3.9[143837]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:47 np0005466031 python3.9[143990]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:47 np0005466031 podman[143992]: 2025-10-02 11:56:47.647680358 +0000 UTC m=+0.091159605 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 07:56:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:47.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:47.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:48 np0005466031 python3.9[144169]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:49 np0005466031 python3.9[144321]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:49.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:49.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:56:50 np0005466031 python3.9[144474]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:51 np0005466031 python3.9[144627]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:51.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:51.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:52 np0005466031 python3.9[144780]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:52 np0005466031 python3.9[144932]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:53 np0005466031 python3.9[145084]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:53.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:56:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:53.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:54 np0005466031 python3.9[145237]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:54 np0005466031 podman[145361]: 2025-10-02 11:56:54.427306392 +0000 UTC m=+0.057691868 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 07:56:54 np0005466031 python3.9[145408]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:55 np0005466031 python3.9[145561]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:55.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:55.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:55 np0005466031 python3.9[145713]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:56 np0005466031 python3.9[145866]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:57 np0005466031 python3.9[146018]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 07:56:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:57.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 07:56:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:57.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:57 np0005466031 python3.9[146220]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:58 np0005466031 python3.9[146373]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:56:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:59 np0005466031 python3.9[146525]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 07:56:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:59.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:56:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:56:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:59.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:00 np0005466031 python3.9[146678]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:57:00 np0005466031 systemd[1]: Reloading.
Oct  2 07:57:00 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:57:00 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:57:01 np0005466031 python3.9[146866]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:01.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:01.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:02 np0005466031 python3.9[147020]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:02 np0005466031 python3.9[147173]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:03 np0005466031 python3.9[147326]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:03.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:03.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:04 np0005466031 python3.9[147480]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:05 np0005466031 python3.9[147633]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:05.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:05.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:05 np0005466031 python3.9[147786]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:07 np0005466031 python3.9[147940]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct  2 07:57:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:57:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:07.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:57:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:07.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:07 np0005466031 python3.9[148094]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:57:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:09 np0005466031 python3.9[148252]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 07:57:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:09.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:09.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:10 np0005466031 python3.9[148413]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:57:11 np0005466031 python3.9[148497]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:57:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:11.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:11.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:13.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:13.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:15.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:15.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:17.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:17.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:18 np0005466031 podman[148563]: 2025-10-02 11:57:18.686299807 +0000 UTC m=+0.118929503 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct  2 07:57:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:19.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:57:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:19.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:57:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:21.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:21.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:23.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:23.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:24 np0005466031 podman[148695]: 2025-10-02 11:57:24.637931308 +0000 UTC m=+0.063046553 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 07:57:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:57:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:25.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:57:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:57:25.800 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:57:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:57:25.801 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:57:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:57:25.801 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:57:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:25.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 07:57:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:57:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:57:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:57:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:27.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:27.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:29.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:29.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000059s ======
Oct  2 07:57:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:31.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000059s
Oct  2 07:57:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:57:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:31.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:57:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:33.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:33.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:57:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:57:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:35.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:57:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:35.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:57:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:37.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:37.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:39.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:39.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:57:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:41.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:57:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:57:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:41.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:57:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:43.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:43.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:44 np0005466031 kernel: SELinux:  Converting 2767 SID table entries...
Oct  2 07:57:44 np0005466031 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:57:44 np0005466031 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:57:44 np0005466031 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:57:44 np0005466031 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:57:44 np0005466031 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:57:44 np0005466031 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:57:44 np0005466031 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:57:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:45.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:45.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:57:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:47.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:57:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:47.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:49 np0005466031 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct  2 07:57:49 np0005466031 podman[149044]: 2025-10-02 11:57:49.666099024 +0000 UTC m=+0.088379651 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:57:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27bdf6f0 =====
Oct  2 07:57:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:49.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27bdf6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:49 np0005466031 radosgw[82465]: beast: 0x7f1c27bdf6f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:49.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27bdf6f0 =====
Oct  2 07:57:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:51.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27bdf6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:51 np0005466031 radosgw[82465]: beast: 0x7f1c27bdf6f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:51.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:53.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:53.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:55 np0005466031 kernel: SELinux:  Converting 2767 SID table entries...
Oct  2 07:57:55 np0005466031 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:57:55 np0005466031 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:57:55 np0005466031 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:57:55 np0005466031 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:57:55 np0005466031 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:57:55 np0005466031 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:57:55 np0005466031 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:57:55 np0005466031 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct  2 07:57:55 np0005466031 podman[149081]: 2025-10-02 11:57:55.654878903 +0000 UTC m=+0.074050357 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  2 07:57:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:57:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:55.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:57:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:55.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:57.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:57:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:57.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:59.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:57:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:57:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:59.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:58:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:01.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:58:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:01.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:58:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:03.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:03.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:05.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:05.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:07.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:07.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:09.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:09.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:11.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:11.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:13.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:13.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:15.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:15.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:17.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:17.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:19.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:58:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:19.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:58:20 np0005466031 podman[158465]: 2025-10-02 11:58:20.695787459 +0000 UTC m=+0.119563322 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 07:58:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:21.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:58:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:21.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:58:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:23.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:23.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:58:25.801 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:58:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:58:25.802 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:58:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:58:25.802 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:58:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:25.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:25.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:26 np0005466031 podman[162726]: 2025-10-02 11:58:26.625616773 +0000 UTC m=+0.052562119 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 07:58:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:27.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:27.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:58:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:29.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:58:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000031s ======
Oct  2 07:58:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:29.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct  2 07:58:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:58:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:31.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:58:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:31.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:33.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:33.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:58:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:58:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:58:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:58:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:35.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:58:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:58:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:35.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:58:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:37.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:58:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:37.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:58:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:39.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:39.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:41.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:58:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:41.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:58:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:58:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:58:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:43.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:43.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:58:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:45.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:58:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:45.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:46 np0005466031 kernel: SELinux:  Converting 2768 SID table entries...
Oct  2 07:58:46 np0005466031 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:58:46 np0005466031 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:58:46 np0005466031 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:58:46 np0005466031 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:58:46 np0005466031 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:58:46 np0005466031 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:58:46 np0005466031 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:58:47 np0005466031 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Oct  2 07:58:47 np0005466031 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct  2 07:58:47 np0005466031 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Oct  2 07:58:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:47.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:47.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000031s ======
Oct  2 07:58:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:49.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct  2 07:58:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:49.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:51 np0005466031 podman[166330]: 2025-10-02 11:58:51.29251277 +0000 UTC m=+0.137339659 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 07:58:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:51.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:58:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:51.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:58:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:58:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:53.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:58:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:58:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:53.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:58:55 np0005466031 systemd[1]: Stopping OpenSSH server daemon...
Oct  2 07:58:55 np0005466031 systemd[1]: sshd.service: Deactivated successfully.
Oct  2 07:58:55 np0005466031 systemd[1]: Stopped OpenSSH server daemon.
Oct  2 07:58:55 np0005466031 systemd[1]: sshd.service: Consumed 2.095s CPU time, read 0B from disk, written 4.0K to disk.
Oct  2 07:58:55 np0005466031 systemd[1]: Stopped target sshd-keygen.target.
Oct  2 07:58:55 np0005466031 systemd[1]: Stopping sshd-keygen.target...
Oct  2 07:58:55 np0005466031 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 07:58:55 np0005466031 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 07:58:55 np0005466031 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 07:58:55 np0005466031 systemd[1]: Reached target sshd-keygen.target.
Oct  2 07:58:55 np0005466031 systemd[1]: Starting OpenSSH server daemon...
Oct  2 07:58:55 np0005466031 systemd[1]: Started OpenSSH server daemon.
Oct  2 07:58:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:55.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:58:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:55.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:58:56 np0005466031 podman[167302]: 2025-10-02 11:58:56.728429247 +0000 UTC m=+0.063723616 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct  2 07:58:57 np0005466031 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:58:57 np0005466031 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:58:57 np0005466031 systemd[1]: Reloading.
Oct  2 07:58:57 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:58:57 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:58:57 np0005466031 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:58:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:57.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:57.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:59.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:58:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:59.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:00 np0005466031 systemd[1]: Starting PackageKit Daemon...
Oct  2 07:59:00 np0005466031 systemd[1]: Started PackageKit Daemon.
Oct  2 07:59:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:01.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:01.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:03.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000031s ======
Oct  2 07:59:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:03.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct  2 07:59:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:05 np0005466031 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:59:05 np0005466031 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:59:05 np0005466031 systemd[1]: man-db-cache-update.service: Consumed 10.042s CPU time.
Oct  2 07:59:05 np0005466031 systemd[1]: run-r96ccd414c04f40eb9af7f600a4ea970b.service: Deactivated successfully.
Oct  2 07:59:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:05.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000031s ======
Oct  2 07:59:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:05.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Oct  2 07:59:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:59:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:07.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:59:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:07.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:08 np0005466031 python3.9[175881]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:59:08 np0005466031 systemd[1]: Reloading.
Oct  2 07:59:08 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:08 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:09 np0005466031 python3.9[176071]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:59:09 np0005466031 systemd[1]: Reloading.
Oct  2 07:59:09 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:09 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:59:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:09.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:59:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:59:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:09.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:59:10 np0005466031 python3.9[176262]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:59:10 np0005466031 systemd[1]: Reloading.
Oct  2 07:59:10 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:10 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:11 np0005466031 python3.9[176451]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:59:11 np0005466031 systemd[1]: Reloading.
Oct  2 07:59:11 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:11 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:59:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:11.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:59:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:59:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:11.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:59:13 np0005466031 python3.9[176642]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:13 np0005466031 systemd[1]: Reloading.
Oct  2 07:59:13 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:13 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:13.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:13.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:14 np0005466031 python3.9[176834]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:14 np0005466031 systemd[1]: Reloading.
Oct  2 07:59:14 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:14 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:15 np0005466031 python3.9[177024]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:15 np0005466031 systemd[1]: Reloading.
Oct  2 07:59:15 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:15 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:15.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:15.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:16 np0005466031 python3.9[177215]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:17 np0005466031 python3.9[177370]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:17 np0005466031 systemd[1]: Reloading.
Oct  2 07:59:17 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:17 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:17.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:17.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:18 np0005466031 python3.9[177585]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:59:18 np0005466031 systemd[1]: Reloading.
Oct  2 07:59:18 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:18 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:19 np0005466031 systemd[1]: Listening on libvirt proxy daemon socket.
Oct  2 07:59:19 np0005466031 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct  2 07:59:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:19.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:19 np0005466031 python3.9[177805]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:19.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:20 np0005466031 python3.9[177961]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:21 np0005466031 python3.9[178116]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:21 np0005466031 podman[178118]: 2025-10-02 11:59:21.644134431 +0000 UTC m=+0.121914373 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 07:59:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:21.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:21.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:22 np0005466031 python3.9[178299]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:23 np0005466031 python3.9[178454]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:23 np0005466031 python3.9[178609]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:59:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:23.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:59:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:23.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:24 np0005466031 python3.9[178765]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:25 np0005466031 python3.9[178920]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:59:25.803 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:59:25.804 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 11:59:25.804 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:25.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:25.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:26 np0005466031 python3.9[179076]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:26 np0005466031 python3.9[179231]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:27 np0005466031 podman[179233]: 2025-10-02 11:59:27.052487074 +0000 UTC m=+0.060858993 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 07:59:27 np0005466031 python3.9[179404]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:59:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:27.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:59:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:27.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:28 np0005466031 python3.9[179560]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:29 np0005466031 python3.9[179715]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:29 np0005466031 python3.9[179870]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:29.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:29.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:59:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:31.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:59:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:59:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:32.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:59:32 np0005466031 python3.9[180027]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:59:33 np0005466031 python3.9[180179]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:59:33 np0005466031 python3.9[180331]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:59:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:33.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 07:59:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:34.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 07:59:34 np0005466031 python3.9[180484]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:59:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:34 np0005466031 python3.9[180636]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:59:35 np0005466031 python3.9[180788]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:59:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:35.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:59:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:36.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:59:37 np0005466031 python3.9[180941]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:59:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:37.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:59:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:38.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:38 np0005466031 python3.9[181067]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406376.6991856-1630-58422222002177/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:38 np0005466031 python3.9[181219]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:39 np0005466031 python3.9[181394]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406378.1976542-1630-280622386219454/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:39 np0005466031 python3.9[181546]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:39.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:40.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:40 np0005466031 python3.9[181672]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406379.4142582-1630-37249269119100/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:41 np0005466031 python3.9[181824]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:41 np0005466031 python3.9[181949]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406380.586876-1630-159397940104564/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:41.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:59:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:42.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:59:42 np0005466031 python3.9[182102]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:42 np0005466031 python3.9[182227]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406381.6987085-1630-279328942551721/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:43 np0005466031 python3.9[182492]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:43 np0005466031 python3.9[182634]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406382.8185132-1630-168379866594116/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:43.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:44.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:44 np0005466031 python3.9[182787]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:45 np0005466031 python3.9[182910]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406384.0078113-1630-14931295885091/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:45 np0005466031 python3.9[183062]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:45.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:46.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:59:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:59:46 np0005466031 python3.9[183188]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406385.2148602-1630-81252224758608/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:47.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:48.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:48 np0005466031 python3.9[183341]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:49 np0005466031 python3.9[183494]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.642065) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389642114, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2197, "num_deletes": 253, "total_data_size": 5568142, "memory_usage": 5644072, "flush_reason": "Manual Compaction"}
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389669826, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 3645687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12414, "largest_seqno": 14606, "table_properties": {"data_size": 3636755, "index_size": 5618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 16653, "raw_average_key_size": 18, "raw_value_size": 3619026, "raw_average_value_size": 4080, "num_data_blocks": 251, "num_entries": 887, "num_filter_entries": 887, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406177, "oldest_key_time": 1759406177, "file_creation_time": 1759406389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 27817 microseconds, and 9487 cpu microseconds.
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.669885) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 3645687 bytes OK
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.669907) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.671660) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.671673) EVENT_LOG_v1 {"time_micros": 1759406389671668, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.671693) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 5558510, prev total WAL file size 5558510, number of live WAL files 2.
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.673297) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(3560KB)], [24(8239KB)]
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389673398, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 12083427, "oldest_snapshot_seqno": -1}
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4332 keys, 11528337 bytes, temperature: kUnknown
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389747839, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11528337, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11493682, "index_size": 22705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 105245, "raw_average_key_size": 24, "raw_value_size": 11409848, "raw_average_value_size": 2633, "num_data_blocks": 969, "num_entries": 4332, "num_filter_entries": 4332, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759406389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.748087) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11528337 bytes
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.749262) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.2 rd, 154.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.0 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(6.5) write-amplify(3.2) OK, records in: 4856, records dropped: 524 output_compression: NoCompression
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.749281) EVENT_LOG_v1 {"time_micros": 1759406389749270, "job": 12, "event": "compaction_finished", "compaction_time_micros": 74503, "compaction_time_cpu_micros": 33965, "output_level": 6, "num_output_files": 1, "total_output_size": 11528337, "num_input_records": 4856, "num_output_records": 4332, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389750051, "job": 12, "event": "table_file_deletion", "file_number": 26}
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389751333, "job": 12, "event": "table_file_deletion", "file_number": 24}
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.673116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.751380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.751387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.751388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.751390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-11:59:49.751391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:49.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:59:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:50.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:59:50 np0005466031 python3.9[183647]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:50 np0005466031 python3.9[183799]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:51 np0005466031 python3.9[183951]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:51 np0005466031 podman[184076]: 2025-10-02 11:59:51.908884932 +0000 UTC m=+0.083728028 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 07:59:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:51.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:59:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:52.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:59:52 np0005466031 python3.9[184121]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:52 np0005466031 python3.9[184282]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:53 np0005466031 python3.9[184484]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:53 np0005466031 python3.9[184636]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:53.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:54.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:54 np0005466031 python3.9[184789]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:55 np0005466031 python3.9[184941]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:55 np0005466031 python3.9[185093]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:55.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:56.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:56 np0005466031 python3.9[185246]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:57 np0005466031 python3.9[185398]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:57 np0005466031 podman[185522]: 2025-10-02 11:59:57.429767694 +0000 UTC m=+0.049053560 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 07:59:57 np0005466031 python3.9[185566]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:57.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:58.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 07:59:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:59.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:59 np0005466031 python3.9[185770]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:00.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:00 np0005466031 python3.9[185893]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406399.5498526-2292-152740433603323/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:00 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 08:00:01 np0005466031 python3.9[186045]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:01 np0005466031 python3.9[186168]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406400.6803226-2292-144282950099466/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:01.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:02.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:02 np0005466031 python3.9[186321]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:02 np0005466031 python3.9[186444]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406401.7995946-2292-233211661104936/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:03 np0005466031 python3.9[186596]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:03 np0005466031 python3.9[186720]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406402.9920633-2292-108620761416284/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:03.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:04.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:04 np0005466031 python3.9[186872]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:04 np0005466031 python3.9[186995]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406404.0262544-2292-98371292099373/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:05 np0005466031 python3.9[187147]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:05.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:06.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:06 np0005466031 python3.9[187271]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406405.1377697-2292-177915703080052/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:06 np0005466031 python3.9[187423]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:07 np0005466031 python3.9[187546]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406406.2391083-2292-209748174239901/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:07 np0005466031 python3.9[187698]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:07.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:08.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:08 np0005466031 python3.9[187822]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406407.3484282-2292-65371513943576/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:08 np0005466031 python3.9[187974]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:09 np0005466031 python3.9[188097]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406408.4466975-2292-86561667598297/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:09.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:09 np0005466031 python3.9[188250]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:10.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:10 np0005466031 python3.9[188373]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406409.5605145-2292-170263105927948/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:11 np0005466031 python3.9[188525]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:11 np0005466031 python3.9[188648]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406410.7164435-2292-115562222786544/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:11.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:12.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:12 np0005466031 python3.9[188801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:12 np0005466031 python3.9[188924]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406411.9753523-2292-29014844077621/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:13 np0005466031 python3.9[189076]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:13.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:14.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:14 np0005466031 python3.9[189200]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406413.020129-2292-38316759538353/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:14 np0005466031 python3.9[189352]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:15 np0005466031 python3.9[189475]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406414.295042-2292-189638047342918/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:15.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:16.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:17.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:18.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:18 np0005466031 python3.9[189627]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:00:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:19 np0005466031 python3.9[189832]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct  2 08:00:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:19.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:20.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:21.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:22.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:22 np0005466031 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct  2 08:00:22 np0005466031 podman[189863]: 2025-10-02 12:00:22.683307986 +0000 UTC m=+0.095324697 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:00:23 np0005466031 python3.9[190016]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:23.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:24.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:24 np0005466031 python3.9[190169]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:24 np0005466031 python3.9[190321]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:25 np0005466031 python3.9[190473]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:00:25.806 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:00:25.806 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:00:25.806 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:25 np0005466031 python3.9[190626]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:25.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:26.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:27 np0005466031 python3.9[190778]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:27 np0005466031 podman[190930]: 2025-10-02 12:00:27.518208982 +0000 UTC m=+0.040464009 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:00:27 np0005466031 python3.9[190931]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:27.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:28.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:28 np0005466031 python3.9[191102]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:28 np0005466031 auditd[703]: Audit daemon rotating log files
Oct  2 08:00:28 np0005466031 python3.9[191254]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:29 np0005466031 python3.9[191406]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:29.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:30.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:30 np0005466031 python3.9[191559]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:00:30 np0005466031 systemd[1]: Reloading.
Oct  2 08:00:30 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:00:30 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:00:31 np0005466031 systemd[1]: Starting libvirt logging daemon socket...
Oct  2 08:00:31 np0005466031 systemd[1]: Listening on libvirt logging daemon socket.
Oct  2 08:00:31 np0005466031 systemd[1]: Starting libvirt logging daemon admin socket...
Oct  2 08:00:31 np0005466031 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct  2 08:00:31 np0005466031 systemd[1]: Starting libvirt logging daemon...
Oct  2 08:00:31 np0005466031 systemd[1]: Started libvirt logging daemon.
Oct  2 08:00:31 np0005466031 python3.9[191752]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:00:31 np0005466031 systemd[1]: Reloading.
Oct  2 08:00:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:32.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:32 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:00:32 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:00:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:32.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:32 np0005466031 systemd[1]: Starting libvirt nodedev daemon socket...
Oct  2 08:00:32 np0005466031 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct  2 08:00:32 np0005466031 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct  2 08:00:32 np0005466031 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct  2 08:00:32 np0005466031 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct  2 08:00:32 np0005466031 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct  2 08:00:32 np0005466031 systemd[1]: Starting libvirt nodedev daemon...
Oct  2 08:00:32 np0005466031 systemd[1]: Started libvirt nodedev daemon.
Oct  2 08:00:33 np0005466031 python3.9[191967]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:00:33 np0005466031 systemd[1]: Reloading.
Oct  2 08:00:33 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:00:33 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:00:33 np0005466031 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct  2 08:00:33 np0005466031 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct  2 08:00:33 np0005466031 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct  2 08:00:33 np0005466031 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct  2 08:00:33 np0005466031 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct  2 08:00:33 np0005466031 systemd[1]: Starting libvirt proxy daemon...
Oct  2 08:00:33 np0005466031 systemd[1]: Started libvirt proxy daemon.
Oct  2 08:00:33 np0005466031 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct  2 08:00:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:34.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:34 np0005466031 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct  2 08:00:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:34.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:34 np0005466031 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct  2 08:00:34 np0005466031 python3.9[192185]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:00:34 np0005466031 systemd[1]: Reloading.
Oct  2 08:00:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:34 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:00:34 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:00:34 np0005466031 systemd[1]: Listening on libvirt locking daemon socket.
Oct  2 08:00:34 np0005466031 systemd[1]: Starting libvirt QEMU daemon socket...
Oct  2 08:00:34 np0005466031 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  2 08:00:34 np0005466031 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct  2 08:00:34 np0005466031 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct  2 08:00:34 np0005466031 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct  2 08:00:34 np0005466031 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct  2 08:00:34 np0005466031 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct  2 08:00:34 np0005466031 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct  2 08:00:34 np0005466031 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct  2 08:00:34 np0005466031 systemd[1]: Starting libvirt QEMU daemon...
Oct  2 08:00:34 np0005466031 systemd[1]: Started libvirt QEMU daemon.
Oct  2 08:00:35 np0005466031 setroubleshoot[192004]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 96983573-a3ed-4dac-9a5a-3f5a7b2952bd
Oct  2 08:00:35 np0005466031 setroubleshoot[192004]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  2 08:00:35 np0005466031 setroubleshoot[192004]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 96983573-a3ed-4dac-9a5a-3f5a7b2952bd
Oct  2 08:00:35 np0005466031 setroubleshoot[192004]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  2 08:00:35 np0005466031 python3.9[192401]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:00:35 np0005466031 systemd[1]: Reloading.
Oct  2 08:00:35 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:00:35 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:00:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:36.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:36.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:36 np0005466031 systemd[1]: Starting libvirt secret daemon socket...
Oct  2 08:00:36 np0005466031 systemd[1]: Listening on libvirt secret daemon socket.
Oct  2 08:00:36 np0005466031 systemd[1]: Starting libvirt secret daemon admin socket...
Oct  2 08:00:36 np0005466031 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct  2 08:00:36 np0005466031 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct  2 08:00:36 np0005466031 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct  2 08:00:36 np0005466031 systemd[1]: Starting libvirt secret daemon...
Oct  2 08:00:36 np0005466031 systemd[1]: Started libvirt secret daemon.
Oct  2 08:00:37 np0005466031 python3.9[192612]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:37 np0005466031 python3.9[192765]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 08:00:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:38.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:38.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:38 np0005466031 python3.9[192917]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:00:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:39 np0005466031 python3.9[193121]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 08:00:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:40.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:40.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:40 np0005466031 python3.9[193272]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:40 np0005466031 python3.9[193393]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406439.9917314-3366-173348564907187/.source.xml follow=False _original_basename=secret.xml.j2 checksum=9d9565ec21a9799171bafbb06d2141d5e5510d7d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:41 np0005466031 python3.9[193545]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 20fdc58c-b037-5094-a8ef-d490aa7c36f3#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:00:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:42.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:42.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:42 np0005466031 python3.9[193708]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:44.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:44.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:45 np0005466031 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct  2 08:00:45 np0005466031 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.049s CPU time.
Oct  2 08:00:45 np0005466031 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct  2 08:00:45 np0005466031 python3.9[194173]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:46.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:46.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:46 np0005466031 python3.9[194325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:47 np0005466031 python3.9[194448]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406446.173297-3532-223749321244197/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:48.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:48 np0005466031 python3.9[194601]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:48.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:49 np0005466031 python3.9[194753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:49 np0005466031 python3.9[194831]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:50.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:50.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:50 np0005466031 python3.9[194984]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:50 np0005466031 python3.9[195062]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2ku6lg8m recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:51 np0005466031 python3.9[195214]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:51 np0005466031 python3.9[195293]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:52.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:52.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:52 np0005466031 python3.9[195445]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:00:53 np0005466031 podman[195495]: 2025-10-02 12:00:53.085941233 +0000 UTC m=+0.090216008 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:00:53 np0005466031 python3[195739]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 08:00:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:54.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:54.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:54 np0005466031 python3.9[195908]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:00:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:00:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:00:54 np0005466031 python3.9[195986]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:55 np0005466031 python3.9[196139]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:56.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:56.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:56 np0005466031 python3.9[196217]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:57 np0005466031 python3.9[196369]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:57 np0005466031 podman[196448]: 2025-10-02 12:00:57.655680832 +0000 UTC m=+0.082419731 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:00:57 np0005466031 python3.9[196447]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:00:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:58.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:00:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:00:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:58.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:58 np0005466031 python3.9[196620]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:59 np0005466031 python3.9[196698]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:59 np0005466031 python3.9[196900]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:00.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:00.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:00 np0005466031 python3.9[197026]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406459.332131-3907-259743763298993/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:01 np0005466031 python3.9[197228]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:01:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:01:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:02 np0005466031 python3.9[197392]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:01:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:02.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:03 np0005466031 python3.9[197547]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:03 np0005466031 python3.9[197699]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:01:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:04.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:04.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:04 np0005466031 python3.9[197853]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:01:05 np0005466031 python3.9[198007]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:01:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000030s ======
Oct  2 08:01:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:06.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct  2 08:01:06 np0005466031 python3.9[198163]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:06.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:06 np0005466031 python3.9[198315]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:07 np0005466031 python3.9[198438]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406466.4947157-4123-137213015993417/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:08.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:08.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:08 np0005466031 python3.9[198591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:08 np0005466031 python3.9[198714]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406467.9069173-4168-194200495314069/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:09 np0005466031 python3.9[198866]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:10.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:10.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:10 np0005466031 python3.9[198990]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406469.288404-4213-60656574191701/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:11 np0005466031 python3.9[199142]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:01:11 np0005466031 systemd[1]: Reloading.
Oct  2 08:01:11 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:11 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:11 np0005466031 systemd[1]: Reached target edpm_libvirt.target.
Oct  2 08:01:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:12.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:12.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:12 np0005466031 python3.9[199335]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  2 08:01:12 np0005466031 systemd[1]: Reloading.
Oct  2 08:01:12 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:12 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:13 np0005466031 systemd[1]: Reloading.
Oct  2 08:01:13 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:13 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:13 np0005466031 systemd[1]: session-49.scope: Deactivated successfully.
Oct  2 08:01:13 np0005466031 systemd[1]: session-49.scope: Consumed 3min 30.143s CPU time.
Oct  2 08:01:13 np0005466031 systemd-logind[786]: Session 49 logged out. Waiting for processes to exit.
Oct  2 08:01:13 np0005466031 systemd-logind[786]: Removed session 49.
Oct  2 08:01:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:14.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 08:01:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:14.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 08:01:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:16.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:16.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:18.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:18 np0005466031 systemd-logind[786]: New session 50 of user zuul.
Oct  2 08:01:18 np0005466031 systemd[1]: Started Session 50 of User zuul.
Oct  2 08:01:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:19 np0005466031 python3.9[199639]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 08:01:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:20.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:20.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:21 np0005466031 python3.9[199796]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:21 np0005466031 python3.9[199949]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:22.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:22.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:22 np0005466031 python3.9[200101]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:23 np0005466031 python3.9[200253]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 08:01:23 np0005466031 podman[200254]: 2025-10-02 12:01:23.264479368 +0000 UTC m=+0.082289357 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:01:23 np0005466031 python3.9[200433]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:24.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:24.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:24 np0005466031 python3.9[200586]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:01:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:01:25.806 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:01:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:01:25.807 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:01:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:01:25.807 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:01:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:26.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:26.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:26 np0005466031 python3.9[200741]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:01:26 np0005466031 systemd[1]: Reloading.
Oct  2 08:01:26 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:26 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:27 np0005466031 python3.9[200930]: ansible-ansible.builtin.service_facts Invoked
Oct  2 08:01:27 np0005466031 network[200947]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 08:01:27 np0005466031 network[200948]: 'network-scripts' will be removed from distribution in near future.
Oct  2 08:01:27 np0005466031 network[200949]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 08:01:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:28.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:28.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:28 np0005466031 podman[200957]: 2025-10-02 12:01:28.522721481 +0000 UTC m=+0.070694109 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:01:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:30.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:30.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:31 np0005466031 python3.9[201243]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:01:31 np0005466031 systemd[1]: Reloading.
Oct  2 08:01:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:32.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:32 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:32 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:32.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:33 np0005466031 python3.9[201430]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:01:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:34.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:34 np0005466031 python3.9[201583]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  2 08:01:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:34 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:01:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:34.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:34 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:01:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:35 np0005466031 podman[201597]: 2025-10-02 12:01:35.686700837 +0000 UTC m=+1.484858319 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  2 08:01:35 np0005466031 podman[201658]: 2025-10-02 12:01:35.822261041 +0000 UTC m=+0.039861749 container create e76fb8b1bd858eb5cbc4feda370ce7a5d72e2bbc00b7bdb3b53abe778c9fe1ff (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.8743] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct  2 08:01:35 np0005466031 kernel: podman0: port 1(veth0) entered blocking state
Oct  2 08:01:35 np0005466031 kernel: podman0: port 1(veth0) entered disabled state
Oct  2 08:01:35 np0005466031 kernel: veth0: entered allmulticast mode
Oct  2 08:01:35 np0005466031 kernel: veth0: entered promiscuous mode
Oct  2 08:01:35 np0005466031 kernel: podman0: port 1(veth0) entered blocking state
Oct  2 08:01:35 np0005466031 kernel: podman0: port 1(veth0) entered forwarding state
Oct  2 08:01:35 np0005466031 podman[201658]: 2025-10-02 12:01:35.803194102 +0000 UTC m=+0.020794800 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.8990] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.9021] device (veth0): carrier: link connected
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.9026] device (podman0): carrier: link connected
Oct  2 08:01:35 np0005466031 systemd-udevd[201688]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:01:35 np0005466031 systemd-udevd[201685]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.9330] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.9338] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.9347] device (podman0): Activation: starting connection 'podman0' (1c9a0044-3fb1-47c0-adae-baf829739851)
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.9351] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.9353] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.9355] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.9359] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 08:01:35 np0005466031 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 08:01:35 np0005466031 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.9699] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.9702] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 08:01:35 np0005466031 NetworkManager[44907]: <info>  [1759406495.9710] device (podman0): Activation: successful, device activated.
Oct  2 08:01:35 np0005466031 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct  2 08:01:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:36.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:36 np0005466031 systemd[1]: Started libpod-conmon-e76fb8b1bd858eb5cbc4feda370ce7a5d72e2bbc00b7bdb3b53abe778c9fe1ff.scope.
Oct  2 08:01:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:36.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:36 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:01:36 np0005466031 podman[201658]: 2025-10-02 12:01:36.263303795 +0000 UTC m=+0.480904493 container init e76fb8b1bd858eb5cbc4feda370ce7a5d72e2bbc00b7bdb3b53abe778c9fe1ff (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:01:36 np0005466031 podman[201658]: 2025-10-02 12:01:36.331672244 +0000 UTC m=+0.549272922 container start e76fb8b1bd858eb5cbc4feda370ce7a5d72e2bbc00b7bdb3b53abe778c9fe1ff (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:01:36 np0005466031 podman[201658]: 2025-10-02 12:01:36.335618728 +0000 UTC m=+0.553219426 container attach e76fb8b1bd858eb5cbc4feda370ce7a5d72e2bbc00b7bdb3b53abe778c9fe1ff (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:01:36 np0005466031 systemd[1]: libpod-e76fb8b1bd858eb5cbc4feda370ce7a5d72e2bbc00b7bdb3b53abe778c9fe1ff.scope: Deactivated successfully.
Oct  2 08:01:36 np0005466031 iscsid_config[201816]: iqn.1994-05.com.redhat:d79ae5a31735#015
Oct  2 08:01:36 np0005466031 conmon[201816]: conmon e76fb8b1bd858eb5cbc4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e76fb8b1bd858eb5cbc4feda370ce7a5d72e2bbc00b7bdb3b53abe778c9fe1ff.scope/container/memory.events
Oct  2 08:01:36 np0005466031 podman[201658]: 2025-10-02 12:01:36.343829114 +0000 UTC m=+0.561429792 container died e76fb8b1bd858eb5cbc4feda370ce7a5d72e2bbc00b7bdb3b53abe778c9fe1ff (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:01:36 np0005466031 kernel: podman0: port 1(veth0) entered disabled state
Oct  2 08:01:36 np0005466031 kernel: veth0 (unregistering): left allmulticast mode
Oct  2 08:01:36 np0005466031 kernel: veth0 (unregistering): left promiscuous mode
Oct  2 08:01:36 np0005466031 kernel: podman0: port 1(veth0) entered disabled state
Oct  2 08:01:36 np0005466031 NetworkManager[44907]: <info>  [1759406496.4457] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:01:36 np0005466031 systemd[1]: run-netns-netns\x2d599ccdd9\x2d8406\x2dc795\x2d92c7\x2d85b3da7f85cc.mount: Deactivated successfully.
Oct  2 08:01:36 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e76fb8b1bd858eb5cbc4feda370ce7a5d72e2bbc00b7bdb3b53abe778c9fe1ff-userdata-shm.mount: Deactivated successfully.
Oct  2 08:01:36 np0005466031 systemd[1]: var-lib-containers-storage-overlay-35dd5ff0677299132098dd0c2fb3309bc6c88d651833eb02a1353773b0e685b1-merged.mount: Deactivated successfully.
Oct  2 08:01:36 np0005466031 podman[201658]: 2025-10-02 12:01:36.867811216 +0000 UTC m=+1.085411884 container remove e76fb8b1bd858eb5cbc4feda370ce7a5d72e2bbc00b7bdb3b53abe778c9fe1ff (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:01:36 np0005466031 python3.9[201583]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct  2 08:01:36 np0005466031 systemd[1]: libpod-conmon-e76fb8b1bd858eb5cbc4feda370ce7a5d72e2bbc00b7bdb3b53abe778c9fe1ff.scope: Deactivated successfully.
Oct  2 08:01:37 np0005466031 python3.9[201583]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct  2 08:01:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:38.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:38.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:38 np0005466031 python3.9[202059]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:39 np0005466031 python3.9[202182]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406498.2144651-324-65496739876972/.source.iscsi _original_basename=.nrap651c follow=False checksum=bffb0d641e3e082df5bebe40f33fbcca00789e5e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:40.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:40 np0005466031 python3.9[202385]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:40.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:40 np0005466031 python3.9[202535]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:01:41 np0005466031 python3.9[202690]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:42.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:42.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:42 np0005466031 python3.9[202842]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:43 np0005466031 python3.9[202994]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:43 np0005466031 ceph-mgr[76697]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 08:01:44 np0005466031 python3.9[203073]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:44.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:44.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.686574) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504686638, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1525, "num_deletes": 501, "total_data_size": 2981358, "memory_usage": 3025432, "flush_reason": "Manual Compaction"}
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Oct  2 08:01:44 np0005466031 python3.9[203225]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504694348, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1162013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14611, "largest_seqno": 16131, "table_properties": {"data_size": 1157196, "index_size": 1765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14875, "raw_average_key_size": 19, "raw_value_size": 1144903, "raw_average_value_size": 1465, "num_data_blocks": 81, "num_entries": 781, "num_filter_entries": 781, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406390, "oldest_key_time": 1759406390, "file_creation_time": 1759406504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 7805 microseconds, and 3790 cpu microseconds.
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.694392) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1162013 bytes OK
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.694411) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.695628) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.695642) EVENT_LOG_v1 {"time_micros": 1759406504695638, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.695662) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2973386, prev total WAL file size 2973386, number of live WAL files 2.
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.696614) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1134KB)], [27(10MB)]
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504696684, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 12690350, "oldest_snapshot_seqno": -1}
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4149 keys, 7897210 bytes, temperature: kUnknown
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504755049, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 7897210, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7868177, "index_size": 17531, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 102607, "raw_average_key_size": 24, "raw_value_size": 7791792, "raw_average_value_size": 1877, "num_data_blocks": 739, "num_entries": 4149, "num_filter_entries": 4149, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759406504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.755425) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7897210 bytes
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.761304) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.9 rd, 135.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(17.7) write-amplify(6.8) OK, records in: 5113, records dropped: 964 output_compression: NoCompression
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.761343) EVENT_LOG_v1 {"time_micros": 1759406504761326, "job": 14, "event": "compaction_finished", "compaction_time_micros": 58497, "compaction_time_cpu_micros": 40873, "output_level": 6, "num_output_files": 1, "total_output_size": 7897210, "num_input_records": 5113, "num_output_records": 4149, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504761962, "job": 14, "event": "table_file_deletion", "file_number": 29}
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504766329, "job": 14, "event": "table_file_deletion", "file_number": 27}
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.696471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.766402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.766408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.766410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.766411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:01:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:01:44.766413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:01:45 np0005466031 python3.9[203303]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:46.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:01:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:46.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:01:46 np0005466031 python3.9[203456]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:46 np0005466031 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 08:01:47 np0005466031 python3.9[203608]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:47 np0005466031 python3.9[203686]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:48.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:48.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:48 np0005466031 python3.9[203839]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:48 np0005466031 python3.9[203917]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:50.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:50 np0005466031 python3.9[204070]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:01:50 np0005466031 systemd[1]: Reloading.
Oct  2 08:01:50 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:50 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:50.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:51 np0005466031 python3.9[204258]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:51 np0005466031 python3.9[204336]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:52.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:52.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:52 np0005466031 python3.9[204489]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:52 np0005466031 python3.9[204567]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:53 np0005466031 podman[204691]: 2025-10-02 12:01:53.609154055 +0000 UTC m=+0.082502917 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:01:53 np0005466031 python3.9[204739]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:01:53 np0005466031 systemd[1]: Reloading.
Oct  2 08:01:53 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:53 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:54.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:54.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:54 np0005466031 systemd[1]: Starting Create netns directory...
Oct  2 08:01:54 np0005466031 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 08:01:54 np0005466031 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 08:01:54 np0005466031 systemd[1]: Finished Create netns directory.
Oct  2 08:01:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:55 np0005466031 python3.9[204941]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:55 np0005466031 python3.9[205094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:01:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:56.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:01:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:56.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:56 np0005466031 python3.9[205217]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406515.5220108-786-110410053060411/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:57 np0005466031 python3.9[205369]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:58.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:01:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:01:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:58.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:01:58 np0005466031 python3.9[205522]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:58 np0005466031 podman[205582]: 2025-10-02 12:01:58.656943947 +0000 UTC m=+0.084119014 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:01:58 np0005466031 python3.9[205665]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406517.9274206-861-245562771580218/.source.json _original_basename=.etkgajzs follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:59 np0005466031 python3.9[205817]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:00.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:00.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:01 np0005466031 python3.9[206414]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct  2 08:02:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:02:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:02:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:02:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:02.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:02.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:02 np0005466031 python3.9[206577]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 08:02:03 np0005466031 python3.9[206730]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 08:02:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:04.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:04.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:06.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:06.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:06 np0005466031 python3[206910]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 08:02:06 np0005466031 podman[206946]: 2025-10-02 12:02:06.706165138 +0000 UTC m=+0.055318424 container create d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:02:06 np0005466031 podman[206946]: 2025-10-02 12:02:06.677463882 +0000 UTC m=+0.026617138 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  2 08:02:06 np0005466031 python3[206910]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  2 08:02:07 np0005466031 python3.9[207136]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:02:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:08.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:08.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:08 np0005466031 python3.9[207291]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:09 np0005466031 python3.9[207417]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:02:09 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:02:09 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:02:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:09 np0005466031 python3.9[207568]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406529.1760569-1125-221857781137102/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:10.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:10.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:10 np0005466031 python3.9[207645]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 08:02:10 np0005466031 systemd[1]: Reloading.
Oct  2 08:02:10 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:02:10 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:02:11 np0005466031 python3.9[207756]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:02:11 np0005466031 systemd[1]: Reloading.
Oct  2 08:02:11 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:02:11 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:02:11 np0005466031 systemd[1]: Starting iscsid container...
Oct  2 08:02:11 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:02:11 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222dbfee3a866c33a4841fde62419c89911199dc30c5175ec846c3aa60545431/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct  2 08:02:11 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222dbfee3a866c33a4841fde62419c89911199dc30c5175ec846c3aa60545431/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 08:02:11 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222dbfee3a866c33a4841fde62419c89911199dc30c5175ec846c3aa60545431/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 08:02:11 np0005466031 systemd[1]: Started /usr/bin/podman healthcheck run d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a.
Oct  2 08:02:11 np0005466031 podman[207797]: 2025-10-02 12:02:11.970344362 +0000 UTC m=+0.113692716 container init d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:02:11 np0005466031 iscsid[207813]: + sudo -E kolla_set_configs
Oct  2 08:02:12 np0005466031 podman[207797]: 2025-10-02 12:02:12.003564749 +0000 UTC m=+0.146913093 container start d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:02:12 np0005466031 podman[207797]: iscsid
Oct  2 08:02:12 np0005466031 systemd[1]: Started iscsid container.
Oct  2 08:02:12 np0005466031 systemd[1]: Created slice User Slice of UID 0.
Oct  2 08:02:12 np0005466031 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  2 08:02:12 np0005466031 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  2 08:02:12 np0005466031 systemd[1]: Starting User Manager for UID 0...
Oct  2 08:02:12 np0005466031 podman[207819]: 2025-10-02 12:02:12.081196035 +0000 UTC m=+0.066169777 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 08:02:12 np0005466031 systemd[1]: d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a-31257804d3ab3434.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 08:02:12 np0005466031 systemd[1]: d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a-31257804d3ab3434.service: Failed with result 'exit-code'.
Oct  2 08:02:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:12.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:12 np0005466031 systemd[207840]: Queued start job for default target Main User Target.
Oct  2 08:02:12 np0005466031 systemd[207840]: Created slice User Application Slice.
Oct  2 08:02:12 np0005466031 systemd[207840]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  2 08:02:12 np0005466031 systemd[207840]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 08:02:12 np0005466031 systemd[207840]: Reached target Paths.
Oct  2 08:02:12 np0005466031 systemd[207840]: Reached target Timers.
Oct  2 08:02:12 np0005466031 systemd[207840]: Starting D-Bus User Message Bus Socket...
Oct  2 08:02:12 np0005466031 systemd[207840]: Starting Create User's Volatile Files and Directories...
Oct  2 08:02:12 np0005466031 systemd[207840]: Finished Create User's Volatile Files and Directories.
Oct  2 08:02:12 np0005466031 systemd[207840]: Listening on D-Bus User Message Bus Socket.
Oct  2 08:02:12 np0005466031 systemd[207840]: Reached target Sockets.
Oct  2 08:02:12 np0005466031 systemd[207840]: Reached target Basic System.
Oct  2 08:02:12 np0005466031 systemd[207840]: Reached target Main User Target.
Oct  2 08:02:12 np0005466031 systemd[207840]: Startup finished in 141ms.
Oct  2 08:02:12 np0005466031 systemd[1]: Started User Manager for UID 0.
Oct  2 08:02:12 np0005466031 systemd[1]: Started Session c3 of User root.
Oct  2 08:02:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:02:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:12.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:02:12 np0005466031 iscsid[207813]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 08:02:12 np0005466031 iscsid[207813]: INFO:__main__:Validating config file
Oct  2 08:02:12 np0005466031 iscsid[207813]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 08:02:12 np0005466031 iscsid[207813]: INFO:__main__:Writing out command to execute
Oct  2 08:02:12 np0005466031 systemd[1]: session-c3.scope: Deactivated successfully.
Oct  2 08:02:12 np0005466031 iscsid[207813]: ++ cat /run_command
Oct  2 08:02:12 np0005466031 iscsid[207813]: + CMD='/usr/sbin/iscsid -f'
Oct  2 08:02:12 np0005466031 iscsid[207813]: + ARGS=
Oct  2 08:02:12 np0005466031 iscsid[207813]: + sudo kolla_copy_cacerts
Oct  2 08:02:12 np0005466031 systemd[1]: Started Session c4 of User root.
Oct  2 08:02:12 np0005466031 iscsid[207813]: + [[ ! -n '' ]]
Oct  2 08:02:12 np0005466031 iscsid[207813]: + . kolla_extend_start
Oct  2 08:02:12 np0005466031 systemd[1]: session-c4.scope: Deactivated successfully.
Oct  2 08:02:12 np0005466031 iscsid[207813]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct  2 08:02:12 np0005466031 iscsid[207813]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct  2 08:02:12 np0005466031 iscsid[207813]: Running command: '/usr/sbin/iscsid -f'
Oct  2 08:02:12 np0005466031 iscsid[207813]: + umask 0022
Oct  2 08:02:12 np0005466031 iscsid[207813]: + exec /usr/sbin/iscsid -f
Oct  2 08:02:12 np0005466031 kernel: Loading iSCSI transport class v2.0-870.
Oct  2 08:02:13 np0005466031 python3.9[208017]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:02:14 np0005466031 python3.9[208170]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.003000087s ======
Oct  2 08:02:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:14.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000087s
Oct  2 08:02:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:14.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:15 np0005466031 python3.9[208322]: ansible-ansible.builtin.service_facts Invoked
Oct  2 08:02:15 np0005466031 network[208339]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 08:02:15 np0005466031 network[208340]: 'network-scripts' will be removed from distribution in near future.
Oct  2 08:02:15 np0005466031 network[208341]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 08:02:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:16.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:16.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:18.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:18.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:02:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:20.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:02:20 np0005466031 python3.9[208669]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 08:02:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:20.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:21 np0005466031 python3.9[208821]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct  2 08:02:22 np0005466031 python3.9[208978]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:22.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:22.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:22 np0005466031 systemd[1]: Stopping User Manager for UID 0...
Oct  2 08:02:22 np0005466031 systemd[207840]: Activating special unit Exit the Session...
Oct  2 08:02:22 np0005466031 systemd[207840]: Stopped target Main User Target.
Oct  2 08:02:22 np0005466031 systemd[207840]: Stopped target Basic System.
Oct  2 08:02:22 np0005466031 systemd[207840]: Stopped target Paths.
Oct  2 08:02:22 np0005466031 systemd[207840]: Stopped target Sockets.
Oct  2 08:02:22 np0005466031 systemd[207840]: Stopped target Timers.
Oct  2 08:02:22 np0005466031 systemd[207840]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 08:02:22 np0005466031 systemd[207840]: Closed D-Bus User Message Bus Socket.
Oct  2 08:02:22 np0005466031 systemd[207840]: Stopped Create User's Volatile Files and Directories.
Oct  2 08:02:22 np0005466031 systemd[207840]: Removed slice User Application Slice.
Oct  2 08:02:22 np0005466031 systemd[207840]: Reached target Shutdown.
Oct  2 08:02:22 np0005466031 systemd[207840]: Finished Exit the Session.
Oct  2 08:02:22 np0005466031 systemd[207840]: Reached target Exit the Session.
Oct  2 08:02:22 np0005466031 systemd[1]: user@0.service: Deactivated successfully.
Oct  2 08:02:22 np0005466031 systemd[1]: Stopped User Manager for UID 0.
Oct  2 08:02:22 np0005466031 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  2 08:02:22 np0005466031 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  2 08:02:22 np0005466031 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  2 08:02:22 np0005466031 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  2 08:02:22 np0005466031 systemd[1]: Removed slice User Slice of UID 0.
Oct  2 08:02:22 np0005466031 python3.9[209101]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406541.5565317-1348-181471050544307/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:23 np0005466031 python3.9[209255]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:24.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:24 np0005466031 podman[209380]: 2025-10-02 12:02:24.263107851 +0000 UTC m=+0.152788772 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:02:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:24.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:24 np0005466031 python3.9[209429]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:02:24 np0005466031 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  2 08:02:24 np0005466031 systemd[1]: Stopped Load Kernel Modules.
Oct  2 08:02:24 np0005466031 systemd[1]: Stopping Load Kernel Modules...
Oct  2 08:02:24 np0005466031 systemd[1]: Starting Load Kernel Modules...
Oct  2 08:02:24 np0005466031 systemd[1]: Finished Load Kernel Modules.
Oct  2 08:02:25 np0005466031 python3.9[209590]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:02:25.808 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:02:25.810 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:02:25.810 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:26.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:26.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:26 np0005466031 python3.9[209743]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:02:27 np0005466031 python3.9[209895]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:02:27 np0005466031 python3.9[210048]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:28.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:28.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:28 np0005466031 python3.9[210171]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406547.4309876-1521-122124726835947/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:29 np0005466031 podman[210295]: 2025-10-02 12:02:29.250629836 +0000 UTC m=+0.066330462 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:02:29 np0005466031 python3.9[210342]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:02:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:30.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:30 np0005466031 python3.9[210496]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:30.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:31 np0005466031 python3.9[210648]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:31 np0005466031 python3.9[210800]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:32.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:32.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:32 np0005466031 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct  2 08:02:32 np0005466031 python3.9[210954]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:33 np0005466031 python3.9[211106]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:33 np0005466031 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  2 08:02:34 np0005466031 python3.9[211260]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:34.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:34.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:34 np0005466031 python3.9[211412]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:35 np0005466031 python3.9[211564]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:02:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:36.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:36 np0005466031 python3.9[211719]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:36.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:37 np0005466031 python3.9[211871]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:38 np0005466031 python3.9[212024]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:38.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:38.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:38 np0005466031 python3.9[212102]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:39 np0005466031 python3.9[212254]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:39 np0005466031 python3.9[212332]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:02:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:40.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:02:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:40.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:40 np0005466031 python3.9[212535]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:41 np0005466031 python3.9[212687]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:41 np0005466031 python3.9[212765]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:42.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:42 np0005466031 podman[212890]: 2025-10-02 12:02:42.260824207 +0000 UTC m=+0.063178251 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:02:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:42.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:42 np0005466031 python3.9[212938]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:42 np0005466031 python3.9[213016]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:43 np0005466031 python3.9[213168]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:02:43 np0005466031 systemd[1]: Reloading.
Oct  2 08:02:43 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:02:43 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:02:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:44.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:44 np0005466031 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  2 08:02:44 np0005466031 systemd[1]: virtqemud.service: Deactivated successfully.
Oct  2 08:02:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:44.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:44 np0005466031 python3.9[213360]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:45 np0005466031 python3.9[213438]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:46.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:46 np0005466031 python3.9[213591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:46.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:46 np0005466031 python3.9[213669]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:47 np0005466031 python3.9[213821]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:02:47 np0005466031 systemd[1]: Reloading.
Oct  2 08:02:47 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:02:47 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:02:48 np0005466031 systemd[1]: Starting Create netns directory...
Oct  2 08:02:48 np0005466031 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 08:02:48 np0005466031 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 08:02:48 np0005466031 systemd[1]: Finished Create netns directory.
Oct  2 08:02:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:48.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:48.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:49 np0005466031 python3.9[214015]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:50 np0005466031 python3.9[214168]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:50.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:50.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:50 np0005466031 python3.9[214291]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406569.5645483-2143-78460956801482/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:51 np0005466031 python3.9[214443]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:02:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:52.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:02:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:52.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:52 np0005466031 python3.9[214596]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:53 np0005466031 python3.9[214719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406572.0694406-2217-35951835409108/.source.json _original_basename=.6dqmcg70 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:53 np0005466031 python3.9[214872]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:54.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:54.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:54 np0005466031 podman[214996]: 2025-10-02 12:02:54.684075253 +0000 UTC m=+0.099096466 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:02:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:56.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:56.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:56 np0005466031 python3.9[215328]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct  2 08:02:57 np0005466031 python3.9[215480]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 08:02:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:58.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:58 np0005466031 python3.9[215633]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 08:02:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:02:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000057s ======
Oct  2 08:02:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:58.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Oct  2 08:02:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:59 np0005466031 podman[215683]: 2025-10-02 12:02:59.688121594 +0000 UTC m=+0.107371904 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  2 08:03:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:03:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:00.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:03:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:00.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:00 np0005466031 python3[215851]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 08:03:02 np0005466031 podman[215895]: 2025-10-02 12:03:02.031221021 +0000 UTC m=+1.402996780 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  2 08:03:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:03:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:02.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:03:02 np0005466031 podman[215955]: 2025-10-02 12:03:02.217319142 +0000 UTC m=+0.073060976 container create 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:03:02 np0005466031 podman[215955]: 2025-10-02 12:03:02.170206805 +0000 UTC m=+0.025948689 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  2 08:03:02 np0005466031 python3[215851]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  2 08:03:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:02.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:03 np0005466031 python3.9[216145]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:03:03 np0005466031 python3.9[216300]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:04.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:04.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:04 np0005466031 python3.9[216376]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:03:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:05 np0005466031 python3.9[216527]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406584.5216591-2481-281438093235868/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:05 np0005466031 python3.9[216603]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 08:03:05 np0005466031 systemd[1]: Reloading.
Oct  2 08:03:05 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:03:06 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:03:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:06.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:03:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:06.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:03:06 np0005466031 python3.9[216715]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:07 np0005466031 systemd[1]: Reloading.
Oct  2 08:03:07 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:03:07 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:03:07 np0005466031 systemd[1]: Starting multipathd container...
Oct  2 08:03:07 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:03:07 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dedc7a0d96feded4a68fe232b1cb426e9fc1f662b00017ad551899a0007b4e99/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 08:03:07 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dedc7a0d96feded4a68fe232b1cb426e9fc1f662b00017ad551899a0007b4e99/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 08:03:07 np0005466031 systemd[1]: Started /usr/bin/podman healthcheck run 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531.
Oct  2 08:03:07 np0005466031 podman[216755]: 2025-10-02 12:03:07.513832897 +0000 UTC m=+0.128254295 container init 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible)
Oct  2 08:03:07 np0005466031 multipathd[216769]: + sudo -E kolla_set_configs
Oct  2 08:03:07 np0005466031 podman[216755]: 2025-10-02 12:03:07.539573159 +0000 UTC m=+0.153994557 container start 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:03:07 np0005466031 podman[216755]: multipathd
Oct  2 08:03:07 np0005466031 systemd[1]: Started multipathd container.
Oct  2 08:03:07 np0005466031 multipathd[216769]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 08:03:07 np0005466031 multipathd[216769]: INFO:__main__:Validating config file
Oct  2 08:03:07 np0005466031 multipathd[216769]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 08:03:07 np0005466031 multipathd[216769]: INFO:__main__:Writing out command to execute
Oct  2 08:03:07 np0005466031 multipathd[216769]: ++ cat /run_command
Oct  2 08:03:07 np0005466031 multipathd[216769]: + CMD='/usr/sbin/multipathd -d'
Oct  2 08:03:07 np0005466031 multipathd[216769]: + ARGS=
Oct  2 08:03:07 np0005466031 multipathd[216769]: + sudo kolla_copy_cacerts
Oct  2 08:03:07 np0005466031 podman[216776]: 2025-10-02 12:03:07.611632764 +0000 UTC m=+0.060925236 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  2 08:03:07 np0005466031 systemd[1]: 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531-3e4b6909fdeb4db5.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 08:03:07 np0005466031 systemd[1]: 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531-3e4b6909fdeb4db5.service: Failed with result 'exit-code'.
Oct  2 08:03:07 np0005466031 multipathd[216769]: + [[ ! -n '' ]]
Oct  2 08:03:07 np0005466031 multipathd[216769]: + . kolla_extend_start
Oct  2 08:03:07 np0005466031 multipathd[216769]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  2 08:03:07 np0005466031 multipathd[216769]: Running command: '/usr/sbin/multipathd -d'
Oct  2 08:03:07 np0005466031 multipathd[216769]: + umask 0022
Oct  2 08:03:07 np0005466031 multipathd[216769]: + exec /usr/sbin/multipathd -d
Oct  2 08:03:07 np0005466031 multipathd[216769]: 4443.285635 | --------start up--------
Oct  2 08:03:07 np0005466031 multipathd[216769]: 4443.285652 | read /etc/multipath.conf
Oct  2 08:03:07 np0005466031 multipathd[216769]: 4443.293346 | path checkers start up
Oct  2 08:03:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:08.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:08.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:08 np0005466031 python3.9[216960]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:03:09 np0005466031 python3.9[217227]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:03:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:09 np0005466031 podman[217296]: 2025-10-02 12:03:09.551342173 +0000 UTC m=+0.063381816 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 08:03:09 np0005466031 podman[217296]: 2025-10-02 12:03:09.689931795 +0000 UTC m=+0.201971438 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 08:03:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:10.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:10 np0005466031 podman[217583]: 2025-10-02 12:03:10.232333658 +0000 UTC m=+0.056568410 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 08:03:10 np0005466031 podman[217583]: 2025-10-02 12:03:10.246808135 +0000 UTC m=+0.071042877 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 08:03:10 np0005466031 python3.9[217536]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:03:10 np0005466031 systemd[1]: Stopping multipathd container...
Oct  2 08:03:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:10.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:10 np0005466031 multipathd[216769]: 4446.025335 | exit (signal)
Oct  2 08:03:10 np0005466031 multipathd[216769]: 4446.026151 | --------shut down-------
Oct  2 08:03:10 np0005466031 systemd[1]: libpod-552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531.scope: Deactivated successfully.
Oct  2 08:03:10 np0005466031 podman[217632]: 2025-10-02 12:03:10.405109405 +0000 UTC m=+0.066069664 container died 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:03:10 np0005466031 systemd[1]: 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531-3e4b6909fdeb4db5.timer: Deactivated successfully.
Oct  2 08:03:10 np0005466031 systemd[1]: Stopped /usr/bin/podman healthcheck run 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531.
Oct  2 08:03:10 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531-userdata-shm.mount: Deactivated successfully.
Oct  2 08:03:10 np0005466031 systemd[1]: var-lib-containers-storage-overlay-dedc7a0d96feded4a68fe232b1cb426e9fc1f662b00017ad551899a0007b4e99-merged.mount: Deactivated successfully.
Oct  2 08:03:10 np0005466031 podman[217632]: 2025-10-02 12:03:10.539861416 +0000 UTC m=+0.200821675 container cleanup 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, tcib_managed=true)
Oct  2 08:03:10 np0005466031 podman[217632]: multipathd
Oct  2 08:03:10 np0005466031 podman[217668]: 2025-10-02 12:03:10.573510255 +0000 UTC m=+0.166227749 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-type=git, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.buildah.version=1.28.2, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, description=keepalived for Ceph, release=1793)
Oct  2 08:03:10 np0005466031 podman[217668]: 2025-10-02 12:03:10.583100041 +0000 UTC m=+0.175817525 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, release=1793, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, version=2.2.4, architecture=x86_64)
Oct  2 08:03:10 np0005466031 podman[217698]: multipathd
Oct  2 08:03:10 np0005466031 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct  2 08:03:10 np0005466031 systemd[1]: Stopped multipathd container.
Oct  2 08:03:10 np0005466031 systemd[1]: Starting multipathd container...
Oct  2 08:03:10 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:03:10 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dedc7a0d96feded4a68fe232b1cb426e9fc1f662b00017ad551899a0007b4e99/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 08:03:10 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dedc7a0d96feded4a68fe232b1cb426e9fc1f662b00017ad551899a0007b4e99/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 08:03:10 np0005466031 systemd[1]: Started /usr/bin/podman healthcheck run 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531.
Oct  2 08:03:10 np0005466031 podman[217727]: 2025-10-02 12:03:10.793513732 +0000 UTC m=+0.124810416 container init 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct  2 08:03:10 np0005466031 multipathd[217755]: + sudo -E kolla_set_configs
Oct  2 08:03:10 np0005466031 podman[217727]: 2025-10-02 12:03:10.817781141 +0000 UTC m=+0.149077805 container start 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 08:03:10 np0005466031 podman[217727]: multipathd
Oct  2 08:03:10 np0005466031 systemd[1]: Started multipathd container.
Oct  2 08:03:10 np0005466031 multipathd[217755]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 08:03:10 np0005466031 multipathd[217755]: INFO:__main__:Validating config file
Oct  2 08:03:10 np0005466031 multipathd[217755]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 08:03:10 np0005466031 multipathd[217755]: INFO:__main__:Writing out command to execute
Oct  2 08:03:10 np0005466031 multipathd[217755]: ++ cat /run_command
Oct  2 08:03:10 np0005466031 multipathd[217755]: + CMD='/usr/sbin/multipathd -d'
Oct  2 08:03:10 np0005466031 multipathd[217755]: + ARGS=
Oct  2 08:03:10 np0005466031 multipathd[217755]: + sudo kolla_copy_cacerts
Oct  2 08:03:10 np0005466031 multipathd[217755]: + [[ ! -n '' ]]
Oct  2 08:03:10 np0005466031 multipathd[217755]: + . kolla_extend_start
Oct  2 08:03:10 np0005466031 multipathd[217755]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  2 08:03:10 np0005466031 multipathd[217755]: Running command: '/usr/sbin/multipathd -d'
Oct  2 08:03:10 np0005466031 multipathd[217755]: + umask 0022
Oct  2 08:03:10 np0005466031 multipathd[217755]: + exec /usr/sbin/multipathd -d
Oct  2 08:03:10 np0005466031 podman[217768]: 2025-10-02 12:03:10.910672577 +0000 UTC m=+0.077924426 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:03:10 np0005466031 multipathd[217755]: 4446.563343 | --------start up--------
Oct  2 08:03:10 np0005466031 multipathd[217755]: 4446.563361 | read /etc/multipath.conf
Oct  2 08:03:10 np0005466031 multipathd[217755]: 4446.569192 | path checkers start up
Oct  2 08:03:10 np0005466031 systemd[1]: 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531-1656b28ca7fabd17.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 08:03:10 np0005466031 systemd[1]: 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531-1656b28ca7fabd17.service: Failed with result 'exit-code'.
Oct  2 08:03:11 np0005466031 python3.9[218071]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:11 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:03:11 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:03:11 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:03:11 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:03:11 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:03:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:03:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:12.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:03:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:12.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:12 np0005466031 podman[218202]: 2025-10-02 12:03:12.611711201 +0000 UTC m=+0.059206026 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:03:12 np0005466031 python3.9[218249]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 08:03:13 np0005466031 python3.9[218402]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct  2 08:03:13 np0005466031 kernel: Key type psk registered
Oct  2 08:03:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:14.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:14.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:14 np0005466031 python3.9[218567]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:03:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:14 np0005466031 python3.9[218690]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406593.8196275-2722-155768306008649/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:15 np0005466031 python3.9[218842]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:16.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:03:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:16.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:03:16 np0005466031 python3.9[218995]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:03:16 np0005466031 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  2 08:03:16 np0005466031 systemd[1]: Stopped Load Kernel Modules.
Oct  2 08:03:16 np0005466031 systemd[1]: Stopping Load Kernel Modules...
Oct  2 08:03:16 np0005466031 systemd[1]: Starting Load Kernel Modules...
Oct  2 08:03:16 np0005466031 systemd[1]: Finished Load Kernel Modules.
Oct  2 08:03:17 np0005466031 python3.9[219151]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 08:03:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:18.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:18.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:18 np0005466031 python3.9[219284]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 08:03:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:03:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:03:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:03:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:20.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:03:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:20.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:22.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:22.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:24.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:24.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:25 np0005466031 systemd[1]: Reloading.
Oct  2 08:03:25 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:03:25 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:03:25 np0005466031 podman[219346]: 2025-10-02 12:03:25.232127187 +0000 UTC m=+0.122595322 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:03:25 np0005466031 systemd[1]: Reloading.
Oct  2 08:03:25 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:03:25 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:03:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:03:25.810 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:03:25.811 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:03:25.811 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:25 np0005466031 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  2 08:03:25 np0005466031 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  2 08:03:26 np0005466031 lvm[219483]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 08:03:26 np0005466031 lvm[219483]: VG ceph_vg0 finished
Oct  2 08:03:26 np0005466031 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 08:03:26 np0005466031 systemd[1]: Starting man-db-cache-update.service...
Oct  2 08:03:26 np0005466031 systemd[1]: Reloading.
Oct  2 08:03:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:26.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:26 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:03:26 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:03:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:26.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:26 np0005466031 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 08:03:27 np0005466031 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 08:03:27 np0005466031 systemd[1]: Finished man-db-cache-update.service.
Oct  2 08:03:27 np0005466031 systemd[1]: man-db-cache-update.service: Consumed 1.408s CPU time.
Oct  2 08:03:27 np0005466031 systemd[1]: run-re83bc569acbe40c78695f8a04a0465c9.service: Deactivated successfully.
Oct  2 08:03:27 np0005466031 python3.9[220822]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:28.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:28.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:28 np0005466031 python3.9[220973]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 08:03:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:29 np0005466031 python3.9[221129]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:30.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:03:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:30.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:03:30 np0005466031 podman[221207]: 2025-10-02 12:03:30.656528006 +0000 UTC m=+0.069441561 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:03:31 np0005466031 python3.9[221302]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 08:03:31 np0005466031 systemd[1]: Reloading.
Oct  2 08:03:31 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:03:31 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:03:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:32.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:32 np0005466031 python3.9[221487]: ansible-ansible.builtin.service_facts Invoked
Oct  2 08:03:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:32.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:32 np0005466031 network[221504]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 08:03:32 np0005466031 network[221505]: 'network-scripts' will be removed from distribution in near future.
Oct  2 08:03:32 np0005466031 network[221506]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 08:03:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:34.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:03:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:34.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:03:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:36.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:36.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:37 np0005466031 python3.9[221786]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:38.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:38.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:38 np0005466031 python3.9[221940]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:39 np0005466031 python3.9[222093]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:40 np0005466031 python3.9[222247]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:40.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:40.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:40 np0005466031 python3.9[222450]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:41 np0005466031 podman[222452]: 2025-10-02 12:03:41.098472545 +0000 UTC m=+0.087164321 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:03:41 np0005466031 python3.9[222622]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:03:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:42.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:03:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:42.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:42 np0005466031 python3.9[222776]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:43 np0005466031 podman[222901]: 2025-10-02 12:03:43.017354245 +0000 UTC m=+0.087210363 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:03:43 np0005466031 python3.9[222946]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:44.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:44.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:44 np0005466031 python3.9[223103]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.771313) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624771424, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1329, "num_deletes": 255, "total_data_size": 3128868, "memory_usage": 3172792, "flush_reason": "Manual Compaction"}
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624788521, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 2055989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16136, "largest_seqno": 17460, "table_properties": {"data_size": 2050299, "index_size": 3085, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11336, "raw_average_key_size": 18, "raw_value_size": 2038910, "raw_average_value_size": 3353, "num_data_blocks": 140, "num_entries": 608, "num_filter_entries": 608, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406505, "oldest_key_time": 1759406505, "file_creation_time": 1759406624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 17317 microseconds, and 11075 cpu microseconds.
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.788640) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 2055989 bytes OK
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.788674) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.789973) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.789998) EVENT_LOG_v1 {"time_micros": 1759406624789992, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.790023) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 3122646, prev total WAL file size 3122646, number of live WAL files 2.
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.791152) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(2007KB)], [30(7712KB)]
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624791232, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 9953199, "oldest_snapshot_seqno": -1}
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4232 keys, 9583697 bytes, temperature: kUnknown
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624840788, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 9583697, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9552721, "index_size": 19282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 105428, "raw_average_key_size": 24, "raw_value_size": 9473355, "raw_average_value_size": 2238, "num_data_blocks": 805, "num_entries": 4232, "num_filter_entries": 4232, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759406624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.841434) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9583697 bytes
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.842976) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.5 rd, 192.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(9.5) write-amplify(4.7) OK, records in: 4757, records dropped: 525 output_compression: NoCompression
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.843009) EVENT_LOG_v1 {"time_micros": 1759406624842995, "job": 16, "event": "compaction_finished", "compaction_time_micros": 49902, "compaction_time_cpu_micros": 24243, "output_level": 6, "num_output_files": 1, "total_output_size": 9583697, "num_input_records": 4757, "num_output_records": 4232, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624843716, "job": 16, "event": "table_file_deletion", "file_number": 32}
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624845891, "job": 16, "event": "table_file_deletion", "file_number": 30}
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.791011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.845951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.845958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.845960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.845961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:44.845963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:45 np0005466031 python3.9[223255]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:46 np0005466031 python3.9[223408]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:46.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:46.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:46 np0005466031 python3.9[223560]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:47 np0005466031 python3.9[223712]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:48 np0005466031 python3.9[223865]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:48.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:48.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:48 np0005466031 python3.9[224017]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:49 np0005466031 python3.9[224169]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:50.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:03:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:50.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:03:50 np0005466031 python3.9[224322]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:51 np0005466031 python3.9[224474]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:52 np0005466031 python3.9[224627]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:52.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:52.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:03:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:54.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:03:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:54.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.457441) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635457488, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 354, "num_deletes": 251, "total_data_size": 310838, "memory_usage": 318664, "flush_reason": "Manual Compaction"}
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635460265, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 204944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17465, "largest_seqno": 17814, "table_properties": {"data_size": 202801, "index_size": 307, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5354, "raw_average_key_size": 18, "raw_value_size": 198624, "raw_average_value_size": 684, "num_data_blocks": 14, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406625, "oldest_key_time": 1759406625, "file_creation_time": 1759406635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 2853 microseconds, and 1271 cpu microseconds.
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.460300) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 204944 bytes OK
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.460316) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.461946) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.461978) EVENT_LOG_v1 {"time_micros": 1759406635461971, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.461997) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 308433, prev total WAL file size 308433, number of live WAL files 2.
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.462727) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(200KB)], [33(9359KB)]
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635462829, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 9788641, "oldest_snapshot_seqno": -1}
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 4012 keys, 7761511 bytes, temperature: kUnknown
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635501744, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 7761511, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7733644, "index_size": 16749, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 101528, "raw_average_key_size": 25, "raw_value_size": 7659682, "raw_average_value_size": 1909, "num_data_blocks": 691, "num_entries": 4012, "num_filter_entries": 4012, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759406635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.502035) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7761511 bytes
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.503386) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 251.0 rd, 199.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(85.6) write-amplify(37.9) OK, records in: 4522, records dropped: 510 output_compression: NoCompression
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.503416) EVENT_LOG_v1 {"time_micros": 1759406635503400, "job": 18, "event": "compaction_finished", "compaction_time_micros": 38999, "compaction_time_cpu_micros": 18844, "output_level": 6, "num_output_files": 1, "total_output_size": 7761511, "num_input_records": 4522, "num_output_records": 4012, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635503644, "job": 18, "event": "table_file_deletion", "file_number": 35}
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635506034, "job": 18, "event": "table_file_deletion", "file_number": 33}
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.462483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.506102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.506109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.506111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.506113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:03:55.506115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:55 np0005466031 podman[224629]: 2025-10-02 12:03:55.678676069 +0000 UTC m=+0.108254320 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:03:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:56.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:56.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:58.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:03:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:58.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:00.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:00.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:00 np0005466031 podman[224683]: 2025-10-02 12:04:00.829034432 +0000 UTC m=+0.092007509 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:04:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:02.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:03 np0005466031 python3.9[224880]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:03 np0005466031 python3.9[225033]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:04.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:04.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:04 np0005466031 python3.9[225185]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:05 np0005466031 python3.9[225337]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:05 np0005466031 python3.9[225489]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:06.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:06.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:07 np0005466031 python3.9[225642]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:08.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:08.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:08 np0005466031 python3.9[225795]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 08:04:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:09 np0005466031 python3.9[225947]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 08:04:09 np0005466031 systemd[1]: Reloading.
Oct  2 08:04:09 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:04:09 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:04:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:10.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:10.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:10 np0005466031 python3.9[226135]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:11 np0005466031 podman[226260]: 2025-10-02 12:04:11.306461779 +0000 UTC m=+0.072827842 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:04:11 np0005466031 python3.9[226305]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:12 np0005466031 python3.9[226462]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:12.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:12.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:12 np0005466031 python3.9[226615]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:13 np0005466031 podman[226740]: 2025-10-02 12:04:13.253391033 +0000 UTC m=+0.082210215 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3)
Oct  2 08:04:13 np0005466031 python3.9[226778]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:13 np0005466031 python3.9[226939]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:14.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:14.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:14 np0005466031 python3.9[227092]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:15 np0005466031 python3.9[227245]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:16.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:16.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:18.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:18.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:19 np0005466031 python3.9[227514]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:04:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:04:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:04:19 np0005466031 python3.9[227682]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:20.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:20 np0005466031 python3.9[227835]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:20.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:21 np0005466031 python3.9[228037]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:22 np0005466031 python3.9[228190]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:22.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:22.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:22 np0005466031 python3.9[228342]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:23 np0005466031 python3.9[228494]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:24 np0005466031 python3.9[228647]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:24.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:24.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:24 np0005466031 python3.9[228799]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:25 np0005466031 python3.9[228951]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:04:25.811 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:04:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:04:25.812 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:04:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:04:25.812 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:04:25 np0005466031 python3.9[229103]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:26 np0005466031 podman[229105]: 2025-10-02 12:04:26.056610812 +0000 UTC m=+0.104880972 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:04:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:26.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:26.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:26 np0005466031 python3.9[229330]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:04:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:04:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:28.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:28.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:30.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:30.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:31 np0005466031 podman[229357]: 2025-10-02 12:04:31.633446772 +0000 UTC m=+0.063899184 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 08:04:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:32.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:32.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:33 np0005466031 python3.9[229504]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct  2 08:04:34 np0005466031 python3.9[229658]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 08:04:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:34.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:34.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:35 np0005466031 python3.9[229816]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 08:04:36 np0005466031 systemd-logind[786]: New session 52 of user zuul.
Oct  2 08:04:36 np0005466031 systemd[1]: Started Session 52 of User zuul.
Oct  2 08:04:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:36.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:36 np0005466031 systemd[1]: session-52.scope: Deactivated successfully.
Oct  2 08:04:36 np0005466031 systemd-logind[786]: Session 52 logged out. Waiting for processes to exit.
Oct  2 08:04:36 np0005466031 systemd-logind[786]: Removed session 52.
Oct  2 08:04:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:36.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:36 np0005466031 python3.9[230003]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:37 np0005466031 python3.9[230124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406676.52871-4359-184012363655578/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:38 np0005466031 python3.9[230275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:38.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:38.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:38 np0005466031 python3.9[230351]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:39 np0005466031 python3.9[230501]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:39 np0005466031 python3.9[230622]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406678.7427814-4359-192061706912794/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:40.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:40 np0005466031 python3.9[230773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:40.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:40 np0005466031 python3.9[230894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406679.8956616-4359-49140948069175/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=d01cc1b48d783e4ed08d12bb4d0a107aba230a69 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:41 np0005466031 python3.9[231094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:41 np0005466031 podman[231142]: 2025-10-02 12:04:41.692954862 +0000 UTC m=+0.126809948 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:04:42 np0005466031 python3.9[231236]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406680.9788265-4359-179734230448771/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:42.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:42.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:43 np0005466031 podman[231360]: 2025-10-02 12:04:43.384019207 +0000 UTC m=+0.066984143 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:04:43 np0005466031 python3.9[231406]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:44.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:44 np0005466031 python3.9[231562]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:44.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:45 np0005466031 python3.9[231714]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:04:45 np0005466031 python3.9[231867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:46.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:46.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:46 np0005466031 python3.9[231990]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759406685.5051894-4638-1820310492301/.source _original_basename=.5rsqml8m follow=False checksum=58da26613b3d1f9437328409095ce83a5f3fdaac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct  2 08:04:47 np0005466031 python3.9[232142]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:04:48 np0005466031 python3.9[232295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:48.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:48.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:48 np0005466031 python3.9[232416]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406687.7708845-4716-183223715234817/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:49 np0005466031 python3.9[232566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:50 np0005466031 python3.9[232688]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406689.263757-4761-207238668519782/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:50.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:50.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:51 np0005466031 python3.9[232840]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct  2 08:04:52 np0005466031 python3.9[232993]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 08:04:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:52.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:52.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:53 np0005466031 python3[233145]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 08:04:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:54.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:54.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:56.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:04:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:56.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:04:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:58.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:04:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:58.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:58 np0005466031 podman[233203]: 2025-10-02 12:04:58.699330169 +0000 UTC m=+2.122425776 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:04:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:00.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:00.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:02.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:05:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:02.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:05:02 np0005466031 podman[233298]: 2025-10-02 12:05:02.569445688 +0000 UTC m=+0.050454374 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible)
Oct  2 08:05:02 np0005466031 podman[233160]: 2025-10-02 12:05:02.572227029 +0000 UTC m=+9.489112234 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  2 08:05:02 np0005466031 podman[233339]: 2025-10-02 12:05:02.710647832 +0000 UTC m=+0.045655875 container create 4b268cbe8ba940b61f4374252557cdf92d016917c12c8d89ba58984959f539a3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:05:02 np0005466031 podman[233339]: 2025-10-02 12:05:02.684104113 +0000 UTC m=+0.019112176 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  2 08:05:02 np0005466031 python3[233145]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct  2 08:05:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:04.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:04.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:04 np0005466031 python3.9[233530]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:05:06 np0005466031 python3.9[233685]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct  2 08:05:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:05:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:06.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:05:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:06.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:06 np0005466031 python3.9[233837]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 08:05:07 np0005466031 python3[233989]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 08:05:08 np0005466031 podman[234029]: 2025-10-02 12:05:08.11878002 +0000 UTC m=+0.050675980 container create a75faed7829a5c48a1f47b8d72396c5433dc938409f0f06c5248e11f42bb748a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct  2 08:05:08 np0005466031 podman[234029]: 2025-10-02 12:05:08.092179409 +0000 UTC m=+0.024075379 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  2 08:05:08 np0005466031 python3[233989]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct  2 08:05:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:08.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:05:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:08.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:05:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:10 np0005466031 python3.9[234221]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:05:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:10.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:10.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:11 np0005466031 python3.9[234375]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:05:11 np0005466031 python3.9[234526]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406711.2645886-5037-217913078359458/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:05:12 np0005466031 podman[234575]: 2025-10-02 12:05:12.166297283 +0000 UTC m=+0.060109154 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  2 08:05:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:12.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:12 np0005466031 python3.9[234621]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 08:05:12 np0005466031 systemd[1]: Reloading.
Oct  2 08:05:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:05:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:12.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:05:12 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:05:12 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:05:13 np0005466031 python3.9[234732]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:05:13 np0005466031 systemd[1]: Reloading.
Oct  2 08:05:13 np0005466031 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:05:13 np0005466031 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:05:13 np0005466031 systemd[1]: Starting nova_compute container...
Oct  2 08:05:13 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:05:13 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c7f88a88b3172a6483f2dd3e3b59bb91eee7525a20c7e7012f398c202c8d5/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:13 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c7f88a88b3172a6483f2dd3e3b59bb91eee7525a20c7e7012f398c202c8d5/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:13 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c7f88a88b3172a6483f2dd3e3b59bb91eee7525a20c7e7012f398c202c8d5/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:13 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c7f88a88b3172a6483f2dd3e3b59bb91eee7525a20c7e7012f398c202c8d5/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:13 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c7f88a88b3172a6483f2dd3e3b59bb91eee7525a20c7e7012f398c202c8d5/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:13 np0005466031 podman[234771]: 2025-10-02 12:05:13.769337085 +0000 UTC m=+0.097102667 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:05:13 np0005466031 podman[234773]: 2025-10-02 12:05:13.975825252 +0000 UTC m=+0.303804160 container init a75faed7829a5c48a1f47b8d72396c5433dc938409f0f06c5248e11f42bb748a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:05:13 np0005466031 podman[234773]: 2025-10-02 12:05:13.987226873 +0000 UTC m=+0.315205791 container start a75faed7829a5c48a1f47b8d72396c5433dc938409f0f06c5248e11f42bb748a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:05:13 np0005466031 nova_compute[234802]: + sudo -E kolla_set_configs
Oct  2 08:05:14 np0005466031 podman[234773]: nova_compute
Oct  2 08:05:14 np0005466031 systemd[1]: Started nova_compute container.
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Validating config file
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Copying service configuration files
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Deleting /etc/ceph
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Creating directory /etc/ceph
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /etc/ceph
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Writing out command to execute
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:14 np0005466031 nova_compute[234802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 08:05:14 np0005466031 nova_compute[234802]: ++ cat /run_command
Oct  2 08:05:14 np0005466031 nova_compute[234802]: + CMD=nova-compute
Oct  2 08:05:14 np0005466031 nova_compute[234802]: + ARGS=
Oct  2 08:05:14 np0005466031 nova_compute[234802]: + sudo kolla_copy_cacerts
Oct  2 08:05:14 np0005466031 nova_compute[234802]: + [[ ! -n '' ]]
Oct  2 08:05:14 np0005466031 nova_compute[234802]: + . kolla_extend_start
Oct  2 08:05:14 np0005466031 nova_compute[234802]: Running command: 'nova-compute'
Oct  2 08:05:14 np0005466031 nova_compute[234802]: + echo 'Running command: '\''nova-compute'\'''
Oct  2 08:05:14 np0005466031 nova_compute[234802]: + umask 0022
Oct  2 08:05:14 np0005466031 nova_compute[234802]: + exec nova-compute
Oct  2 08:05:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:14.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:14.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:15 np0005466031 python3.9[234971]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:05:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:16.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:05:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:16.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:05:16 np0005466031 python3.9[235122]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:05:16 np0005466031 nova_compute[234802]: 2025-10-02 12:05:16.883 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 08:05:16 np0005466031 nova_compute[234802]: 2025-10-02 12:05:16.884 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 08:05:16 np0005466031 nova_compute[234802]: 2025-10-02 12:05:16.884 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 08:05:16 np0005466031 nova_compute[234802]: 2025-10-02 12:05:16.884 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.064 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.085 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.597 2 INFO nova.virt.driver [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  2 08:05:17 np0005466031 python3.9[235276]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.797 2 INFO nova.compute.provider_config [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.814 2 DEBUG oslo_concurrency.lockutils [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.815 2 DEBUG oslo_concurrency.lockutils [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.815 2 DEBUG oslo_concurrency.lockutils [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.816 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.816 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.816 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.816 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.817 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.817 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.817 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.818 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.818 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.818 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.818 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.818 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.819 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.819 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.819 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.819 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.819 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.820 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.820 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.820 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.820 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.820 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.821 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.821 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.821 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.821 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.821 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.822 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.822 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.822 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.822 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.822 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.823 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.823 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.823 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.823 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.823 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.823 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.824 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.824 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.824 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.824 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.825 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.825 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.825 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.825 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.825 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.825 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.826 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.826 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.826 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.826 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.826 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.827 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.827 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.827 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.827 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.827 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.827 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.828 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.828 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.828 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.828 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.828 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.829 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.829 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.829 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.829 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.829 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.829 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.830 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.830 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.830 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.830 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.830 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.831 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.831 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.831 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.831 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.831 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.832 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.832 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.832 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.832 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.832 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.832 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.833 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.833 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.833 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.833 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.833 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.834 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.834 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.834 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.834 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.834 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.834 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.835 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.835 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.835 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.835 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.835 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.836 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.836 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.836 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.836 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.836 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.836 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.837 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.837 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.837 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.837 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.837 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.838 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.838 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.838 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.838 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.838 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.838 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.839 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.839 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.839 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.839 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.839 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.840 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.840 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.840 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.840 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.840 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.840 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.841 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.841 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.841 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.841 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.841 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.842 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.842 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.842 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.842 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.842 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.842 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.843 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.843 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.843 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.843 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.843 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.844 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.844 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.844 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.844 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.844 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.845 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.845 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.845 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.845 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.845 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.846 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.846 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.846 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.846 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.846 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.846 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.847 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.847 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.847 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.847 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.847 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.848 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.848 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.848 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.848 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.848 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.849 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.849 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.849 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.849 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.849 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.849 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.850 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.850 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.850 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.850 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.850 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.851 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.851 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.851 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.851 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.851 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.852 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.852 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.852 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.852 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.852 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.853 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.853 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.853 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.853 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.853 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.854 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.854 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.854 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.854 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.854 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.854 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.855 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.855 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.855 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.855 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.855 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.856 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.856 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.856 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.856 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.856 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.856 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.857 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.857 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.857 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.857 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.857 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.858 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.858 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.858 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.858 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.858 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.858 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.859 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.859 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.859 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.859 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.859 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.860 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.860 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.860 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.860 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.860 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.861 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.861 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.861 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.861 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.861 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.862 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.862 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.862 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.862 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.862 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.862 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.863 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.863 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.863 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.863 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.863 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.863 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.864 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.864 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.864 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.864 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.864 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.865 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.865 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.865 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.865 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.865 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.866 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.866 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.866 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.866 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.866 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.866 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.867 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.867 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.867 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.867 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.867 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.868 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.868 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.868 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.868 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.868 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.869 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.869 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.869 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.869 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.869 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.869 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.870 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.870 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.870 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.870 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.870 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.871 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.871 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.871 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.871 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.871 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.872 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.872 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.872 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.872 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.872 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.873 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.873 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.873 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.873 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.873 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.874 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.874 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.874 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.874 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.874 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.875 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.875 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.875 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.875 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.875 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.875 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.876 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.876 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.876 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.876 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.876 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.877 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.877 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.877 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.877 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.877 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.878 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.878 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.878 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.878 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.878 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.878 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.879 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.879 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.879 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.879 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.879 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.880 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.880 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.880 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.880 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.880 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.881 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.881 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.881 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.881 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.881 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.881 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.882 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.882 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.882 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.882 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.883 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.883 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.883 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.883 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.884 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.884 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.884 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.885 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.885 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.886 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.886 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.887 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.887 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.888 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.888 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.888 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.888 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.889 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.889 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.889 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.889 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.889 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.890 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.890 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.890 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.890 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.890 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.890 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.891 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.891 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.891 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.891 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.892 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.892 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.892 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.892 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.892 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.892 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.893 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.893 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.893 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.893 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.893 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.894 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.894 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.894 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.894 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.895 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.895 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.895 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.895 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.895 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.896 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.896 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.896 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.896 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.896 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.897 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.897 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.897 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.897 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.897 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.898 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.898 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.898 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.898 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.898 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.898 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.899 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.899 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.899 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.899 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.899 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.899 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.899 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.900 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.900 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.900 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.900 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.900 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.901 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.901 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.901 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.901 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.901 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.901 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.901 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.902 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.902 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.902 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.902 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.902 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.903 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.903 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.903 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.903 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.903 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.903 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.904 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.904 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.904 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.905 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.905 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.905 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.905 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.905 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.905 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.905 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.906 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.906 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.906 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.906 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.906 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.906 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.907 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.907 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.907 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.907 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.907 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.907 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.908 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.908 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.908 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.908 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.908 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.908 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.908 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.909 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.909 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.909 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.909 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.909 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.909 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.909 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.910 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.910 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.910 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.910 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.910 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.910 2 WARNING oslo_config.cfg [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  2 08:05:17 np0005466031 nova_compute[234802]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  2 08:05:17 np0005466031 nova_compute[234802]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  2 08:05:17 np0005466031 nova_compute[234802]: and ``live_migration_inbound_addr`` respectively.
Oct  2 08:05:17 np0005466031 nova_compute[234802]: ).  Its value may be silently ignored in the future.#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.911 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.911 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.911 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.911 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.911 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.911 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.912 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.912 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.912 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.912 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.912 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.912 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.912 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.913 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.913 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.913 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.913 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.913 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.913 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.rbd_secret_uuid        = 20fdc58c-b037-5094-a8ef-d490aa7c36f3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.913 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.914 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.914 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.914 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.914 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.914 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.914 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.914 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.915 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.915 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.915 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.915 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.915 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.915 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.916 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.916 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.916 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.916 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.916 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.916 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.916 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.917 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.917 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.917 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.917 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.917 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.917 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.917 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.918 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.918 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.918 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.918 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.918 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.918 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.918 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.919 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.919 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.919 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.919 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.919 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.919 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.919 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.920 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.920 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.920 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.920 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.920 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.920 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.920 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.921 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.921 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.921 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.921 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.921 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.921 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.921 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.922 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.922 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.922 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.922 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.922 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.922 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.922 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.923 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.923 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.923 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.923 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.923 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.923 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.923 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.924 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.924 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.924 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.924 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.924 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.924 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.924 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.925 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.925 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.925 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.925 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.925 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.925 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.925 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.926 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.926 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.926 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.926 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.926 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.926 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.926 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.926 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.927 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.927 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.927 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.927 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.927 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.927 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.928 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.928 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.928 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.928 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.928 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.928 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.928 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.929 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.929 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.929 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.929 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.929 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.929 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.930 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.930 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.930 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.930 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.930 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.930 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.930 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.930 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.931 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.931 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.931 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.931 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.931 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.932 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.932 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.932 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.932 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.932 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.932 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.933 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.933 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.933 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.933 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.933 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.933 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.933 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.934 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.934 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.934 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.934 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.934 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.934 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.934 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.935 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.935 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.935 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.935 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.935 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.935 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.935 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.936 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.936 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.936 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.936 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.936 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.936 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.936 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.936 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.937 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.937 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.937 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.937 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.937 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.938 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.938 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.938 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.938 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.938 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.938 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.938 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.939 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.939 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.939 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.939 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.939 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.939 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.939 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.940 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.940 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.940 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.940 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.940 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.940 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.940 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.941 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.941 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.941 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.941 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.941 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.941 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.941 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.942 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.942 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.942 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.942 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.942 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.942 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.942 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.943 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.943 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.943 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.943 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.943 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.943 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.943 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.944 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.944 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.944 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.944 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.944 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.944 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.944 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.945 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.945 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.945 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.945 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.945 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.945 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.945 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.946 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.946 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.946 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.946 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.946 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.946 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.947 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.947 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.947 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.947 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.947 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.948 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.948 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.948 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.948 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.948 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.948 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.949 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.949 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.949 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.949 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.949 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.949 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.950 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.950 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.950 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.950 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.950 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.950 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.951 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.951 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.951 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.951 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.951 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.951 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.951 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.952 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.952 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.952 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.952 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.952 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.952 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.952 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.953 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.953 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.953 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.953 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.953 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.953 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.953 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.954 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.954 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.954 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.954 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.954 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.954 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.954 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.955 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.955 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.955 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.955 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.955 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.955 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.955 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.956 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.956 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.956 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.956 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.956 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.956 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.956 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.957 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.957 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.957 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.957 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.957 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.957 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.957 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.958 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.958 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.958 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.958 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.958 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.958 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.958 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.959 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.959 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.959 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.959 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.959 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.959 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.959 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.960 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.960 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.960 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.960 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.960 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.960 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.960 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.961 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.961 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.961 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.961 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.961 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.961 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.962 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.962 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.962 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.962 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.962 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.962 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.962 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.962 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.963 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.963 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.963 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.963 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.963 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.963 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.964 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.964 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.964 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.964 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.964 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.964 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.964 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.965 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.965 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.965 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.965 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.965 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.965 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.965 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.966 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.966 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.966 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.966 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.966 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.966 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.966 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.967 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.967 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.967 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.967 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.967 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.967 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.968 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.968 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.968 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.968 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.968 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.968 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.968 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.969 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.969 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.969 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.969 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.969 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.970 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.970 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.970 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.970 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.970 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.970 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.971 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.971 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.971 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.971 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.971 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.971 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.972 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.972 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.972 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.972 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.972 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.972 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.973 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.973 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.973 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.973 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.973 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.973 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.973 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.974 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.974 2 DEBUG oslo_service.service [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.975 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.992 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.993 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.993 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  2 08:05:17 np0005466031 nova_compute[234802]: 2025-10-02 12:05:17.994 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  2 08:05:18 np0005466031 systemd[1]: Starting libvirt QEMU daemon...
Oct  2 08:05:18 np0005466031 systemd[1]: Started libvirt QEMU daemon.
Oct  2 08:05:18 np0005466031 nova_compute[234802]: 2025-10-02 12:05:18.069 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f64e5a9d4c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  2 08:05:18 np0005466031 nova_compute[234802]: 2025-10-02 12:05:18.071 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f64e5a9d4c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  2 08:05:18 np0005466031 nova_compute[234802]: 2025-10-02 12:05:18.072 2 INFO nova.virt.libvirt.driver [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  2 08:05:18 np0005466031 nova_compute[234802]: 2025-10-02 12:05:18.091 2 WARNING nova.virt.libvirt.driver [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Cannot update service status on host "compute-2.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.#033[00m
Oct  2 08:05:18 np0005466031 nova_compute[234802]: 2025-10-02 12:05:18.091 2 DEBUG nova.virt.libvirt.volume.mount [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  2 08:05:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:18.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:18.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:18 np0005466031 python3.9[235481]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  2 08:05:18 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:05:18 np0005466031 nova_compute[234802]: 2025-10-02 12:05:18.987 2 INFO nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Libvirt host capabilities <capabilities>
Oct  2 08:05:18 np0005466031 nova_compute[234802]: 
Oct  2 08:05:18 np0005466031 nova_compute[234802]:  <host>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <uuid>91df6c8e-6fe2-49d2-9991-360b14608f11</uuid>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <cpu>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <arch>x86_64</arch>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <model>EPYC-Rome-v4</model>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <vendor>AMD</vendor>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <microcode version='16777317'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <signature family='23' model='49' stepping='0'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='x2apic'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='tsc-deadline'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='osxsave'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='hypervisor'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='tsc_adjust'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='spec-ctrl'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='stibp'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='arch-capabilities'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='ssbd'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='cmp_legacy'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='topoext'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='virt-ssbd'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='lbrv'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='tsc-scale'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='vmcb-clean'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='pause-filter'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='pfthreshold'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='svme-addr-chk'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='rdctl-no'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='skip-l1dfl-vmentry'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='mds-no'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <feature name='pschange-mc-no'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <pages unit='KiB' size='4'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <pages unit='KiB' size='2048'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <pages unit='KiB' size='1048576'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    </cpu>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <power_management>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <suspend_mem/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    </power_management>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <iommu support='no'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <migration_features>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <live/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <uri_transports>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:        <uri_transport>tcp</uri_transport>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:        <uri_transport>rdma</uri_transport>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      </uri_transports>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    </migration_features>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <topology>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <cells num='1'>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:        <cell id='0'>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:          <memory unit='KiB'>7864104</memory>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:          <pages unit='KiB' size='4'>1966026</pages>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:          <pages unit='KiB' size='2048'>0</pages>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:          <distances>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:            <sibling id='0' value='10'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:          </distances>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:          <cpus num='8'>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:          </cpus>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:        </cell>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      </cells>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    </topology>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <cache>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    </cache>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <secmodel>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <model>selinux</model>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <doi>0</doi>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    </secmodel>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <secmodel>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <model>dac</model>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <doi>0</doi>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    </secmodel>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:  </host>
Oct  2 08:05:18 np0005466031 nova_compute[234802]: 
Oct  2 08:05:18 np0005466031 nova_compute[234802]:  <guest>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <os_type>hvm</os_type>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <arch name='i686'>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <wordsize>32</wordsize>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <domain type='qemu'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <domain type='kvm'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    </arch>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <features>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <pae/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <nonpae/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <acpi default='on' toggle='yes'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <apic default='on' toggle='no'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <cpuselection/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <deviceboot/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <disksnapshot default='on' toggle='no'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <externalSnapshot/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    </features>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:  </guest>
Oct  2 08:05:18 np0005466031 nova_compute[234802]: 
Oct  2 08:05:18 np0005466031 nova_compute[234802]:  <guest>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <os_type>hvm</os_type>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <arch name='x86_64'>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <wordsize>64</wordsize>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <domain type='qemu'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <domain type='kvm'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    </arch>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    <features>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <acpi default='on' toggle='yes'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <apic default='on' toggle='no'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <cpuselection/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <deviceboot/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <disksnapshot default='on' toggle='no'/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:      <externalSnapshot/>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:    </features>
Oct  2 08:05:18 np0005466031 nova_compute[234802]:  </guest>
Oct  2 08:05:18 np0005466031 nova_compute[234802]: 
Oct  2 08:05:18 np0005466031 nova_compute[234802]: </capabilities>
Oct  2 08:05:18 np0005466031 nova_compute[234802]: #033[00m
Oct  2 08:05:18 np0005466031 nova_compute[234802]: 2025-10-02 12:05:18.997 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.033 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  2 08:05:19 np0005466031 nova_compute[234802]: <domainCapabilities>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <domain>kvm</domain>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <arch>i686</arch>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <vcpu max='240'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <iothreads supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <os supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <enum name='firmware'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <loader supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>rom</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>pflash</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='readonly'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>yes</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>no</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='secure'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>no</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </loader>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </os>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <cpu>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>on</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>off</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='maximumMigratable'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>on</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>off</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <vendor>AMD</vendor>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='succor'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='custom' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cooperlake'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amd-psfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='auto-ibrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='stibp-always-on'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amd-psfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='auto-ibrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='stibp-always-on'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amd-psfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='stibp-always-on'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='GraniteRapids'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='prefetchiti'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='prefetchiti'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10-128'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10-256'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10-512'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='prefetchiti'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='KnightsMill'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512er'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512pf'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512er'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512pf'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tbm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tbm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SierraForest'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cmpccxadd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cmpccxadd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='athlon'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='athlon-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='core2duo'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='core2duo-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='coreduo'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='coreduo-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='n270'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='n270-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='phenom'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='phenom-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </cpu>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <memoryBacking supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <enum name='sourceType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>file</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>anonymous</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>memfd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </memoryBacking>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <devices>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <disk supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='diskDevice'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>disk</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>cdrom</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>floppy</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>lun</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='bus'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>ide</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>fdc</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>scsi</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>usb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>sata</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-non-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </disk>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <graphics supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vnc</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>egl-headless</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>dbus</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </graphics>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <video supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='modelType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vga</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>cirrus</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>none</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>bochs</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>ramfb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </video>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <hostdev supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='mode'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>subsystem</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='startupPolicy'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>default</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>mandatory</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>requisite</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>optional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='subsysType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>usb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>pci</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>scsi</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='capsType'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='pciBackend'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </hostdev>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <rng supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-non-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendModel'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>random</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>egd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>builtin</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </rng>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <filesystem supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='driverType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>path</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>handle</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtiofs</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </filesystem>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <tpm supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>tpm-tis</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>tpm-crb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendModel'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>emulator</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>external</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendVersion'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>2.0</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </tpm>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <redirdev supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='bus'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>usb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </redirdev>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <channel supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>pty</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>unix</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </channel>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <crypto supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>qemu</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendModel'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>builtin</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </crypto>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <interface supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>default</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>passt</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </interface>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <panic supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>isa</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>hyperv</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </panic>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </devices>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <features>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <gic supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <genid supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <backup supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <async-teardown supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <ps2 supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <sev supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <sgx supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <hyperv supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='features'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>relaxed</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vapic</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>spinlocks</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vpindex</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>runtime</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>synic</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>stimer</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>reset</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vendor_id</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>frequencies</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>reenlightenment</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>tlbflush</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>ipi</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>avic</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>emsr_bitmap</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>xmm_input</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </hyperv>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <launchSecurity supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </features>
Oct  2 08:05:19 np0005466031 nova_compute[234802]: </domainCapabilities>
Oct  2 08:05:19 np0005466031 nova_compute[234802]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.042 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  2 08:05:19 np0005466031 nova_compute[234802]: <domainCapabilities>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <domain>kvm</domain>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <arch>i686</arch>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <vcpu max='4096'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <iothreads supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <os supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <enum name='firmware'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <loader supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>rom</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>pflash</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='readonly'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>yes</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>no</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='secure'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>no</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </loader>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </os>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <cpu>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>on</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>off</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='maximumMigratable'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>on</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>off</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <vendor>AMD</vendor>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='succor'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='custom' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cooperlake'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amd-psfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='auto-ibrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='stibp-always-on'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amd-psfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='auto-ibrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='stibp-always-on'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amd-psfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='stibp-always-on'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='GraniteRapids'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='prefetchiti'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='prefetchiti'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10-128'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10-256'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10-512'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='prefetchiti'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='KnightsMill'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512er'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512pf'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512er'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512pf'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tbm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tbm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SierraForest'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cmpccxadd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cmpccxadd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='athlon'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='athlon-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='core2duo'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='core2duo-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='coreduo'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='coreduo-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='n270'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='n270-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='phenom'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='phenom-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </cpu>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <memoryBacking supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <enum name='sourceType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>file</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>anonymous</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>memfd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </memoryBacking>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <devices>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <disk supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='diskDevice'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>disk</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>cdrom</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>floppy</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>lun</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='bus'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>fdc</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>scsi</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>usb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>sata</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-non-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </disk>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <graphics supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vnc</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>egl-headless</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>dbus</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </graphics>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <video supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='modelType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vga</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>cirrus</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>none</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>bochs</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>ramfb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </video>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <hostdev supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='mode'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>subsystem</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='startupPolicy'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>default</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>mandatory</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>requisite</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>optional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='subsysType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>usb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>pci</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>scsi</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='capsType'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='pciBackend'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </hostdev>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <rng supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-non-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendModel'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>random</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>egd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>builtin</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </rng>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <filesystem supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='driverType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>path</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>handle</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtiofs</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </filesystem>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <tpm supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>tpm-tis</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>tpm-crb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendModel'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>emulator</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>external</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendVersion'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>2.0</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </tpm>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <redirdev supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='bus'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>usb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </redirdev>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <channel supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>pty</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>unix</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </channel>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <crypto supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>qemu</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendModel'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>builtin</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </crypto>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <interface supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>default</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>passt</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </interface>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <panic supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>isa</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>hyperv</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </panic>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </devices>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <features>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <gic supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <genid supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <backup supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <async-teardown supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <ps2 supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <sev supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <sgx supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <hyperv supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='features'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>relaxed</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vapic</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>spinlocks</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vpindex</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>runtime</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>synic</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>stimer</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>reset</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vendor_id</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>frequencies</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>reenlightenment</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>tlbflush</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>ipi</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>avic</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>emsr_bitmap</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>xmm_input</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </hyperv>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <launchSecurity supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </features>
Oct  2 08:05:19 np0005466031 nova_compute[234802]: </domainCapabilities>
Oct  2 08:05:19 np0005466031 nova_compute[234802]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.104 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.108 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  2 08:05:19 np0005466031 nova_compute[234802]: <domainCapabilities>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <domain>kvm</domain>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <arch>x86_64</arch>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <vcpu max='240'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <iothreads supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <os supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <enum name='firmware'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <loader supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>rom</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>pflash</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='readonly'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>yes</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>no</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='secure'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>no</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </loader>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </os>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <cpu>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>on</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>off</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='maximumMigratable'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>on</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>off</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <vendor>AMD</vendor>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='succor'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='custom' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cooperlake'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amd-psfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='auto-ibrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='stibp-always-on'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amd-psfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='auto-ibrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='stibp-always-on'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amd-psfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='stibp-always-on'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='GraniteRapids'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='prefetchiti'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='prefetchiti'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10-128'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10-256'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10-512'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='prefetchiti'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='KnightsMill'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512er'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512pf'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512er'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512pf'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tbm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tbm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SierraForest'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cmpccxadd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cmpccxadd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='athlon'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='athlon-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='core2duo'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='core2duo-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='coreduo'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='coreduo-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='n270'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='n270-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='phenom'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='phenom-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </cpu>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <memoryBacking supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <enum name='sourceType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>file</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>anonymous</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>memfd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </memoryBacking>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <devices>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <disk supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='diskDevice'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>disk</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>cdrom</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>floppy</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>lun</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='bus'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>ide</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>fdc</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>scsi</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>usb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>sata</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-non-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </disk>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <graphics supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vnc</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>egl-headless</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>dbus</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </graphics>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <video supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='modelType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vga</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>cirrus</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>none</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>bochs</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>ramfb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </video>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <hostdev supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='mode'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>subsystem</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='startupPolicy'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>default</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>mandatory</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>requisite</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>optional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='subsysType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>usb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>pci</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>scsi</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='capsType'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='pciBackend'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </hostdev>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <rng supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-non-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendModel'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>random</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>egd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>builtin</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </rng>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <filesystem supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='driverType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>path</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>handle</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtiofs</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </filesystem>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <tpm supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>tpm-tis</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>tpm-crb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendModel'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>emulator</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>external</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendVersion'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>2.0</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </tpm>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <redirdev supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='bus'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>usb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </redirdev>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <channel supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>pty</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>unix</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </channel>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <crypto supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>qemu</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendModel'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>builtin</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </crypto>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <interface supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>default</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>passt</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </interface>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <panic supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>isa</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>hyperv</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </panic>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </devices>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <features>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <gic supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <genid supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <backup supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <async-teardown supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <ps2 supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <sev supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <sgx supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <hyperv supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='features'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>relaxed</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vapic</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>spinlocks</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vpindex</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>runtime</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>synic</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>stimer</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>reset</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vendor_id</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>frequencies</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>reenlightenment</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>tlbflush</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>ipi</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>avic</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>emsr_bitmap</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>xmm_input</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </hyperv>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <launchSecurity supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </features>
Oct  2 08:05:19 np0005466031 nova_compute[234802]: </domainCapabilities>
Oct  2 08:05:19 np0005466031 nova_compute[234802]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.184 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  2 08:05:19 np0005466031 nova_compute[234802]: <domainCapabilities>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <domain>kvm</domain>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <arch>x86_64</arch>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <vcpu max='4096'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <iothreads supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <os supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <enum name='firmware'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>efi</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <loader supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>rom</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>pflash</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='readonly'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>yes</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>no</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='secure'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>yes</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>no</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </loader>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </os>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <cpu>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>on</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>off</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='maximumMigratable'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>on</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>off</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <vendor>AMD</vendor>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='succor'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <mode name='custom' supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cooperlake'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Denverton-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amd-psfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='auto-ibrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='stibp-always-on'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amd-psfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='auto-ibrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='stibp-always-on'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amd-psfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='stibp-always-on'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='EPYC-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='GraniteRapids'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='prefetchiti'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='prefetchiti'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10-128'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10-256'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx10-512'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='prefetchiti'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Haswell-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='KnightsMill'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512er'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512pf'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512er'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512pf'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tbm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fma4'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tbm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xop'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='amx-tile'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-bf16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-fp16'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bitalg'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrc'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fzrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='la57'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='taa-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xfd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SierraForest'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cmpccxadd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ifma'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cmpccxadd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fbsdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='fsrs'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ibrs-all'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mcdt-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pbrsb-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='psdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='serialize'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vaes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='hle'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='rtm'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512bw'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512cd'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512dq'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512f'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='avx512vl'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='invpcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pcid'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='pku'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='mpx'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='core-capability'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='split-lock-detect'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='cldemote'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='erms'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='gfni'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdir64b'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='movdiri'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='xsaves'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='athlon'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='athlon-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='core2duo'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='core2duo-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='coreduo'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='coreduo-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='n270'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='n270-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='ss'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='phenom'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <blockers model='phenom-v1'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnow'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <feature name='3dnowext'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </blockers>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </mode>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </cpu>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <memoryBacking supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <enum name='sourceType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>file</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>anonymous</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <value>memfd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </memoryBacking>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <devices>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <disk supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='diskDevice'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>disk</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>cdrom</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>floppy</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>lun</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='bus'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>fdc</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>scsi</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>usb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>sata</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-non-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </disk>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <graphics supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vnc</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>egl-headless</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>dbus</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </graphics>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <video supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='modelType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vga</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>cirrus</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>none</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>bochs</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>ramfb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </video>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <hostdev supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='mode'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>subsystem</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='startupPolicy'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>default</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>mandatory</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>requisite</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>optional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='subsysType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>usb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>pci</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>scsi</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='capsType'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='pciBackend'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </hostdev>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <rng supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtio-non-transitional</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendModel'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>random</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>egd</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>builtin</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </rng>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <filesystem supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='driverType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>path</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>handle</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>virtiofs</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </filesystem>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <tpm supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>tpm-tis</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>tpm-crb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendModel'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>emulator</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>external</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendVersion'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>2.0</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </tpm>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <redirdev supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='bus'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>usb</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </redirdev>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <channel supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>pty</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>unix</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </channel>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <crypto supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='type'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>qemu</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendModel'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>builtin</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </crypto>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <interface supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='backendType'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>default</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>passt</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </interface>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <panic supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='model'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>isa</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>hyperv</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </panic>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </devices>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <features>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <gic supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <genid supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <backup supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <async-teardown supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <ps2 supported='yes'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <sev supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <sgx supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <hyperv supported='yes'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      <enum name='features'>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>relaxed</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vapic</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>spinlocks</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vpindex</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>runtime</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>synic</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>stimer</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>reset</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>vendor_id</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>frequencies</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>reenlightenment</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>tlbflush</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>ipi</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>avic</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>emsr_bitmap</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:        <value>xmm_input</value>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:      </enum>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    </hyperv>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:    <launchSecurity supported='no'/>
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  </features>
Oct  2 08:05:19 np0005466031 nova_compute[234802]: </domainCapabilities>
Oct  2 08:05:19 np0005466031 nova_compute[234802]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.241 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.242 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.242 2 DEBUG nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.242 2 INFO nova.virt.libvirt.host [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Secure Boot support detected
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.244 2 INFO nova.virt.libvirt.driver [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.244 2 INFO nova.virt.libvirt.driver [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.255 2 DEBUG nova.virt.libvirt.driver [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] cpu compare xml: <cpu match="exact">
Oct  2 08:05:19 np0005466031 nova_compute[234802]:  <model>Nehalem</model>
Oct  2 08:05:19 np0005466031 nova_compute[234802]: </cpu>
Oct  2 08:05:19 np0005466031 nova_compute[234802]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.258 2 DEBUG nova.virt.libvirt.driver [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.286 2 INFO nova.virt.node [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Determined node identity f694d536-1dcd-4bb3-8516-534a40cdf6d7 from /var/lib/nova/compute_id
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.301 2 WARNING nova.compute.manager [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Compute nodes ['f694d536-1dcd-4bb3-8516-534a40cdf6d7'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.332 2 INFO nova.compute.manager [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.365 2 WARNING nova.compute.manager [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.366 2 DEBUG oslo_concurrency.lockutils [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.366 2 DEBUG oslo_concurrency.lockutils [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.366 2 DEBUG oslo_concurrency.lockutils [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.366 2 DEBUG nova.compute.resource_tracker [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.367 2 DEBUG oslo_concurrency.processutils [None req-b9b3d2dd-3fca-4d58-a40a-6d929df4c9d9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:05:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:19 np0005466031 python3.9[235668]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:05:19 np0005466031 systemd[1]: Stopping nova_compute container...
Oct  2 08:05:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:05:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2400835705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.782 2 DEBUG oslo_concurrency.lockutils [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.782 2 DEBUG oslo_concurrency.lockutils [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:05:19 np0005466031 nova_compute[234802]: 2025-10-02 12:05:19.783 2 DEBUG oslo_concurrency.lockutils [None req-c719abb5-01e5-414b-bb53-a14045cf703e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:05:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:20.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:20 np0005466031 virtqemud[235323]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct  2 08:05:20 np0005466031 virtqemud[235323]: hostname: compute-2
Oct  2 08:05:20 np0005466031 virtqemud[235323]: End of file while reading data: Input/output error
Oct  2 08:05:20 np0005466031 systemd[1]: libpod-a75faed7829a5c48a1f47b8d72396c5433dc938409f0f06c5248e11f42bb748a.scope: Deactivated successfully.
Oct  2 08:05:20 np0005466031 systemd[1]: libpod-a75faed7829a5c48a1f47b8d72396c5433dc938409f0f06c5248e11f42bb748a.scope: Consumed 4.002s CPU time.
Oct  2 08:05:20 np0005466031 podman[235692]: 2025-10-02 12:05:20.356511239 +0000 UTC m=+0.637246908 container died a75faed7829a5c48a1f47b8d72396c5433dc938409f0f06c5248e11f42bb748a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:05:20 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a75faed7829a5c48a1f47b8d72396c5433dc938409f0f06c5248e11f42bb748a-userdata-shm.mount: Deactivated successfully.
Oct  2 08:05:20 np0005466031 systemd[1]: var-lib-containers-storage-overlay-c96c7f88a88b3172a6483f2dd3e3b59bb91eee7525a20c7e7012f398c202c8d5-merged.mount: Deactivated successfully.
Oct  2 08:05:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:20.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:22 np0005466031 podman[235692]: 2025-10-02 12:05:22.135994689 +0000 UTC m=+2.416730368 container cleanup a75faed7829a5c48a1f47b8d72396c5433dc938409f0f06c5248e11f42bb748a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:05:22 np0005466031 podman[235692]: nova_compute
Oct  2 08:05:22 np0005466031 podman[235773]: nova_compute
Oct  2 08:05:22 np0005466031 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct  2 08:05:22 np0005466031 systemd[1]: Stopped nova_compute container.
Oct  2 08:05:22 np0005466031 systemd[1]: Starting nova_compute container...
Oct  2 08:05:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:22.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:22 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:05:22 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c7f88a88b3172a6483f2dd3e3b59bb91eee7525a20c7e7012f398c202c8d5/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:22 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c7f88a88b3172a6483f2dd3e3b59bb91eee7525a20c7e7012f398c202c8d5/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:22 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c7f88a88b3172a6483f2dd3e3b59bb91eee7525a20c7e7012f398c202c8d5/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:22 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c7f88a88b3172a6483f2dd3e3b59bb91eee7525a20c7e7012f398c202c8d5/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:22 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c7f88a88b3172a6483f2dd3e3b59bb91eee7525a20c7e7012f398c202c8d5/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:22 np0005466031 podman[235787]: 2025-10-02 12:05:22.335867744 +0000 UTC m=+0.100415802 container init a75faed7829a5c48a1f47b8d72396c5433dc938409f0f06c5248e11f42bb748a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Oct  2 08:05:22 np0005466031 podman[235787]: 2025-10-02 12:05:22.346869923 +0000 UTC m=+0.111417971 container start a75faed7829a5c48a1f47b8d72396c5433dc938409f0f06c5248e11f42bb748a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251001)
Oct  2 08:05:22 np0005466031 podman[235787]: nova_compute
Oct  2 08:05:22 np0005466031 nova_compute[235803]: + sudo -E kolla_set_configs
Oct  2 08:05:22 np0005466031 systemd[1]: Started nova_compute container.
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Validating config file
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Copying service configuration files
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Deleting /etc/ceph
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Creating directory /etc/ceph
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /etc/ceph
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Writing out command to execute
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:22 np0005466031 nova_compute[235803]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 08:05:22 np0005466031 nova_compute[235803]: ++ cat /run_command
Oct  2 08:05:22 np0005466031 nova_compute[235803]: + CMD=nova-compute
Oct  2 08:05:22 np0005466031 nova_compute[235803]: + ARGS=
Oct  2 08:05:22 np0005466031 nova_compute[235803]: + sudo kolla_copy_cacerts
Oct  2 08:05:22 np0005466031 nova_compute[235803]: + [[ ! -n '' ]]
Oct  2 08:05:22 np0005466031 nova_compute[235803]: + . kolla_extend_start
Oct  2 08:05:22 np0005466031 nova_compute[235803]: + echo 'Running command: '\''nova-compute'\'''
Oct  2 08:05:22 np0005466031 nova_compute[235803]: Running command: 'nova-compute'
Oct  2 08:05:22 np0005466031 nova_compute[235803]: + umask 0022
Oct  2 08:05:22 np0005466031 nova_compute[235803]: + exec nova-compute
Oct  2 08:05:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:22.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:24.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:24 np0005466031 nova_compute[235803]: 2025-10-02 12:05:24.449 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 08:05:24 np0005466031 nova_compute[235803]: 2025-10-02 12:05:24.449 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 08:05:24 np0005466031 nova_compute[235803]: 2025-10-02 12:05:24.450 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 08:05:24 np0005466031 nova_compute[235803]: 2025-10-02 12:05:24.450 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  2 08:05:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:24.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:24 np0005466031 nova_compute[235803]: 2025-10-02 12:05:24.604 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:24 np0005466031 nova_compute[235803]: 2025-10-02 12:05:24.626 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.026 2 INFO nova.virt.driver [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.140 2 INFO nova.compute.provider_config [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.157 2 DEBUG oslo_concurrency.lockutils [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.157 2 DEBUG oslo_concurrency.lockutils [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.158 2 DEBUG oslo_concurrency.lockutils [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.158 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.158 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.158 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.159 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.159 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.159 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.159 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.159 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.160 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.160 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.160 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.160 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.160 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.161 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.161 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.161 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.161 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.161 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.162 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.162 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.162 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.162 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.162 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.162 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.163 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.163 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.163 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.163 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.163 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.164 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.164 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.164 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.164 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.164 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.165 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.165 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.165 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.165 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.165 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.166 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.166 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.166 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.166 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.166 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.167 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.167 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.167 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.167 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.167 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.168 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.168 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.168 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.168 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.168 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.169 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.169 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.169 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.169 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.169 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.170 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.170 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.170 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.170 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.170 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.170 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.171 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.171 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.171 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.171 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.171 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.172 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.172 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.172 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.172 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.173 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.173 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.173 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.173 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.174 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.174 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.174 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.174 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.175 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.175 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.175 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.175 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.175 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.176 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.176 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.176 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.176 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.176 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.177 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.177 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.177 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.177 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.177 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.178 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.178 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.178 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.178 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.178 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.179 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.179 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.179 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.179 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.180 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.180 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.180 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.180 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.181 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.181 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.181 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.181 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.182 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.182 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.182 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.182 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.182 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.183 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.183 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.183 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.183 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.184 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.184 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.184 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.184 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.185 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.185 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.185 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.185 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.185 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.186 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.186 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.186 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.186 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.187 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.187 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.187 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.187 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.188 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.188 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.188 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.188 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.189 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.189 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.189 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.189 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.189 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.190 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.190 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.190 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.190 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.191 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.191 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.191 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.191 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.192 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.192 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.192 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.192 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.193 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.193 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.193 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.193 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.194 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.194 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.194 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.194 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.195 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.195 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.195 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.195 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.195 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.196 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.196 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.196 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.196 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.197 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.197 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.197 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.198 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.198 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.198 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.198 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.199 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.199 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.199 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.199 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.200 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.200 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.200 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.200 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.200 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.201 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.201 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.201 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.201 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.201 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.202 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.202 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.202 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.202 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.202 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.203 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.203 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.203 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.203 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.203 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.204 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.204 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.204 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.204 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.204 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.204 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.205 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.205 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.205 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.205 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.205 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.206 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.206 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.206 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.206 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.206 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.207 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.207 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.207 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.207 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.207 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.208 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.208 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.208 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.208 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.208 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.209 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.209 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.209 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.209 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.210 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.210 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.210 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.210 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.210 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.211 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.211 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.211 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.211 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.211 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.211 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.212 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.212 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.212 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.212 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.212 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.213 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.213 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.213 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.213 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.213 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.213 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.214 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.214 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.214 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.214 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.214 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.215 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.215 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.215 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.215 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.215 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.216 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.216 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.216 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.216 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.217 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.217 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.217 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.217 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.217 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.217 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.218 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.218 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.218 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.218 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.218 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.219 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.219 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.219 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.219 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.219 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.220 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.220 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.220 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.220 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.221 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.221 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.221 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.221 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.222 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.222 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.222 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.222 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.223 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.223 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.223 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.223 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.224 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.224 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.224 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.224 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.225 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.225 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.225 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.225 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.225 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.226 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.226 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.226 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.226 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.226 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.227 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.227 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.227 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.227 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.227 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.228 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.228 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.228 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.228 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.228 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.229 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.229 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.229 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.229 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.230 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.230 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.230 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.230 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.230 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.231 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.231 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.231 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.231 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.232 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.232 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.232 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.232 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.232 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.232 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.233 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.233 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.233 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.233 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.234 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.234 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.234 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.235 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.235 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.235 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.235 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.236 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.236 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.236 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.236 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.237 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.237 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.237 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.237 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.238 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.238 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.238 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.238 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.239 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.239 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.239 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.239 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.240 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.240 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.240 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.240 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.241 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.241 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.241 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.241 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.242 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.242 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.242 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.242 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.243 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.243 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.243 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.243 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.243 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.244 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.244 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.244 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.244 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.244 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.244 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.245 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.245 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.245 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.245 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.245 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.246 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.246 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.246 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.246 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.247 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.247 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.247 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.247 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.247 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.248 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.248 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.248 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.248 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.248 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.249 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.249 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.249 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.249 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.249 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.250 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.250 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.250 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.250 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.250 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.251 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.251 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.251 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.252 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.252 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.252 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.252 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.252 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.253 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.253 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.253 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.253 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.253 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.254 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.254 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.254 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.254 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.255 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.255 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.255 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.255 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.256 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.256 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.256 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.256 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.257 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.257 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.257 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.257 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.258 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.258 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.258 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.258 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.259 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.259 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.259 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.259 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.259 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.260 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.260 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.260 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.260 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.261 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.261 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.261 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.261 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.261 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.262 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.262 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.262 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.262 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.262 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.263 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.263 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.263 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.263 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.264 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.264 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.264 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.264 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.265 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.265 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.265 2 WARNING oslo_config.cfg [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  2 08:05:25 np0005466031 nova_compute[235803]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  2 08:05:25 np0005466031 nova_compute[235803]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  2 08:05:25 np0005466031 nova_compute[235803]: and ``live_migration_inbound_addr`` respectively.
Oct  2 08:05:25 np0005466031 nova_compute[235803]: ).  Its value may be silently ignored in the future.#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.265 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.266 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.266 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.266 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.266 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.267 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.267 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.267 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.267 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.267 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.268 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.268 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.268 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.268 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.268 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.269 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.269 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.269 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.269 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.rbd_secret_uuid        = 20fdc58c-b037-5094-a8ef-d490aa7c36f3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.269 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.270 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.270 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.270 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.270 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.270 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.271 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.271 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.271 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.271 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.271 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.272 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.272 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.272 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.272 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.272 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.273 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.273 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.273 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.273 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.273 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.274 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.274 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.274 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.274 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.274 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.274 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.275 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.275 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.275 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.275 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.275 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.276 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.276 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.276 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.276 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.277 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.277 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.277 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.277 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.277 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.277 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.278 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.278 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.278 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.278 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.279 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.279 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.279 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.279 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.279 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.280 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.280 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.280 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.280 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.280 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.280 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.281 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.281 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.281 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.281 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.282 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.282 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.282 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.282 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.282 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.283 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.283 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.283 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.283 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.283 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.284 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.284 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.284 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.284 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.284 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.285 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.285 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.285 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.285 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.285 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.285 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.286 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.286 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.286 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.286 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.286 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.287 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.287 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.287 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.287 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.287 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.288 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.288 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.288 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.288 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.288 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.289 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.289 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.289 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.289 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.289 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.289 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.290 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.290 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.290 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.290 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.290 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.291 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.291 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.291 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.291 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.291 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.292 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.292 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.292 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.292 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.292 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.293 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.293 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.293 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.293 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.294 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.294 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.294 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.294 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.294 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.295 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.295 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.295 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.295 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.295 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.296 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.296 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.296 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.296 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.297 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.297 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.297 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.297 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.297 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.297 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.298 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.298 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.298 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.298 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.298 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.299 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.299 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.299 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.299 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.299 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.300 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.300 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.300 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.300 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.300 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.301 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.301 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.301 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.302 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.302 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.302 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.302 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.303 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.303 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.303 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.303 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.304 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.304 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.304 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.304 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.305 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.305 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.305 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.305 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.306 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.306 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.306 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.306 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.307 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.307 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.307 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.307 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.308 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.308 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.308 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.308 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.309 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.309 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.309 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.309 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.310 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.310 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.310 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.310 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.311 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.311 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.311 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.311 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.312 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.312 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.312 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.312 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.313 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.313 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.313 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.313 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.314 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.314 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.314 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.314 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.314 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.315 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.315 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.315 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.315 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.316 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.316 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.316 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.316 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.317 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.317 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.317 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.317 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.318 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.318 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.318 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.318 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.319 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.319 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.319 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.320 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.320 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.320 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.320 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.321 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.321 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.321 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.321 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.322 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.322 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.322 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.322 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.323 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.323 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.323 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.323 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.324 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.324 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.324 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.324 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.324 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.325 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.325 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.325 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.325 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.326 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.326 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.326 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.326 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.327 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.327 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.327 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.327 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.328 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.328 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.328 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.328 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.329 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.329 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.329 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.330 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.330 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.330 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.330 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.331 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.331 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.331 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.331 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.332 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.332 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.332 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.332 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.333 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.333 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.333 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.333 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.334 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.334 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.334 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.334 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.334 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.335 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.335 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.335 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.335 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.336 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.336 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.336 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.336 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.337 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.337 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.337 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.337 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.338 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.338 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.338 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.338 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.338 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.339 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.339 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.339 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.339 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.340 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.340 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.340 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.340 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.341 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.341 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.341 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.341 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.342 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.342 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.342 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.343 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.343 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.343 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.343 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.344 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.344 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.344 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.344 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.345 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.345 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.345 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.345 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.345 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.346 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.346 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.346 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.346 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.347 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.347 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.347 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.347 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.348 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.348 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.348 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.348 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.349 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.349 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.349 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.349 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.349 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.350 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.350 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.350 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.350 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.351 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.351 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.351 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.351 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.352 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.352 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.352 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.352 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.353 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.353 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.353 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.353 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.354 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.354 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.354 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.354 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.355 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.355 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.355 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.355 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.356 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.356 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.356 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.357 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.357 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.357 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.357 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.358 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.358 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.358 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.358 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.359 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.359 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.359 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.360 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.360 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.360 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.360 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.361 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.361 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.361 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.361 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.362 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.362 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.362 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.362 2 DEBUG oslo_service.service [None req-596a5b2b-dde1-4305-bf35-c8202d212864 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.364 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.378 2 INFO nova.virt.node [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Determined node identity f694d536-1dcd-4bb3-8516-534a40cdf6d7 from /var/lib/nova/compute_id
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.380 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.381 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.381 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.381 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.399 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd595617b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.402 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd595617b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.403 2 INFO nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Connection event '1' reason 'None'
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.408 2 INFO nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Libvirt host capabilities <capabilities>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <host>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <uuid>91df6c8e-6fe2-49d2-9991-360b14608f11</uuid>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <cpu>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <arch>x86_64</arch>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model>EPYC-Rome-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <vendor>AMD</vendor>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <microcode version='16777317'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <signature family='23' model='49' stepping='0'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='x2apic'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='tsc-deadline'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='osxsave'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='hypervisor'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='tsc_adjust'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='spec-ctrl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='stibp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='arch-capabilities'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='cmp_legacy'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='topoext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='virt-ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='lbrv'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='tsc-scale'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='vmcb-clean'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='pause-filter'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='pfthreshold'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='svme-addr-chk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='rdctl-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='skip-l1dfl-vmentry'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='mds-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature name='pschange-mc-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <pages unit='KiB' size='4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <pages unit='KiB' size='2048'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <pages unit='KiB' size='1048576'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </cpu>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <power_management>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <suspend_mem/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </power_management>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <iommu support='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <migration_features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <live/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <uri_transports>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <uri_transport>tcp</uri_transport>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <uri_transport>rdma</uri_transport>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </uri_transports>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </migration_features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <topology>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <cells num='1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <cell id='0'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:          <memory unit='KiB'>7864104</memory>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:          <pages unit='KiB' size='4'>1966026</pages>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:          <pages unit='KiB' size='2048'>0</pages>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:          <distances>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:            <sibling id='0' value='10'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:          </distances>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:          <cpus num='8'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:          </cpus>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        </cell>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </cells>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </topology>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <cache>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </cache>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <secmodel>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model>selinux</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <doi>0</doi>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </secmodel>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <secmodel>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model>dac</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <doi>0</doi>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </secmodel>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </host>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <guest>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <os_type>hvm</os_type>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <arch name='i686'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <wordsize>32</wordsize>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <domain type='qemu'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <domain type='kvm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </arch>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <pae/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <nonpae/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <acpi default='on' toggle='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <apic default='on' toggle='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <cpuselection/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <deviceboot/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <disksnapshot default='on' toggle='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <externalSnapshot/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </guest>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <guest>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <os_type>hvm</os_type>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <arch name='x86_64'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <wordsize>64</wordsize>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <domain type='qemu'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <domain type='kvm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </arch>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <acpi default='on' toggle='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <apic default='on' toggle='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <cpuselection/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <deviceboot/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <disksnapshot default='on' toggle='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <externalSnapshot/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </guest>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 
Oct  2 08:05:25 np0005466031 nova_compute[235803]: </capabilities>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.415 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.417 2 DEBUG nova.virt.libvirt.volume.mount [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.419 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  2 08:05:25 np0005466031 nova_compute[235803]: <domainCapabilities>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <domain>kvm</domain>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <arch>i686</arch>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <vcpu max='240'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <iothreads supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <os supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <enum name='firmware'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <loader supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>rom</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>pflash</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='readonly'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>yes</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>no</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='secure'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>no</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </loader>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <cpu>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>on</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>off</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='maximumMigratable'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>on</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>off</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <vendor>AMD</vendor>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='succor'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='custom' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cooperlake'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amd-psfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='auto-ibrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='stibp-always-on'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amd-psfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='auto-ibrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='stibp-always-on'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amd-psfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='stibp-always-on'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='GraniteRapids'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='prefetchiti'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='prefetchiti'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10-128'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10-256'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10-512'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='prefetchiti'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='KnightsMill'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512er'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512pf'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512er'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512pf'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tbm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tbm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SierraForest'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cmpccxadd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cmpccxadd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='athlon'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='athlon-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='core2duo'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='core2duo-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='coreduo'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='coreduo-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='n270'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='n270-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='phenom'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='phenom-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <memoryBacking supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <enum name='sourceType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>file</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>anonymous</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>memfd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </memoryBacking>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <disk supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='diskDevice'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>disk</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>cdrom</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>floppy</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>lun</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='bus'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>ide</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>fdc</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>scsi</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>usb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>sata</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-non-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <graphics supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vnc</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>egl-headless</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>dbus</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </graphics>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <video supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='modelType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vga</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>cirrus</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>none</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>bochs</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>ramfb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <hostdev supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='mode'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>subsystem</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='startupPolicy'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>default</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>mandatory</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>requisite</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>optional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='subsysType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>usb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>pci</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>scsi</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='capsType'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='pciBackend'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </hostdev>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <rng supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-non-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendModel'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>random</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>egd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>builtin</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <filesystem supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='driverType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>path</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>handle</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtiofs</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </filesystem>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <tpm supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>tpm-tis</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>tpm-crb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendModel'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>emulator</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>external</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendVersion'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>2.0</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </tpm>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <redirdev supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='bus'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>usb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </redirdev>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <channel supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>pty</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>unix</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </channel>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <crypto supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>qemu</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendModel'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>builtin</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </crypto>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <interface supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>default</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>passt</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <panic supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>isa</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>hyperv</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </panic>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <gic supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <genid supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <backup supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <async-teardown supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <ps2 supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <sev supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <sgx supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <hyperv supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='features'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>relaxed</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vapic</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>spinlocks</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vpindex</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>runtime</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>synic</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>stimer</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>reset</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vendor_id</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>frequencies</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>reenlightenment</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>tlbflush</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>ipi</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>avic</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>emsr_bitmap</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>xmm_input</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </hyperv>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <launchSecurity supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: </domainCapabilities>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.430 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  2 08:05:25 np0005466031 nova_compute[235803]: <domainCapabilities>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <domain>kvm</domain>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <arch>i686</arch>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <vcpu max='4096'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <iothreads supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <os supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <enum name='firmware'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <loader supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>rom</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>pflash</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='readonly'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>yes</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>no</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='secure'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>no</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </loader>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <cpu>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>on</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>off</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='maximumMigratable'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>on</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>off</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <vendor>AMD</vendor>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='succor'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='custom' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cooperlake'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amd-psfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='auto-ibrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='stibp-always-on'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amd-psfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='auto-ibrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='stibp-always-on'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amd-psfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='stibp-always-on'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='GraniteRapids'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='prefetchiti'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='prefetchiti'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10-128'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10-256'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10-512'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='prefetchiti'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='KnightsMill'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512er'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512pf'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512er'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512pf'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tbm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tbm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SierraForest'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cmpccxadd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cmpccxadd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='athlon'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='athlon-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='core2duo'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='core2duo-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='coreduo'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='coreduo-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='n270'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='n270-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='phenom'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='phenom-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <memoryBacking supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <enum name='sourceType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>file</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>anonymous</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>memfd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </memoryBacking>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <disk supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='diskDevice'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>disk</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>cdrom</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>floppy</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>lun</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='bus'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>fdc</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>scsi</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>usb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>sata</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-non-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <graphics supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vnc</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>egl-headless</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>dbus</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </graphics>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <video supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='modelType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vga</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>cirrus</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>none</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>bochs</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>ramfb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <hostdev supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='mode'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>subsystem</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='startupPolicy'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>default</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>mandatory</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>requisite</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>optional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='subsysType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>usb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>pci</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>scsi</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='capsType'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='pciBackend'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </hostdev>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <rng supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-non-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendModel'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>random</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>egd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>builtin</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <filesystem supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='driverType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>path</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>handle</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtiofs</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </filesystem>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <tpm supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>tpm-tis</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>tpm-crb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendModel'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>emulator</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>external</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendVersion'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>2.0</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </tpm>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <redirdev supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='bus'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>usb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </redirdev>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <channel supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>pty</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>unix</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </channel>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <crypto supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>qemu</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendModel'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>builtin</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </crypto>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <interface supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>default</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>passt</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <panic supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>isa</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>hyperv</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </panic>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <gic supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <genid supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <backup supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <async-teardown supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <ps2 supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <sev supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <sgx supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <hyperv supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='features'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>relaxed</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vapic</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>spinlocks</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vpindex</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>runtime</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>synic</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>stimer</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>reset</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vendor_id</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>frequencies</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>reenlightenment</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>tlbflush</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>ipi</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>avic</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>emsr_bitmap</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>xmm_input</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </hyperv>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <launchSecurity supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: </domainCapabilities>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.459 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.464 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  2 08:05:25 np0005466031 nova_compute[235803]: <domainCapabilities>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <domain>kvm</domain>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <arch>x86_64</arch>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <vcpu max='240'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <iothreads supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <os supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <enum name='firmware'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <loader supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>rom</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>pflash</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='readonly'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>yes</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>no</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='secure'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>no</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </loader>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <cpu>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>on</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>off</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='maximumMigratable'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>on</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>off</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <vendor>AMD</vendor>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='succor'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='custom' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cooperlake'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amd-psfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='auto-ibrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='stibp-always-on'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amd-psfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='auto-ibrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='stibp-always-on'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amd-psfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='stibp-always-on'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='GraniteRapids'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='prefetchiti'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='prefetchiti'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10-128'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10-256'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10-512'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='prefetchiti'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='KnightsMill'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512er'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512pf'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512er'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512pf'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tbm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tbm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SierraForest'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cmpccxadd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cmpccxadd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='athlon'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='athlon-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='core2duo'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='core2duo-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='coreduo'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='coreduo-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='n270'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='n270-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='phenom'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='phenom-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <memoryBacking supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <enum name='sourceType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>file</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>anonymous</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>memfd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </memoryBacking>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <disk supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='diskDevice'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>disk</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>cdrom</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>floppy</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>lun</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='bus'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>ide</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>fdc</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>scsi</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>usb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>sata</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-non-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <graphics supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vnc</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>egl-headless</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>dbus</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </graphics>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <video supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='modelType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vga</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>cirrus</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>none</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>bochs</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>ramfb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <hostdev supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='mode'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>subsystem</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='startupPolicy'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>default</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>mandatory</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>requisite</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>optional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='subsysType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>usb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>pci</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>scsi</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='capsType'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='pciBackend'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </hostdev>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <rng supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-non-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendModel'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>random</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>egd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>builtin</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <filesystem supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='driverType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>path</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>handle</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtiofs</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </filesystem>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <tpm supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>tpm-tis</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>tpm-crb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendModel'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>emulator</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>external</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendVersion'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>2.0</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </tpm>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <redirdev supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='bus'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>usb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </redirdev>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <channel supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>pty</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>unix</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </channel>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <crypto supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>qemu</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendModel'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>builtin</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </crypto>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <interface supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>default</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>passt</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <panic supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>isa</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>hyperv</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </panic>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <gic supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <genid supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <backup supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <async-teardown supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <ps2 supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <sev supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <sgx supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <hyperv supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='features'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>relaxed</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vapic</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>spinlocks</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vpindex</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>runtime</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>synic</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>stimer</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>reset</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vendor_id</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>frequencies</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>reenlightenment</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>tlbflush</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>ipi</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>avic</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>emsr_bitmap</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>xmm_input</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </hyperv>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <launchSecurity supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: </domainCapabilities>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.524 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  2 08:05:25 np0005466031 nova_compute[235803]: <domainCapabilities>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <domain>kvm</domain>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <arch>x86_64</arch>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <vcpu max='4096'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <iothreads supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <os supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <enum name='firmware'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>efi</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <loader supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>rom</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>pflash</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='readonly'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>yes</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>no</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='secure'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>yes</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>no</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </loader>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <cpu>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>on</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>off</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='maximumMigratable'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>on</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>off</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <vendor>AMD</vendor>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='succor'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <mode name='custom' supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cooperlake'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Denverton-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amd-psfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='auto-ibrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='stibp-always-on'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amd-psfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='auto-ibrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='stibp-always-on'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amd-psfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='stibp-always-on'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='EPYC-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='GraniteRapids'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='prefetchiti'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='prefetchiti'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10-128'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10-256'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx10-512'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='prefetchiti'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Haswell-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='KnightsMill'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512er'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512pf'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512er'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512pf'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tbm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fma4'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tbm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xop'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='amx-tile'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-bf16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-fp16'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bitalg'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrc'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fzrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='la57'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='taa-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xfd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SierraForest'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cmpccxadd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ifma'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cmpccxadd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fbsdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='fsrs'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ibrs-all'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mcdt-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pbrsb-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='psdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='serialize'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vaes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='hle'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='rtm'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512bw'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512cd'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512dq'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512f'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='avx512vl'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='invpcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pcid'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='pku'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='mpx'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='core-capability'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='split-lock-detect'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='cldemote'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='erms'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='gfni'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdir64b'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='movdiri'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='xsaves'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='athlon'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='athlon-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='core2duo'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='core2duo-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='coreduo'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='coreduo-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='n270'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='n270-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='ss'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='phenom'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <blockers model='phenom-v1'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnow'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <feature name='3dnowext'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </blockers>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </mode>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <memoryBacking supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <enum name='sourceType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>file</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>anonymous</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <value>memfd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </memoryBacking>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <disk supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='diskDevice'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>disk</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>cdrom</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>floppy</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>lun</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='bus'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>fdc</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>scsi</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>usb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>sata</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-non-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <graphics supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vnc</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>egl-headless</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>dbus</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </graphics>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <video supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='modelType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vga</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>cirrus</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>none</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>bochs</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>ramfb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <hostdev supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='mode'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>subsystem</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='startupPolicy'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>default</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>mandatory</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>requisite</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>optional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='subsysType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>usb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>pci</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>scsi</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='capsType'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='pciBackend'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </hostdev>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <rng supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtio-non-transitional</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendModel'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>random</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>egd</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>builtin</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <filesystem supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='driverType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>path</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>handle</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>virtiofs</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </filesystem>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <tpm supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>tpm-tis</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>tpm-crb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendModel'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>emulator</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>external</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendVersion'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>2.0</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </tpm>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <redirdev supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='bus'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>usb</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </redirdev>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <channel supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>pty</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>unix</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </channel>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <crypto supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='type'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>qemu</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendModel'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>builtin</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </crypto>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <interface supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='backendType'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>default</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>passt</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <panic supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='model'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>isa</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>hyperv</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </panic>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <gic supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <genid supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <backup supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <async-teardown supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <ps2 supported='yes'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <sev supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <sgx supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <hyperv supported='yes'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      <enum name='features'>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>relaxed</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vapic</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>spinlocks</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vpindex</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>runtime</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>synic</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>stimer</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>reset</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>vendor_id</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>frequencies</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>reenlightenment</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>tlbflush</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>ipi</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>avic</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>emsr_bitmap</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:        <value>xmm_input</value>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:      </enum>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    </hyperv>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:    <launchSecurity supported='no'/>
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: </domainCapabilities>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.578 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.579 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.579 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.579 2 INFO nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Secure Boot support detected#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.581 2 INFO nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.582 2 INFO nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.592 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] cpu compare xml: <cpu match="exact">
Oct  2 08:05:25 np0005466031 nova_compute[235803]:  <model>Nehalem</model>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: </cpu>
Oct  2 08:05:25 np0005466031 nova_compute[235803]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.595 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.614 2 INFO nova.virt.node [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Determined node identity f694d536-1dcd-4bb3-8516-534a40cdf6d7 from /var/lib/nova/compute_id#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.630 2 WARNING nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Compute nodes ['f694d536-1dcd-4bb3-8516-534a40cdf6d7'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.657 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.672 2 WARNING nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.673 2 DEBUG oslo_concurrency.lockutils [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.673 2 DEBUG oslo_concurrency.lockutils [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.673 2 DEBUG oslo_concurrency.lockutils [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.674 2 DEBUG nova.compute.resource_tracker [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:05:25 np0005466031 nova_compute[235803]: 2025-10-02 12:05:25.675 2 DEBUG oslo_concurrency.processutils [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:05:25.812 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:05:25.813 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:05:25.813 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:05:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2216606571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:05:26 np0005466031 nova_compute[235803]: 2025-10-02 12:05:26.097 2 DEBUG oslo_concurrency.processutils [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:26 np0005466031 systemd[1]: Starting libvirt nodedev daemon...
Oct  2 08:05:26 np0005466031 systemd[1]: Started libvirt nodedev daemon.
Oct  2 08:05:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:26.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:26 np0005466031 nova_compute[235803]: 2025-10-02 12:05:26.420 2 WARNING nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:05:26 np0005466031 nova_compute[235803]: 2025-10-02 12:05:26.421 2 DEBUG nova.compute.resource_tracker [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5247MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:05:26 np0005466031 nova_compute[235803]: 2025-10-02 12:05:26.422 2 DEBUG oslo_concurrency.lockutils [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:26 np0005466031 nova_compute[235803]: 2025-10-02 12:05:26.422 2 DEBUG oslo_concurrency.lockutils [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:26 np0005466031 nova_compute[235803]: 2025-10-02 12:05:26.440 2 WARNING nova.compute.resource_tracker [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] No compute node record for compute-2.ctlplane.example.com:f694d536-1dcd-4bb3-8516-534a40cdf6d7: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host f694d536-1dcd-4bb3-8516-534a40cdf6d7 could not be found.#033[00m
Oct  2 08:05:26 np0005466031 nova_compute[235803]: 2025-10-02 12:05:26.455 2 INFO nova.compute.resource_tracker [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Compute node record created for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com with uuid: f694d536-1dcd-4bb3-8516-534a40cdf6d7#033[00m
Oct  2 08:05:26 np0005466031 nova_compute[235803]: 2025-10-02 12:05:26.504 2 DEBUG nova.compute.resource_tracker [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:05:26 np0005466031 nova_compute[235803]: 2025-10-02 12:05:26.504 2 DEBUG nova.compute.resource_tracker [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:05:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:26.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:26 np0005466031 nova_compute[235803]: 2025-10-02 12:05:26.927 2 INFO nova.scheduler.client.report [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [req-a0b7f4a9-7aa7-403e-a13a-4d98c595ec9b] Created resource provider record via placement API for resource provider with UUID f694d536-1dcd-4bb3-8516-534a40cdf6d7 and name compute-2.ctlplane.example.com.#033[00m
Oct  2 08:05:27 np0005466031 nova_compute[235803]: 2025-10-02 12:05:27.315 2 DEBUG oslo_concurrency.processutils [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:05:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3916071039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:05:27 np0005466031 nova_compute[235803]: 2025-10-02 12:05:27.762 2 DEBUG oslo_concurrency.processutils [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:27 np0005466031 nova_compute[235803]: 2025-10-02 12:05:27.770 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct  2 08:05:27 np0005466031 nova_compute[235803]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct  2 08:05:27 np0005466031 nova_compute[235803]: 2025-10-02 12:05:27.771 2 INFO nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct  2 08:05:27 np0005466031 nova_compute[235803]: 2025-10-02 12:05:27.772 2 DEBUG nova.compute.provider_tree [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:05:27 np0005466031 nova_compute[235803]: 2025-10-02 12:05:27.772 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:05:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:27 np0005466031 nova_compute[235803]: 2025-10-02 12:05:27.775 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Libvirt baseline CPU <cpu>
Oct  2 08:05:27 np0005466031 nova_compute[235803]:  <arch>x86_64</arch>
Oct  2 08:05:27 np0005466031 nova_compute[235803]:  <model>Nehalem</model>
Oct  2 08:05:27 np0005466031 nova_compute[235803]:  <vendor>AMD</vendor>
Oct  2 08:05:27 np0005466031 nova_compute[235803]:  <topology sockets="8" cores="1" threads="1"/>
Oct  2 08:05:27 np0005466031 nova_compute[235803]: </cpu>
Oct  2 08:05:27 np0005466031 nova_compute[235803]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537#033[00m
Oct  2 08:05:27 np0005466031 nova_compute[235803]: 2025-10-02 12:05:27.876 2 DEBUG nova.scheduler.client.report [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Updated inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  2 08:05:27 np0005466031 nova_compute[235803]: 2025-10-02 12:05:27.876 2 DEBUG nova.compute.provider_tree [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Updating resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  2 08:05:27 np0005466031 nova_compute[235803]: 2025-10-02 12:05:27.877 2 DEBUG nova.compute.provider_tree [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:05:27 np0005466031 nova_compute[235803]: 2025-10-02 12:05:27.976 2 DEBUG nova.compute.provider_tree [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Updating resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  2 08:05:28 np0005466031 nova_compute[235803]: 2025-10-02 12:05:27.999 2 DEBUG nova.compute.resource_tracker [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:05:28 np0005466031 nova_compute[235803]: 2025-10-02 12:05:28.000 2 DEBUG oslo_concurrency.lockutils [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:28 np0005466031 nova_compute[235803]: 2025-10-02 12:05:28.000 2 DEBUG nova.service [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct  2 08:05:28 np0005466031 python3.9[236191]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  2 08:05:28 np0005466031 nova_compute[235803]: 2025-10-02 12:05:28.090 2 DEBUG nova.service [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct  2 08:05:28 np0005466031 nova_compute[235803]: 2025-10-02 12:05:28.091 2 DEBUG nova.servicegroup.drivers.db [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] DB_Driver: join new ServiceGroup member compute-2.ctlplane.example.com to the compute group, service = <Service: host=compute-2.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct  2 08:05:28 np0005466031 systemd[1]: Started libpod-conmon-4b268cbe8ba940b61f4374252557cdf92d016917c12c8d89ba58984959f539a3.scope.
Oct  2 08:05:28 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:05:28 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8aeb1c8ae39bfbc95d6186c03aa6d5c5126c0ecfa05bf1a97176ee17a77a81f/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:28 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8aeb1c8ae39bfbc95d6186c03aa6d5c5126c0ecfa05bf1a97176ee17a77a81f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:28 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8aeb1c8ae39bfbc95d6186c03aa6d5c5126c0ecfa05bf1a97176ee17a77a81f/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:28 np0005466031 podman[236219]: 2025-10-02 12:05:28.227714716 +0000 UTC m=+0.116187630 container init 4b268cbe8ba940b61f4374252557cdf92d016917c12c8d89ba58984959f539a3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:05:28 np0005466031 podman[236219]: 2025-10-02 12:05:28.23888067 +0000 UTC m=+0.127353564 container start 4b268cbe8ba940b61f4374252557cdf92d016917c12c8d89ba58984959f539a3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct  2 08:05:28 np0005466031 python3.9[236191]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Applying nova statedir ownership
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct  2 08:05:28 np0005466031 nova_compute_init[236241]: INFO:nova_statedir:Nova statedir ownership complete
Oct  2 08:05:28 np0005466031 systemd[1]: libpod-4b268cbe8ba940b61f4374252557cdf92d016917c12c8d89ba58984959f539a3.scope: Deactivated successfully.
Oct  2 08:05:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:28.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:28 np0005466031 podman[236256]: 2025-10-02 12:05:28.333003979 +0000 UTC m=+0.026513390 container died 4b268cbe8ba940b61f4374252557cdf92d016917c12c8d89ba58984959f539a3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:05:28 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b268cbe8ba940b61f4374252557cdf92d016917c12c8d89ba58984959f539a3-userdata-shm.mount: Deactivated successfully.
Oct  2 08:05:28 np0005466031 systemd[1]: var-lib-containers-storage-overlay-b8aeb1c8ae39bfbc95d6186c03aa6d5c5126c0ecfa05bf1a97176ee17a77a81f-merged.mount: Deactivated successfully.
Oct  2 08:05:28 np0005466031 podman[236256]: 2025-10-02 12:05:28.378257271 +0000 UTC m=+0.071766662 container cleanup 4b268cbe8ba940b61f4374252557cdf92d016917c12c8d89ba58984959f539a3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm)
Oct  2 08:05:28 np0005466031 systemd[1]: libpod-conmon-4b268cbe8ba940b61f4374252557cdf92d016917c12c8d89ba58984959f539a3.scope: Deactivated successfully.
Oct  2 08:05:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:05:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:28.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:05:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:05:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:05:29 np0005466031 systemd[1]: session-50.scope: Deactivated successfully.
Oct  2 08:05:29 np0005466031 systemd[1]: session-50.scope: Consumed 2min 46.229s CPU time.
Oct  2 08:05:29 np0005466031 systemd-logind[786]: Session 50 logged out. Waiting for processes to exit.
Oct  2 08:05:29 np0005466031 systemd-logind[786]: Removed session 50.
Oct  2 08:05:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 08:05:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:30.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 08:05:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:30.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:32.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:32.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:32 np0005466031 podman[236311]: 2025-10-02 12:05:32.713425366 +0000 UTC m=+0.135255653 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Oct  2 08:05:32 np0005466031 podman[236336]: 2025-10-02 12:05:32.792965022 +0000 UTC m=+0.053552034 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:05:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:05:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:34.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:05:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:34.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:36.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:36.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:38.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:38.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:40.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:40.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:05:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:42.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:05:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:05:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:42.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:05:42 np0005466031 podman[236460]: 2025-10-02 12:05:42.651665959 +0000 UTC m=+0.078849188 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:05:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:05:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:44.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:05:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:44.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:44 np0005466031 podman[236483]: 2025-10-02 12:05:44.621731384 +0000 UTC m=+0.057321973 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:05:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:46.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:46.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:48.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:05:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:48.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:05:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:50.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:05:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:50.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:05:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:52.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:52.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:54.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:05:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:54.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:05:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:56.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:05:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:56.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:05:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:58.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:05:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:58.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:00.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:06:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:00.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:06:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:06:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:02.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:06:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:02.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:03 np0005466031 nova_compute[235803]: 2025-10-02 12:06:03.092 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:03 np0005466031 nova_compute[235803]: 2025-10-02 12:06:03.145 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:03 np0005466031 podman[236562]: 2025-10-02 12:06:03.642375459 +0000 UTC m=+0.072816337 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:06:03 np0005466031 podman[236563]: 2025-10-02 12:06:03.661286128 +0000 UTC m=+0.085574841 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:06:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:04.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:04.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:06.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:06.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:08.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:06:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3575 writes, 19K keys, 3575 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s#012Cumulative WAL: 3575 writes, 3575 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1394 writes, 6916 keys, 1394 commit groups, 1.0 writes per commit group, ingest: 14.84 MB, 0.02 MB/s#012Interval WAL: 1394 writes, 1394 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     98.6      0.21              0.06         9    0.024       0      0       0.0       0.0#012  L6      1/0    7.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    159.7    132.8      0.51              0.24         8    0.063     35K   4310       0.0       0.0#012 Sum      1/0    7.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    112.4    122.6      0.72              0.30        17    0.042     35K   4310       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.5    107.2    107.3      0.49              0.21        10    0.049     23K   3040       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    159.7    132.8      0.51              0.24         8    0.063     35K   4310       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     99.9      0.21              0.06         8    0.026       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.021, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.09 GB write, 0.07 MB/s write, 0.08 GB read, 0.07 MB/s read, 0.7 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5570d4fad1f0#2 capacity: 308.00 MB usage: 4.74 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 8.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(262,4.43 MB,1.43712%) FilterBlock(17,106.61 KB,0.0338022%) IndexBlock(17,212.30 KB,0.0673121%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 08:06:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:08.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:10.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:10.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:12.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:12.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:13 np0005466031 podman[236612]: 2025-10-02 12:06:13.623345349 +0000 UTC m=+0.058858870 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:06:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:14.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:14.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:15 np0005466031 podman[236633]: 2025-10-02 12:06:15.631851801 +0000 UTC m=+0.063890333 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid)
Oct  2 08:06:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:16.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:06:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:16.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:06:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:18.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:06:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:18.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:06:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:20.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:20.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:22.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:22.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:24.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:24 np0005466031 nova_compute[235803]: 2025-10-02 12:06:24.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:24 np0005466031 nova_compute[235803]: 2025-10-02 12:06:24.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:24 np0005466031 nova_compute[235803]: 2025-10-02 12:06:24.638 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:06:24 np0005466031 nova_compute[235803]: 2025-10-02 12:06:24.638 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:06:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:24.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:24 np0005466031 nova_compute[235803]: 2025-10-02 12:06:24.672 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:06:24 np0005466031 nova_compute[235803]: 2025-10-02 12:06:24.672 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:24 np0005466031 nova_compute[235803]: 2025-10-02 12:06:24.673 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:24 np0005466031 nova_compute[235803]: 2025-10-02 12:06:24.673 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:24 np0005466031 nova_compute[235803]: 2025-10-02 12:06:24.673 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:24 np0005466031 nova_compute[235803]: 2025-10-02 12:06:24.674 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:24 np0005466031 nova_compute[235803]: 2025-10-02 12:06:24.674 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:24 np0005466031 nova_compute[235803]: 2025-10-02 12:06:24.674 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:06:24 np0005466031 nova_compute[235803]: 2025-10-02 12:06:24.675 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:25 np0005466031 nova_compute[235803]: 2025-10-02 12:06:25.047 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:25 np0005466031 nova_compute[235803]: 2025-10-02 12:06:25.048 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:25 np0005466031 nova_compute[235803]: 2025-10-02 12:06:25.048 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:25 np0005466031 nova_compute[235803]: 2025-10-02 12:06:25.048 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:06:25 np0005466031 nova_compute[235803]: 2025-10-02 12:06:25.049 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:06:25 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3315944606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:06:25 np0005466031 nova_compute[235803]: 2025-10-02 12:06:25.469 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:25 np0005466031 nova_compute[235803]: 2025-10-02 12:06:25.598 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:06:25 np0005466031 nova_compute[235803]: 2025-10-02 12:06:25.599 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5306MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:06:25 np0005466031 nova_compute[235803]: 2025-10-02 12:06:25.599 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:25 np0005466031 nova_compute[235803]: 2025-10-02 12:06:25.599 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:25 np0005466031 nova_compute[235803]: 2025-10-02 12:06:25.706 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:06:25 np0005466031 nova_compute[235803]: 2025-10-02 12:06:25.707 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:06:25 np0005466031 nova_compute[235803]: 2025-10-02 12:06:25.795 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:06:25.813 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:06:25.813 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:06:25.813 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:06:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1921985533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:06:26 np0005466031 nova_compute[235803]: 2025-10-02 12:06:26.199 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:26 np0005466031 nova_compute[235803]: 2025-10-02 12:06:26.203 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:06:26 np0005466031 nova_compute[235803]: 2025-10-02 12:06:26.238 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:06:26 np0005466031 nova_compute[235803]: 2025-10-02 12:06:26.240 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:06:26 np0005466031 nova_compute[235803]: 2025-10-02 12:06:26.240 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:26.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:06:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:26.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:06:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:28.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:28.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:30.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:30.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:06:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:32.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:06:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:06:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:32.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:06:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:34.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:34 np0005466031 podman[236757]: 2025-10-02 12:06:34.627712763 +0000 UTC m=+0.059936970 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:06:34 np0005466031 podman[236758]: 2025-10-02 12:06:34.657250395 +0000 UTC m=+0.087189557 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:06:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:34.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:06:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:06:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:36.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:36.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:06:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:06:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:06:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:06:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:06:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:38.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:06:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:38.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:06:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:06:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:40.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:06:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:06:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:40.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:06:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:42.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:06:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5455 writes, 22K keys, 5455 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5455 writes, 916 syncs, 5.96 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 392 writes, 622 keys, 392 commit groups, 1.0 writes per commit group, ingest: 0.20 MB, 0.00 MB/s#012Interval WAL: 392 writes, 175 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559e305a74b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559e305a74b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  2 08:06:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:42.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:06:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:06:44 np0005466031 podman[237132]: 2025-10-02 12:06:44.38243023 +0000 UTC m=+0.061080072 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 08:06:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:44.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:44.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:46.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:46 np0005466031 podman[237180]: 2025-10-02 12:06:46.615449376 +0000 UTC m=+0.051261973 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:06:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:06:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:46.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:06:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:48.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:48.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:50.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:50.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:52.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:52.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:54.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:54.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:56.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:06:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:56.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:06:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:58.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:06:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:06:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:58.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:06:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:00.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:07:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:00.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:07:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:02.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:02.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:04.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:04.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:07:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1643989139' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:07:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:07:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1643989139' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:07:05 np0005466031 podman[237259]: 2025-10-02 12:07:05.645986347 +0000 UTC m=+0.067357022 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:07:05 np0005466031 podman[237260]: 2025-10-02 12:07:05.690710592 +0000 UTC m=+0.112932641 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 08:07:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:06.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:06.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:08.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:07:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:08.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:07:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:10.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:07:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:10.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:07:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:12.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:07:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:12.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:07:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:14.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:14 np0005466031 systemd[1]: Starting dnf makecache...
Oct  2 08:07:14 np0005466031 podman[237305]: 2025-10-02 12:07:14.635723533 +0000 UTC m=+0.062749811 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:07:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:14.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:14 np0005466031 dnf[237306]: Metadata cache refreshed recently.
Oct  2 08:07:14 np0005466031 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  2 08:07:14 np0005466031 systemd[1]: Finished dnf makecache.
Oct  2 08:07:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:16.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:16.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:17 np0005466031 podman[237328]: 2025-10-02 12:07:17.625575968 +0000 UTC m=+0.053610650 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:07:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:07:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:18.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:07:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:07:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:18.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:07:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:20.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:20.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:22.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:22.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:24.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:24.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:07:25.814 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:07:25.814 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:07:25.814 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.234 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.234 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.254 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.254 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.255 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.255 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.255 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.255 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.255 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.256 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.284 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.285 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.285 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.285 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.285 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:26.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:07:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1281353787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:07:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:26.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.750 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.900 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.901 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5307MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.901 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.902 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.982 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:07:26 np0005466031 nova_compute[235803]: 2025-10-02 12:07:26.983 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:07:27 np0005466031 nova_compute[235803]: 2025-10-02 12:07:27.012 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:07:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/787876386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:07:27 np0005466031 nova_compute[235803]: 2025-10-02 12:07:27.416 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:27 np0005466031 nova_compute[235803]: 2025-10-02 12:07:27.422 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:07:27 np0005466031 nova_compute[235803]: 2025-10-02 12:07:27.442 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:07:27 np0005466031 nova_compute[235803]: 2025-10-02 12:07:27.444 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:07:27 np0005466031 nova_compute[235803]: 2025-10-02 12:07:27.444 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:27 np0005466031 nova_compute[235803]: 2025-10-02 12:07:27.826 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:27 np0005466031 nova_compute[235803]: 2025-10-02 12:07:27.826 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:07:27 np0005466031 nova_compute[235803]: 2025-10-02 12:07:27.826 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:07:27 np0005466031 nova_compute[235803]: 2025-10-02 12:07:27.844 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:07:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:28.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:28.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:07:29.269 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:07:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:07:29.270 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:07:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:07:29.271 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:07:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:30.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:07:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:30.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:07:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:32.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:32.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:07:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:34.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:07:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:07:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:34.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:07:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:36.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:36 np0005466031 podman[237452]: 2025-10-02 12:07:36.632737081 +0000 UTC m=+0.062614067 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:07:36 np0005466031 podman[237453]: 2025-10-02 12:07:36.705618959 +0000 UTC m=+0.132415717 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:07:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:07:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:36.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:07:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:38.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:07:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:38.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:07:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:40.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:07:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:40.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:07:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:42.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:07:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:42.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:07:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:44.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.561494) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864561586, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2350, "num_deletes": 251, "total_data_size": 5949219, "memory_usage": 6010720, "flush_reason": "Manual Compaction"}
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864580182, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3896006, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17820, "largest_seqno": 20164, "table_properties": {"data_size": 3886472, "index_size": 6092, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18976, "raw_average_key_size": 20, "raw_value_size": 3867534, "raw_average_value_size": 4088, "num_data_blocks": 272, "num_entries": 946, "num_filter_entries": 946, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406635, "oldest_key_time": 1759406635, "file_creation_time": 1759406864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 18721 microseconds, and 8001 cpu microseconds.
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.580223) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3896006 bytes OK
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.580249) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.582538) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.582576) EVENT_LOG_v1 {"time_micros": 1759406864582569, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.582594) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 5938962, prev total WAL file size 5938962, number of live WAL files 2.
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.583942) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3804KB)], [36(7579KB)]
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864584024, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 11657517, "oldest_snapshot_seqno": -1}
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4439 keys, 9637273 bytes, temperature: kUnknown
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864627288, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 9637273, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9605043, "index_size": 20024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 110826, "raw_average_key_size": 24, "raw_value_size": 9522021, "raw_average_value_size": 2145, "num_data_blocks": 832, "num_entries": 4439, "num_filter_entries": 4439, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759406864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.627598) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9637273 bytes
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.628621) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 269.0 rd, 222.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 4958, records dropped: 519 output_compression: NoCompression
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.628644) EVENT_LOG_v1 {"time_micros": 1759406864628632, "job": 20, "event": "compaction_finished", "compaction_time_micros": 43337, "compaction_time_cpu_micros": 21120, "output_level": 6, "num_output_files": 1, "total_output_size": 9637273, "num_input_records": 4958, "num_output_records": 4439, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864629691, "job": 20, "event": "table_file_deletion", "file_number": 38}
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864631699, "job": 20, "event": "table_file_deletion", "file_number": 36}
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.583758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.631781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.631788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.631789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.631791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:07:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:07:44.631792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:07:44 np0005466031 podman[237647]: 2025-10-02 12:07:44.754746621 +0000 UTC m=+0.062075491 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:07:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:07:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:44.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:07:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:07:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:07:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:07:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:07:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:46.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:07:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:46.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:07:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:48.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:48 np0005466031 podman[237701]: 2025-10-02 12:07:48.629510701 +0000 UTC m=+0.060135485 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:07:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:07:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:48.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:07:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:50.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:50.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:52.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:52.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:07:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:07:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:54.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:54.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:56.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:56.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:58.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:07:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:07:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:58.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:07:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:08:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:00.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:08:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:02.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:02.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:04.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:04.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:08:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2705115424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:08:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:08:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2705115424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:08:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:06.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:08:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:06.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:08:07 np0005466031 podman[237830]: 2025-10-02 12:08:07.622444572 +0000 UTC m=+0.053547116 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:08:07 np0005466031 podman[237831]: 2025-10-02 12:08:07.687419708 +0000 UTC m=+0.105062994 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 08:08:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:08.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:08.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:10.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:10.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:12.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:12.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:08:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:14.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:08:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:14.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:15 np0005466031 podman[237880]: 2025-10-02 12:08:15.621932669 +0000 UTC m=+0.056785617 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 08:08:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:16.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:08:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:16.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:08:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:18.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:18.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:19 np0005466031 podman[237903]: 2025-10-02 12:08:19.629397209 +0000 UTC m=+0.055661796 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:08:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:20.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:08:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:20.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:08:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:22.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:22.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:24.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:24 np0005466031 nova_compute[235803]: 2025-10-02 12:08:24.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:24.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:25 np0005466031 nova_compute[235803]: 2025-10-02 12:08:25.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:25 np0005466031 nova_compute[235803]: 2025-10-02 12:08:25.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:25 np0005466031 nova_compute[235803]: 2025-10-02 12:08:25.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:08:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:08:25.815 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:08:25.815 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:08:25.816 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:26.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:26 np0005466031 nova_compute[235803]: 2025-10-02 12:08:26.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:26 np0005466031 nova_compute[235803]: 2025-10-02 12:08:26.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:26 np0005466031 nova_compute[235803]: 2025-10-02 12:08:26.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:26 np0005466031 nova_compute[235803]: 2025-10-02 12:08:26.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:26 np0005466031 nova_compute[235803]: 2025-10-02 12:08:26.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:08:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:26.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:08:27 np0005466031 nova_compute[235803]: 2025-10-02 12:08:27.123 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:27 np0005466031 nova_compute[235803]: 2025-10-02 12:08:27.123 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:27 np0005466031 nova_compute[235803]: 2025-10-02 12:08:27.124 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:27 np0005466031 nova_compute[235803]: 2025-10-02 12:08:27.124 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:08:27 np0005466031 nova_compute[235803]: 2025-10-02 12:08:27.124 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:08:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:08:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2064557662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:08:27 np0005466031 nova_compute[235803]: 2025-10-02 12:08:27.555 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:08:27 np0005466031 nova_compute[235803]: 2025-10-02 12:08:27.729 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:08:27 np0005466031 nova_compute[235803]: 2025-10-02 12:08:27.732 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5326MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:08:27 np0005466031 nova_compute[235803]: 2025-10-02 12:08:27.732 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:27 np0005466031 nova_compute[235803]: 2025-10-02 12:08:27.732 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:27 np0005466031 nova_compute[235803]: 2025-10-02 12:08:27.997 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:08:27 np0005466031 nova_compute[235803]: 2025-10-02 12:08:27.998 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:08:28 np0005466031 nova_compute[235803]: 2025-10-02 12:08:28.018 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:08:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:08:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3308308964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:08:28 np0005466031 nova_compute[235803]: 2025-10-02 12:08:28.460 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:08:28 np0005466031 nova_compute[235803]: 2025-10-02 12:08:28.468 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:08:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:28.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:28 np0005466031 nova_compute[235803]: 2025-10-02 12:08:28.489 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:08:28 np0005466031 nova_compute[235803]: 2025-10-02 12:08:28.491 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:08:28 np0005466031 nova_compute[235803]: 2025-10-02 12:08:28.492 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:28.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:29 np0005466031 nova_compute[235803]: 2025-10-02 12:08:29.493 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:29 np0005466031 nova_compute[235803]: 2025-10-02 12:08:29.494 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:08:29 np0005466031 nova_compute[235803]: 2025-10-02 12:08:29.494 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:08:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:29 np0005466031 nova_compute[235803]: 2025-10-02 12:08:29.806 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:08:30 np0005466031 systemd[1]: packagekit.service: Deactivated successfully.
Oct  2 08:08:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:30.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:30.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:32.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:32.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:34.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:34.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:36.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:08:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:36.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:08:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:08:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:38.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:08:38 np0005466031 podman[238028]: 2025-10-02 12:08:38.626225706 +0000 UTC m=+0.057587130 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:08:38 np0005466031 podman[238029]: 2025-10-02 12:08:38.666796306 +0000 UTC m=+0.092562292 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:08:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:38.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:40.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:40.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:42.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=404 latency=0.002000056s ======
Oct  2 08:08:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:42.652 +0000] "GET /healthcheck HTTP/1.1" 404 240 - "python-urllib3/1.26.5" - latency=0.002000056s
Oct  2 08:08:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:08:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:42.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:08:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:44.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:08:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:44.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:08:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:08:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:46.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:08:46 np0005466031 podman[238128]: 2025-10-02 12:08:46.638854242 +0000 UTC m=+0.065632116 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:08:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:08:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:46.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:08:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e129 e129: 3 total, 3 up, 3 in
Oct  2 08:08:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:48.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:08:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:48.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:08:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e130 e130: 3 total, 3 up, 3 in
Oct  2 08:08:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:50.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:50 np0005466031 podman[238151]: 2025-10-02 12:08:50.634437739 +0000 UTC m=+0.051560941 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  2 08:08:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:08:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:50.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:08:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:52.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:08:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:52.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:08:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e131 e131: 3 total, 3 up, 3 in
Oct  2 08:08:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:08:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:54.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:08:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:54.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:08:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:08:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:08:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e132 e132: 3 total, 3 up, 3 in
Oct  2 08:08:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:56.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:08:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:56.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:08:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e133 e133: 3 total, 3 up, 3 in
Oct  2 08:08:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:08:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:58.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:00.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:09:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:00.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:09:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:02.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:02.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:04.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:04.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:06.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:06.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:08.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:08.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:09 np0005466031 podman[238362]: 2025-10-02 12:09:09.659689732 +0000 UTC m=+0.083625421 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:09:09 np0005466031 podman[238363]: 2025-10-02 12:09:09.680486617 +0000 UTC m=+0.091087631 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller)
Oct  2 08:09:10 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:09:10 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:09:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:09:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:10.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:09:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:10.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:12.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:12.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:09:13.701 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:09:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:09:13.702 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:09:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:14.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:14.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:16.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:09:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:16.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:09:17 np0005466031 podman[238461]: 2025-10-02 12:09:17.62544674 +0000 UTC m=+0.061884961 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:09:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:09:17.704 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:09:18 np0005466031 nova_compute[235803]: 2025-10-02 12:09:18.454 2 DEBUG oslo_concurrency.lockutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "db731c69-9df0-4f81-b108-1fbb5e8a35c7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:09:18 np0005466031 nova_compute[235803]: 2025-10-02 12:09:18.454 2 DEBUG oslo_concurrency.lockutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "db731c69-9df0-4f81-b108-1fbb5e8a35c7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:09:18 np0005466031 nova_compute[235803]: 2025-10-02 12:09:18.482 2 DEBUG nova.compute.manager [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:09:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:18.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:18 np0005466031 nova_compute[235803]: 2025-10-02 12:09:18.616 2 DEBUG oslo_concurrency.lockutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:09:18 np0005466031 nova_compute[235803]: 2025-10-02 12:09:18.618 2 DEBUG oslo_concurrency.lockutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:09:18 np0005466031 nova_compute[235803]: 2025-10-02 12:09:18.627 2 DEBUG nova.virt.hardware [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:09:18 np0005466031 nova_compute[235803]: 2025-10-02 12:09:18.628 2 INFO nova.compute.claims [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Claim successful on node compute-2.ctlplane.example.com
Oct  2 08:09:18 np0005466031 nova_compute[235803]: 2025-10-02 12:09:18.780 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:09:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:18.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4267939077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.251 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.262 2 DEBUG nova.compute.provider_tree [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.284 2 DEBUG nova.scheduler.client.report [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.309 2 DEBUG oslo_concurrency.lockutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.310 2 DEBUG nova.compute.manager [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.362 2 DEBUG nova.compute.manager [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.378 2 INFO nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.396 2 DEBUG nova.compute.manager [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.481 2 DEBUG nova.compute.manager [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.483 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.483 2 INFO nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Creating image(s)
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.520 2 DEBUG nova.storage.rbd_utils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:09:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.553 2 DEBUG nova.storage.rbd_utils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.587 2 DEBUG nova.storage.rbd_utils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.592 2 DEBUG oslo_concurrency.lockutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:09:19 np0005466031 nova_compute[235803]: 2025-10-02 12:09:19.593 2 DEBUG oslo_concurrency.lockutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:09:20 np0005466031 nova_compute[235803]: 2025-10-02 12:09:20.327 2 DEBUG nova.virt.libvirt.imagebackend [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/423b8b5f-aab8-418b-8fad-d82c90818bdd/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/423b8b5f-aab8-418b-8fad-d82c90818bdd/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct  2 08:09:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:20.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:20.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:21 np0005466031 podman[238560]: 2025-10-02 12:09:21.658594421 +0000 UTC m=+0.086923424 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:09:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:22.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:22 np0005466031 nova_compute[235803]: 2025-10-02 12:09:22.799 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:09:22 np0005466031 nova_compute[235803]: 2025-10-02 12:09:22.866 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:09:22 np0005466031 nova_compute[235803]: 2025-10-02 12:09:22.867 2 DEBUG nova.virt.images [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] 423b8b5f-aab8-418b-8fad-d82c90818bdd was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct  2 08:09:22 np0005466031 nova_compute[235803]: 2025-10-02 12:09:22.869 2 DEBUG nova.privsep.utils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  2 08:09:22 np0005466031 nova_compute[235803]: 2025-10-02 12:09:22.869 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:09:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:22.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:23 np0005466031 nova_compute[235803]: 2025-10-02 12:09:23.051 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:09:23 np0005466031 nova_compute[235803]: 2025-10-02 12:09:23.056 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:09:23 np0005466031 nova_compute[235803]: 2025-10-02 12:09:23.142 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:09:23 np0005466031 nova_compute[235803]: 2025-10-02 12:09:23.143 2 DEBUG oslo_concurrency.lockutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:09:23 np0005466031 nova_compute[235803]: 2025-10-02 12:09:23.172 2 DEBUG nova.storage.rbd_utils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:09:23 np0005466031 nova_compute[235803]: 2025-10-02 12:09:23.177 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:09:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e134 e134: 3 total, 3 up, 3 in
Oct  2 08:09:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:24.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:09:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:24.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:09:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e135 e135: 3 total, 3 up, 3 in
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.557 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.630 2 DEBUG nova.storage.rbd_utils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] resizing rbd image db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.670 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.741 2 DEBUG nova.objects.instance [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'migration_context' on Instance uuid db731c69-9df0-4f81-b108-1fbb5e8a35c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.768 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.768 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Ensure instance console log exists: /var/lib/nova/instances/db731c69-9df0-4f81-b108-1fbb5e8a35c7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.768 2 DEBUG oslo_concurrency.lockutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.769 2 DEBUG oslo_concurrency.lockutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.769 2 DEBUG oslo_concurrency.lockutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.771 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.774 2 WARNING nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.778 2 DEBUG nova.virt.libvirt.host [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.778 2 DEBUG nova.virt.libvirt.host [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.781 2 DEBUG nova.virt.libvirt.host [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.781 2 DEBUG nova.virt.libvirt.host [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.782 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.782 2 DEBUG nova.virt.hardware [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.783 2 DEBUG nova.virt.hardware [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.783 2 DEBUG nova.virt.hardware [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.783 2 DEBUG nova.virt.hardware [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.783 2 DEBUG nova.virt.hardware [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.784 2 DEBUG nova.virt.hardware [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.784 2 DEBUG nova.virt.hardware [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.784 2 DEBUG nova.virt.hardware [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.784 2 DEBUG nova.virt.hardware [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.785 2 DEBUG nova.virt.hardware [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.785 2 DEBUG nova.virt.hardware [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.788 2 DEBUG nova.privsep.utils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  2 08:09:25 np0005466031 nova_compute[235803]: 2025-10-02 12:09:25.788 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:09:25.815 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:09:25.816 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:09:25.816 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:09:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1302507799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.203 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.227 2 DEBUG nova.storage.rbd_utils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.230 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:26.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:09:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4019412554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.648 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.650 2 DEBUG nova.objects.instance [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'pci_devices' on Instance uuid db731c69-9df0-4f81-b108-1fbb5e8a35c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.797 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  <uuid>db731c69-9df0-4f81-b108-1fbb5e8a35c7</uuid>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  <name>instance-00000001</name>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <nova:name>tempest-AutoAllocateNetworkTest-server-1152599396</nova:name>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:09:25</nova:creationTime>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <nova:user uuid="b81237ef015d48dfa022b6761d706e36">tempest-AutoAllocateNetworkTest-1017519520-project-member</nova:user>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <nova:project uuid="fa15236c63df4c43bf19989029fcda0f">tempest-AutoAllocateNetworkTest-1017519520</nova:project>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <nova:ports/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <entry name="serial">db731c69-9df0-4f81-b108-1fbb5e8a35c7</entry>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <entry name="uuid">db731c69-9df0-4f81-b108-1fbb5e8a35c7</entry>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk.config">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/db731c69-9df0-4f81-b108-1fbb5e8a35c7/console.log" append="off"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:09:26 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:09:26 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:09:26 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:09:26 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.891 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.891 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.892 2 INFO nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Using config drive#033[00m
Oct  2 08:09:26 np0005466031 nova_compute[235803]: 2025-10-02 12:09:26.916 2 DEBUG nova.storage.rbd_utils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:26.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:27 np0005466031 nova_compute[235803]: 2025-10-02 12:09:27.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:27 np0005466031 nova_compute[235803]: 2025-10-02 12:09:27.659 2 INFO nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Creating config drive at /var/lib/nova/instances/db731c69-9df0-4f81-b108-1fbb5e8a35c7/disk.config#033[00m
Oct  2 08:09:27 np0005466031 nova_compute[235803]: 2025-10-02 12:09:27.664 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db731c69-9df0-4f81-b108-1fbb5e8a35c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3e454cqz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:27 np0005466031 nova_compute[235803]: 2025-10-02 12:09:27.682 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:27 np0005466031 nova_compute[235803]: 2025-10-02 12:09:27.683 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:27 np0005466031 nova_compute[235803]: 2025-10-02 12:09:27.683 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:27 np0005466031 nova_compute[235803]: 2025-10-02 12:09:27.684 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:09:27 np0005466031 nova_compute[235803]: 2025-10-02 12:09:27.684 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:27 np0005466031 nova_compute[235803]: 2025-10-02 12:09:27.795 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db731c69-9df0-4f81-b108-1fbb5e8a35c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3e454cqz" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:27 np0005466031 nova_compute[235803]: 2025-10-02 12:09:27.824 2 DEBUG nova.storage.rbd_utils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:27 np0005466031 nova_compute[235803]: 2025-10-02 12:09:27.828 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db731c69-9df0-4f81-b108-1fbb5e8a35c7/disk.config db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:27 np0005466031 nova_compute[235803]: 2025-10-02 12:09:27.990 2 DEBUG oslo_concurrency.processutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db731c69-9df0-4f81-b108-1fbb5e8a35c7/disk.config db731c69-9df0-4f81-b108-1fbb5e8a35c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:27 np0005466031 nova_compute[235803]: 2025-10-02 12:09:27.991 2 INFO nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Deleting local config drive /var/lib/nova/instances/db731c69-9df0-4f81-b108-1fbb5e8a35c7/disk.config because it was imported into RBD.#033[00m
Oct  2 08:09:28 np0005466031 systemd[1]: Starting libvirt secret daemon...
Oct  2 08:09:28 np0005466031 systemd[1]: Started libvirt secret daemon.
Oct  2 08:09:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2150631146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:28 np0005466031 nova_compute[235803]: 2025-10-02 12:09:28.116 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:28 np0005466031 systemd-machined[192227]: New machine qemu-1-instance-00000001.
Oct  2 08:09:28 np0005466031 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct  2 08:09:28 np0005466031 nova_compute[235803]: 2025-10-02 12:09:28.228 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:09:28 np0005466031 nova_compute[235803]: 2025-10-02 12:09:28.228 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:09:28 np0005466031 nova_compute[235803]: 2025-10-02 12:09:28.364 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:09:28 np0005466031 nova_compute[235803]: 2025-10-02 12:09:28.365 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5282MB free_disk=20.986618041992188GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:09:28 np0005466031 nova_compute[235803]: 2025-10-02 12:09:28.365 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:28 np0005466031 nova_compute[235803]: 2025-10-02 12:09:28.365 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:28 np0005466031 nova_compute[235803]: 2025-10-02 12:09:28.439 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance db731c69-9df0-4f81-b108-1fbb5e8a35c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:09:28 np0005466031 nova_compute[235803]: 2025-10-02 12:09:28.439 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:09:28 np0005466031 nova_compute[235803]: 2025-10-02 12:09:28.439 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:09:28 np0005466031 nova_compute[235803]: 2025-10-02 12:09:28.489 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:28.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:28.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1331342296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:28 np0005466031 nova_compute[235803]: 2025-10-02 12:09:28.978 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:28 np0005466031 nova_compute[235803]: 2025-10-02 12:09:28.984 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.010 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759406969.009887, db731c69-9df0-4f81-b108-1fbb5e8a35c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.011 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.012 2 DEBUG nova.compute.manager [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.013 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.016 2 INFO nova.virt.libvirt.driver [-] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Instance spawned successfully.#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.016 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.018 2 ERROR nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [req-8bca6c44-255d-4d4a-be29-152ebc46086d] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID f694d536-1dcd-4bb3-8516-534a40cdf6d7.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-8bca6c44-255d-4d4a-be29-152ebc46086d"}]}#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.047 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.050 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.052 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.082 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.083 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.101 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.121 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.121 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.122 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.122 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.123 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.123 2 DEBUG nova.virt.libvirt.driver [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.126 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.127 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759406969.0106685, db731c69-9df0-4f81-b108-1fbb5e8a35c7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.127 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] VM Started (Lifecycle Event)#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.138 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.167 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.172 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.185 2 INFO nova.compute.manager [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Took 9.70 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.186 2 DEBUG nova.compute.manager [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.204 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.224 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.241 2 INFO nova.compute.manager [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Took 10.67 seconds to build instance.#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.265 2 DEBUG oslo_concurrency.lockutils [None req-99ef1471-1c60-4c2a-9d6a-c1728b7853a6 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "db731c69-9df0-4f81-b108-1fbb5e8a35c7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/44836204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.667 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.674 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.750 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updated inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.750 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.751 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.772 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:09:29 np0005466031 nova_compute[235803]: 2025-10-02 12:09:29.773 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:09:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:30.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:09:30 np0005466031 nova_compute[235803]: 2025-10-02 12:09:30.773 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:30 np0005466031 nova_compute[235803]: 2025-10-02 12:09:30.814 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:30 np0005466031 nova_compute[235803]: 2025-10-02 12:09:30.814 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:09:30 np0005466031 nova_compute[235803]: 2025-10-02 12:09:30.815 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:09:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:30.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:31 np0005466031 nova_compute[235803]: 2025-10-02 12:09:31.223 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-db731c69-9df0-4f81-b108-1fbb5e8a35c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:09:31 np0005466031 nova_compute[235803]: 2025-10-02 12:09:31.224 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-db731c69-9df0-4f81-b108-1fbb5e8a35c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:09:31 np0005466031 nova_compute[235803]: 2025-10-02 12:09:31.224 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:09:31 np0005466031 nova_compute[235803]: 2025-10-02 12:09:31.224 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid db731c69-9df0-4f81-b108-1fbb5e8a35c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:09:31 np0005466031 nova_compute[235803]: 2025-10-02 12:09:31.380 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:09:31 np0005466031 nova_compute[235803]: 2025-10-02 12:09:31.707 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:09:31 np0005466031 nova_compute[235803]: 2025-10-02 12:09:31.729 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-db731c69-9df0-4f81-b108-1fbb5e8a35c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:09:31 np0005466031 nova_compute[235803]: 2025-10-02 12:09:31.729 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:09:31 np0005466031 nova_compute[235803]: 2025-10-02 12:09:31.729 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:31 np0005466031 nova_compute[235803]: 2025-10-02 12:09:31.730 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:32.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:32.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:34.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:34.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:35 np0005466031 nova_compute[235803]: 2025-10-02 12:09:35.632 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:35 np0005466031 nova_compute[235803]: 2025-10-02 12:09:35.632 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:35 np0005466031 nova_compute[235803]: 2025-10-02 12:09:35.661 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:09:35 np0005466031 nova_compute[235803]: 2025-10-02 12:09:35.833 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:35 np0005466031 nova_compute[235803]: 2025-10-02 12:09:35.834 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:35 np0005466031 nova_compute[235803]: 2025-10-02 12:09:35.843 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:09:35 np0005466031 nova_compute[235803]: 2025-10-02 12:09:35.843 2 INFO nova.compute.claims [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:09:35 np0005466031 nova_compute[235803]: 2025-10-02 12:09:35.953 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:36 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1301256854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.375 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e136 e136: 3 total, 3 up, 3 in
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.383 2 DEBUG nova.compute.provider_tree [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.410 2 DEBUG nova.scheduler.client.report [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.457 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.458 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.549 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.549 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:09:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:36.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.595 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.617 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.743 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.744 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.745 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Creating image(s)#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.771 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.801 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.833 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.836 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.891 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.892 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.893 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.893 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.918 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.921 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:36.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:36 np0005466031 nova_compute[235803]: 2025-10-02 12:09:36.967 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Automatically allocating a network for project fa15236c63df4c43bf19989029fcda0f. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Oct  2 08:09:37 np0005466031 nova_compute[235803]: 2025-10-02 12:09:37.591 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:37 np0005466031 nova_compute[235803]: 2025-10-02 12:09:37.665 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] resizing rbd image ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:09:37 np0005466031 nova_compute[235803]: 2025-10-02 12:09:37.766 2 DEBUG nova.objects.instance [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'migration_context' on Instance uuid ab9f1bf2-9381-49d9-a097-5f482594f8fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:09:37 np0005466031 nova_compute[235803]: 2025-10-02 12:09:37.797 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:09:37 np0005466031 nova_compute[235803]: 2025-10-02 12:09:37.798 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Ensure instance console log exists: /var/lib/nova/instances/ab9f1bf2-9381-49d9-a097-5f482594f8fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:09:37 np0005466031 nova_compute[235803]: 2025-10-02 12:09:37.798 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:37 np0005466031 nova_compute[235803]: 2025-10-02 12:09:37.798 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:37 np0005466031 nova_compute[235803]: 2025-10-02 12:09:37.799 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:38.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:38.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:40 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  2 08:09:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:40.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:40 np0005466031 podman[239212]: 2025-10-02 12:09:40.639908939 +0000 UTC m=+0.067699253 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:09:40 np0005466031 podman[239213]: 2025-10-02 12:09:40.668211795 +0000 UTC m=+0.096148824 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:09:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:40.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:42.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:42.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:44.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:44.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:46.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:46.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:48.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:48 np0005466031 podman[239311]: 2025-10-02 12:09:48.625691562 +0000 UTC m=+0.055964224 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:09:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:48.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:50.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:50.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:52.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:52 np0005466031 podman[239335]: 2025-10-02 12:09:52.655134189 +0000 UTC m=+0.074398962 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:09:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:52.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:54.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:54.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:56.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:56.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:58.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:09:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:58.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:10:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:00.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:10:00 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 08:10:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:10:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:00.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:10:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:02.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:10:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:02.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:10:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:04.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:04.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:10:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:06.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:10:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:06.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:10:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:08.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:10:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:09.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:10 np0005466031 nova_compute[235803]: 2025-10-02 12:10:10.114 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Automatically allocated network: {'id': 'b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'name': 'auto_allocated_network', 'tenant_id': 'fa15236c63df4c43bf19989029fcda0f', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['227aa610-f00f-4cec-b799-1839354b34be', 'c08cc57a-142e-470f-ae51-6b2f21e9e17e'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-10-02T12:09:38Z', 'updated_at': '2025-10-02T12:09:57Z', 'revision_number': 4, 'project_id': 'fa15236c63df4c43bf19989029fcda0f'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478#033[00m
Oct  2 08:10:10 np0005466031 nova_compute[235803]: 2025-10-02 12:10:10.130 2 WARNING oslo_policy.policy [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  2 08:10:10 np0005466031 nova_compute[235803]: 2025-10-02 12:10:10.130 2 WARNING oslo_policy.policy [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  2 08:10:10 np0005466031 nova_compute[235803]: 2025-10-02 12:10:10.133 2 DEBUG nova.policy [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b81237ef015d48dfa022b6761d706e36', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fa15236c63df4c43bf19989029fcda0f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:10:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:10.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:11.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.621215) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011621274, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1722, "num_deletes": 251, "total_data_size": 3951027, "memory_usage": 4012952, "flush_reason": "Manual Compaction"}
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011633003, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 1572669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20169, "largest_seqno": 21886, "table_properties": {"data_size": 1567204, "index_size": 2669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14134, "raw_average_key_size": 20, "raw_value_size": 1555080, "raw_average_value_size": 2263, "num_data_blocks": 120, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406865, "oldest_key_time": 1759406865, "file_creation_time": 1759407011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 11844 microseconds, and 7456 cpu microseconds.
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.633057) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 1572669 bytes OK
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.633092) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.634669) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.634684) EVENT_LOG_v1 {"time_micros": 1759407011634679, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.634705) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 3943186, prev total WAL file size 3943186, number of live WAL files 2.
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.635536) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(1535KB)], [39(9411KB)]
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011635609, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 11209942, "oldest_snapshot_seqno": -1}
Oct  2 08:10:11 np0005466031 podman[239544]: 2025-10-02 12:10:11.64171541 +0000 UTC m=+0.059299252 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  2 08:10:11 np0005466031 podman[239545]: 2025-10-02 12:10:11.670281694 +0000 UTC m=+0.088084662 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 4674 keys, 8369787 bytes, temperature: kUnknown
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011675933, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 8369787, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8338515, "index_size": 18490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11717, "raw_key_size": 116159, "raw_average_key_size": 24, "raw_value_size": 8253795, "raw_average_value_size": 1765, "num_data_blocks": 765, "num_entries": 4674, "num_filter_entries": 4674, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759407011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.676238) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8369787 bytes
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.677451) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 277.5 rd, 207.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(12.4) write-amplify(5.3) OK, records in: 5126, records dropped: 452 output_compression: NoCompression
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.677473) EVENT_LOG_v1 {"time_micros": 1759407011677461, "job": 22, "event": "compaction_finished", "compaction_time_micros": 40389, "compaction_time_cpu_micros": 21613, "output_level": 6, "num_output_files": 1, "total_output_size": 8369787, "num_input_records": 5126, "num_output_records": 4674, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011677880, "job": 22, "event": "table_file_deletion", "file_number": 41}
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011679515, "job": 22, "event": "table_file_deletion", "file_number": 39}
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.635455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.679615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.679623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.679624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.679626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:10:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:10:11.679627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:10:11 np0005466031 nova_compute[235803]: 2025-10-02 12:10:11.842 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Successfully created port: 514acc69-e9db-45c8-b3f1-6e1efb4ccd91 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:10:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:12.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:12 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:12 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:12 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:12 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:13.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:10:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:10:13 np0005466031 nova_compute[235803]: 2025-10-02 12:10:13.943 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Successfully updated port: 514acc69-e9db-45c8-b3f1-6e1efb4ccd91 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:10:13 np0005466031 nova_compute[235803]: 2025-10-02 12:10:13.988 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "refresh_cache-ab9f1bf2-9381-49d9-a097-5f482594f8fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:10:13 np0005466031 nova_compute[235803]: 2025-10-02 12:10:13.988 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquired lock "refresh_cache-ab9f1bf2-9381-49d9-a097-5f482594f8fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:10:13 np0005466031 nova_compute[235803]: 2025-10-02 12:10:13.989 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:10:14 np0005466031 nova_compute[235803]: 2025-10-02 12:10:14.160 2 DEBUG nova.compute.manager [req-02b71a65-5c26-4069-a634-af1505cf114d req-441c2d38-3ec2-40a7-903b-0f8cda60b8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Received event network-changed-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:14 np0005466031 nova_compute[235803]: 2025-10-02 12:10:14.160 2 DEBUG nova.compute.manager [req-02b71a65-5c26-4069-a634-af1505cf114d req-441c2d38-3ec2-40a7-903b-0f8cda60b8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Refreshing instance network info cache due to event network-changed-514acc69-e9db-45c8-b3f1-6e1efb4ccd91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:10:14 np0005466031 nova_compute[235803]: 2025-10-02 12:10:14.161 2 DEBUG oslo_concurrency.lockutils [req-02b71a65-5c26-4069-a634-af1505cf114d req-441c2d38-3ec2-40a7-903b-0f8cda60b8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ab9f1bf2-9381-49d9-a097-5f482594f8fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:10:14 np0005466031 nova_compute[235803]: 2025-10-02 12:10:14.301 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:10:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:14.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:15.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:10:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:16.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:10:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:17.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e137 e137: 3 total, 3 up, 3 in
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.205 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Updating instance_info_cache with network_info: [{"id": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "address": "fa:16:3e:09:de:e5", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::25d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514acc69-e9", "ovs_interfaceid": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.232 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Releasing lock "refresh_cache-ab9f1bf2-9381-49d9-a097-5f482594f8fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.233 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Instance network_info: |[{"id": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "address": "fa:16:3e:09:de:e5", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::25d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514acc69-e9", "ovs_interfaceid": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.234 2 DEBUG oslo_concurrency.lockutils [req-02b71a65-5c26-4069-a634-af1505cf114d req-441c2d38-3ec2-40a7-903b-0f8cda60b8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ab9f1bf2-9381-49d9-a097-5f482594f8fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.234 2 DEBUG nova.network.neutron [req-02b71a65-5c26-4069-a634-af1505cf114d req-441c2d38-3ec2-40a7-903b-0f8cda60b8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Refreshing network info cache for port 514acc69-e9db-45c8-b3f1-6e1efb4ccd91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.238 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Start _get_guest_xml network_info=[{"id": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "address": "fa:16:3e:09:de:e5", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::25d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514acc69-e9", "ovs_interfaceid": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.245 2 WARNING nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.250 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.251 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.253 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.254 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.255 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.255 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.255 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.256 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.256 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.256 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.256 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.256 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.257 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.257 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.257 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.257 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.260 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:10:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1883701792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.693 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.723 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:17 np0005466031 nova_compute[235803]: 2025-10-02 12:10:17.728 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:10:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2438634577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.331 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.332 2 DEBUG nova.virt.libvirt.vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1858146006-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1858146006-3',id=4,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fa15236c63df4c43bf19989029fcda0f',ramdisk_id='',reservation_id='r-b33bb0il',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1017519520',owner_user_name='tempest-AutoAllocateNetworkTest-1017519520-project-member'},tags=T
agList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:36Z,user_data=None,user_id='b81237ef015d48dfa022b6761d706e36',uuid=ab9f1bf2-9381-49d9-a097-5f482594f8fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "address": "fa:16:3e:09:de:e5", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::25d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514acc69-e9", "ovs_interfaceid": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.333 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converting VIF {"id": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "address": "fa:16:3e:09:de:e5", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::25d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514acc69-e9", "ovs_interfaceid": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.334 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:de:e5,bridge_name='br-int',has_traffic_filtering=True,id=514acc69-e9db-45c8-b3f1-6e1efb4ccd91,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514acc69-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.336 2 DEBUG nova.objects.instance [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'pci_devices' on Instance uuid ab9f1bf2-9381-49d9-a097-5f482594f8fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.352 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  <uuid>ab9f1bf2-9381-49d9-a097-5f482594f8fa</uuid>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  <name>instance-00000004</name>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <nova:name>tempest-tempest.common.compute-instance-1858146006-3</nova:name>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:10:17</nova:creationTime>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <nova:user uuid="b81237ef015d48dfa022b6761d706e36">tempest-AutoAllocateNetworkTest-1017519520-project-member</nova:user>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <nova:project uuid="fa15236c63df4c43bf19989029fcda0f">tempest-AutoAllocateNetworkTest-1017519520</nova:project>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <nova:port uuid="514acc69-e9db-45c8-b3f1-6e1efb4ccd91">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="fdfe:381f:8400:1::25d" ipVersion="6"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.1.0.69" ipVersion="4"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <entry name="serial">ab9f1bf2-9381-49d9-a097-5f482594f8fa</entry>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <entry name="uuid">ab9f1bf2-9381-49d9-a097-5f482594f8fa</entry>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk.config">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:09:de:e5"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <target dev="tap514acc69-e9"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/ab9f1bf2-9381-49d9-a097-5f482594f8fa/console.log" append="off"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:10:18 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:10:18 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:10:18 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:10:18 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.354 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Preparing to wait for external event network-vif-plugged-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.354 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.354 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.355 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.355 2 DEBUG nova.virt.libvirt.vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1858146006-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1858146006-3',id=4,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fa15236c63df4c43bf19989029fcda0f',ramdisk_id='',reservation_id='r-b33bb0il',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1017519520',owner_user_name='tempest-AutoAllocateNetworkTest-1017519520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:36Z,user_data=None,user_id='b81237ef015d48dfa022b6761d706e36',uuid=ab9f1bf2-9381-49d9-a097-5f482594f8fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "address": "fa:16:3e:09:de:e5", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::25d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514acc69-e9", "ovs_interfaceid": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.356 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converting VIF {"id": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "address": "fa:16:3e:09:de:e5", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::25d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514acc69-e9", "ovs_interfaceid": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.356 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:de:e5,bridge_name='br-int',has_traffic_filtering=True,id=514acc69-e9db-45c8-b3f1-6e1efb4ccd91,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514acc69-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.357 2 DEBUG os_vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:de:e5,bridge_name='br-int',has_traffic_filtering=True,id=514acc69-e9db-45c8-b3f1-6e1efb4ccd91,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514acc69-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.393 2 DEBUG ovsdbapp.backend.ovs_idl [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.393 2 DEBUG ovsdbapp.backend.ovs_idl [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.394 2 DEBUG ovsdbapp.backend.ovs_idl [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.409 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.409 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:10:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e138 e138: 3 total, 3 up, 3 in
Oct  2 08:10:18 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.410 2 INFO oslo.privsep.daemon [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmprvcl56n5/privsep.sock']#033[00m
Oct  2 08:10:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:18.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:19.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.108 2 INFO oslo.privsep.daemon [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.971 852 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.976 852 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.979 852 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:18.979 852 INFO oslo.privsep.daemon [-] privsep daemon running as pid 852#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:19.425 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:10:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:19.426 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.427 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514acc69-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap514acc69-e9, col_values=(('external_ids', {'iface-id': '514acc69-e9db-45c8-b3f1-6e1efb4ccd91', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:de:e5', 'vm-uuid': 'ab9f1bf2-9381-49d9-a097-5f482594f8fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:19 np0005466031 NetworkManager[44907]: <info>  [1759407019.4311] manager: (tap514acc69-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.440 2 INFO os_vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:de:e5,bridge_name='br-int',has_traffic_filtering=True,id=514acc69-e9db-45c8-b3f1-6e1efb4ccd91,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514acc69-e9')#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.496 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.496 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.496 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No VIF found with MAC fa:16:3e:09:de:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.497 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Using config drive#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.523 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:19 np0005466031 podman[239667]: 2025-10-02 12:10:19.532110031 +0000 UTC m=+0.059076495 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:10:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.945 2 DEBUG nova.network.neutron [req-02b71a65-5c26-4069-a634-af1505cf114d req-441c2d38-3ec2-40a7-903b-0f8cda60b8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Updated VIF entry in instance network info cache for port 514acc69-e9db-45c8-b3f1-6e1efb4ccd91. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.946 2 DEBUG nova.network.neutron [req-02b71a65-5c26-4069-a634-af1505cf114d req-441c2d38-3ec2-40a7-903b-0f8cda60b8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Updating instance_info_cache with network_info: [{"id": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "address": "fa:16:3e:09:de:e5", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::25d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514acc69-e9", "ovs_interfaceid": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:19 np0005466031 nova_compute[235803]: 2025-10-02 12:10:19.968 2 DEBUG oslo_concurrency.lockutils [req-02b71a65-5c26-4069-a634-af1505cf114d req-441c2d38-3ec2-40a7-903b-0f8cda60b8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ab9f1bf2-9381-49d9-a097-5f482594f8fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:10:20 np0005466031 nova_compute[235803]: 2025-10-02 12:10:20.064 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Creating config drive at /var/lib/nova/instances/ab9f1bf2-9381-49d9-a097-5f482594f8fa/disk.config#033[00m
Oct  2 08:10:20 np0005466031 nova_compute[235803]: 2025-10-02 12:10:20.070 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab9f1bf2-9381-49d9-a097-5f482594f8fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc5yqmuu7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:20 np0005466031 nova_compute[235803]: 2025-10-02 12:10:20.199 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab9f1bf2-9381-49d9-a097-5f482594f8fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc5yqmuu7" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:20 np0005466031 nova_compute[235803]: 2025-10-02 12:10:20.231 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:20 np0005466031 nova_compute[235803]: 2025-10-02 12:10:20.234 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab9f1bf2-9381-49d9-a097-5f482594f8fa/disk.config ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:20 np0005466031 nova_compute[235803]: 2025-10-02 12:10:20.451 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab9f1bf2-9381-49d9-a097-5f482594f8fa/disk.config ab9f1bf2-9381-49d9-a097-5f482594f8fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:20 np0005466031 nova_compute[235803]: 2025-10-02 12:10:20.452 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Deleting local config drive /var/lib/nova/instances/ab9f1bf2-9381-49d9-a097-5f482594f8fa/disk.config because it was imported into RBD.#033[00m
Oct  2 08:10:20 np0005466031 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct  2 08:10:20 np0005466031 kernel: tap514acc69-e9: entered promiscuous mode
Oct  2 08:10:20 np0005466031 nova_compute[235803]: 2025-10-02 12:10:20.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:20 np0005466031 NetworkManager[44907]: <info>  [1759407020.5524] manager: (tap514acc69-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct  2 08:10:20 np0005466031 ovn_controller[132413]: 2025-10-02T12:10:20Z|00027|binding|INFO|Claiming lport 514acc69-e9db-45c8-b3f1-6e1efb4ccd91 for this chassis.
Oct  2 08:10:20 np0005466031 ovn_controller[132413]: 2025-10-02T12:10:20Z|00028|binding|INFO|514acc69-e9db-45c8-b3f1-6e1efb4ccd91: Claiming fa:16:3e:09:de:e5 10.1.0.69 fdfe:381f:8400:1::25d
Oct  2 08:10:20 np0005466031 nova_compute[235803]: 2025-10-02 12:10:20.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:20.585 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:de:e5 10.1.0.69 fdfe:381f:8400:1::25d'], port_security=['fa:16:3e:09:de:e5 10.1.0.69 fdfe:381f:8400:1::25d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.69/26 fdfe:381f:8400:1::25d/64', 'neutron:device_id': 'ab9f1bf2-9381-49d9-a097-5f482594f8fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa15236c63df4c43bf19989029fcda0f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e3feb76-9212-430e-bcfa-0b85f7aedc4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1382d266-669c-46c5-981d-23fbe67f9508, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=514acc69-e9db-45c8-b3f1-6e1efb4ccd91) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:10:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:20.589 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 514acc69-e9db-45c8-b3f1-6e1efb4ccd91 in datapath b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 bound to our chassis#033[00m
Oct  2 08:10:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:20.595 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4aadb38-89a4-463f-b7b5-8bb4dcce7d32#033[00m
Oct  2 08:10:20 np0005466031 systemd-udevd[239760]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:10:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:20.597 141898 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpepzjqbd9/privsep.sock']#033[00m
Oct  2 08:10:20 np0005466031 NetworkManager[44907]: <info>  [1759407020.6145] device (tap514acc69-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:10:20 np0005466031 NetworkManager[44907]: <info>  [1759407020.6161] device (tap514acc69-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:10:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:20.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:20 np0005466031 systemd-machined[192227]: New machine qemu-2-instance-00000004.
Oct  2 08:10:20 np0005466031 nova_compute[235803]: 2025-10-02 12:10:20.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:20 np0005466031 systemd[1]: Started Virtual Machine qemu-2-instance-00000004.
Oct  2 08:10:20 np0005466031 ovn_controller[132413]: 2025-10-02T12:10:20Z|00029|binding|INFO|Setting lport 514acc69-e9db-45c8-b3f1-6e1efb4ccd91 ovn-installed in OVS
Oct  2 08:10:20 np0005466031 ovn_controller[132413]: 2025-10-02T12:10:20Z|00030|binding|INFO|Setting lport 514acc69-e9db-45c8-b3f1-6e1efb4ccd91 up in Southbound
Oct  2 08:10:20 np0005466031 nova_compute[235803]: 2025-10-02 12:10:20.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:21.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:21.302 141898 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 08:10:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:21.302 141898 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpepzjqbd9/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  2 08:10:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:21.160 239779 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 08:10:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:21.164 239779 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 08:10:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:21.166 239779 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Oct  2 08:10:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:21.166 239779 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239779#033[00m
Oct  2 08:10:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:21.305 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[06b0841a-a08d-4faf-a74d-c0a4369b9048]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.556 2 DEBUG nova.compute.manager [req-a8bc369e-6166-42ab-85b3-a8dff0f65e8c req-a4566a74-a0e3-4275-bfaa-2b993361387d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Received event network-vif-plugged-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.557 2 DEBUG oslo_concurrency.lockutils [req-a8bc369e-6166-42ab-85b3-a8dff0f65e8c req-a4566a74-a0e3-4275-bfaa-2b993361387d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.557 2 DEBUG oslo_concurrency.lockutils [req-a8bc369e-6166-42ab-85b3-a8dff0f65e8c req-a4566a74-a0e3-4275-bfaa-2b993361387d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.557 2 DEBUG oslo_concurrency.lockutils [req-a8bc369e-6166-42ab-85b3-a8dff0f65e8c req-a4566a74-a0e3-4275-bfaa-2b993361387d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.557 2 DEBUG nova.compute.manager [req-a8bc369e-6166-42ab-85b3-a8dff0f65e8c req-a4566a74-a0e3-4275-bfaa-2b993361387d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Processing event network-vif-plugged-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.922 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.923 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407021.921858, ab9f1bf2-9381-49d9-a097-5f482594f8fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.923 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] VM Started (Lifecycle Event)#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.926 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.931 2 INFO nova.virt.libvirt.driver [-] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Instance spawned successfully.#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.931 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.958 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.964 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.967 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.968 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.968 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.969 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.969 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.969 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.992 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.993 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407021.9220133, ab9f1bf2-9381-49d9-a097-5f482594f8fa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:21 np0005466031 nova_compute[235803]: 2025-10-02 12:10:21.993 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:10:22 np0005466031 nova_compute[235803]: 2025-10-02 12:10:22.023 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:22 np0005466031 nova_compute[235803]: 2025-10-02 12:10:22.026 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407021.9256186, ab9f1bf2-9381-49d9-a097-5f482594f8fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:22 np0005466031 nova_compute[235803]: 2025-10-02 12:10:22.027 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:10:22 np0005466031 nova_compute[235803]: 2025-10-02 12:10:22.066 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:22 np0005466031 nova_compute[235803]: 2025-10-02 12:10:22.069 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:10:22 np0005466031 nova_compute[235803]: 2025-10-02 12:10:22.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:22 np0005466031 nova_compute[235803]: 2025-10-02 12:10:22.099 2 INFO nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Took 45.36 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:10:22 np0005466031 nova_compute[235803]: 2025-10-02 12:10:22.100 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:22 np0005466031 nova_compute[235803]: 2025-10-02 12:10:22.101 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:10:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:22.141 239779 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:22.141 239779 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:22.141 239779 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:22 np0005466031 nova_compute[235803]: 2025-10-02 12:10:22.179 2 INFO nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Took 46.41 seconds to build instance.#033[00m
Oct  2 08:10:22 np0005466031 nova_compute[235803]: 2025-10-02 12:10:22.192 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 46.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:22 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:22 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:22.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:22.914 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[da7a91db-7524-4761-840e-2193fd265dc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:22.916 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb4aadb38-81 in ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:10:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:22.918 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb4aadb38-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:10:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:22.918 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f9caa6b9-33c2-425b-92bd-8a7b32b4612b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:22.922 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e6148450-d4a8-4088-963a-2587dcd4208b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:22.947 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[7298bb9a-b4c4-48c9-96e7-80f6d7e70257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:22.976 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6e37baab-d010-4a11-a676-7d1680494855]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:22.978 141898 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpkjxyb60d/privsep.sock']#033[00m
Oct  2 08:10:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:23.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:23 np0005466031 podman[239881]: 2025-10-02 12:10:23.028481614 +0000 UTC m=+0.054096992 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:23.663 141898 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 08:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:23.664 141898 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpkjxyb60d/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  2 08:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:23.516 239908 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 08:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:23.519 239908 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 08:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:23.521 239908 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  2 08:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:23.521 239908 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239908#033[00m
Oct  2 08:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:23.666 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d27db0-817f-4e20-a224-bc0d0d11c4f9]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:23 np0005466031 nova_compute[235803]: 2025-10-02 12:10:23.764 2 DEBUG nova.compute.manager [req-d1cc0281-5685-4c3d-bf37-678ba59558a1 req-e4f276dd-7488-4b68-9dd1-00107cd3f4b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Received event network-vif-plugged-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:23 np0005466031 nova_compute[235803]: 2025-10-02 12:10:23.765 2 DEBUG oslo_concurrency.lockutils [req-d1cc0281-5685-4c3d-bf37-678ba59558a1 req-e4f276dd-7488-4b68-9dd1-00107cd3f4b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:23 np0005466031 nova_compute[235803]: 2025-10-02 12:10:23.765 2 DEBUG oslo_concurrency.lockutils [req-d1cc0281-5685-4c3d-bf37-678ba59558a1 req-e4f276dd-7488-4b68-9dd1-00107cd3f4b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:23 np0005466031 nova_compute[235803]: 2025-10-02 12:10:23.765 2 DEBUG oslo_concurrency.lockutils [req-d1cc0281-5685-4c3d-bf37-678ba59558a1 req-e4f276dd-7488-4b68-9dd1-00107cd3f4b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:23 np0005466031 nova_compute[235803]: 2025-10-02 12:10:23.766 2 DEBUG nova.compute.manager [req-d1cc0281-5685-4c3d-bf37-678ba59558a1 req-e4f276dd-7488-4b68-9dd1-00107cd3f4b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] No waiting events found dispatching network-vif-plugged-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:10:23 np0005466031 nova_compute[235803]: 2025-10-02 12:10:23.766 2 WARNING nova.compute.manager [req-d1cc0281-5685-4c3d-bf37-678ba59558a1 req-e4f276dd-7488-4b68-9dd1-00107cd3f4b3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Received unexpected event network-vif-plugged-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:10:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:24.197 239908 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:24.198 239908 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:24.198 239908 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:24 np0005466031 nova_compute[235803]: 2025-10-02 12:10:24.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:24.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:24 np0005466031 nova_compute[235803]: 2025-10-02 12:10:24.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:24 np0005466031 nova_compute[235803]: 2025-10-02 12:10:24.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:10:24 np0005466031 nova_compute[235803]: 2025-10-02 12:10:24.667 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:10:24 np0005466031 nova_compute[235803]: 2025-10-02 12:10:24.668 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:24 np0005466031 nova_compute[235803]: 2025-10-02 12:10:24.668 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:10:24 np0005466031 nova_compute[235803]: 2025-10-02 12:10:24.685 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:24.783 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[752e986b-dc6b-4b2e-b602-c42004a49b8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:24.789 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1679e89d-0d60-4e4d-bb0f-f3424734e35d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:24 np0005466031 NetworkManager[44907]: <info>  [1759407024.8043] manager: (tapb4aadb38-80): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct  2 08:10:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:24.820 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[70473799-3de2-457f-a9d4-be3f5ad5bce7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:24.823 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e36174-26e7-4853-9cb9-215df2ad62f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:24 np0005466031 NetworkManager[44907]: <info>  [1759407024.8473] device (tapb4aadb38-80): carrier: link connected
Oct  2 08:10:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:24.852 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4a0e05-d4b3-43e3-8ca8-d6dfaa5d7bef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:24 np0005466031 systemd-udevd[239971]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:10:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:24.884 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[039f51e5-7bf8-47af-92bc-41e4be8c54f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4aadb38-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:b6:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488043, 'reachable_time': 20146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239973, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:24.902 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[75617fae-63f1-4d00-8606-1e5f13fd8cf7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:b633'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 488043, 'tstamp': 488043}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239989, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:24.918 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[045eb734-1197-4293-b069-d54821ed0223]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4aadb38-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:b6:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488043, 'reachable_time': 20146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239991, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:24.945 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[df3b2a2c-d627-4cfc-a412-f55cdd4b9ec6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:24.998 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[de4cb5c9-7297-4d49-bcd1-29da3deab9cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:25.000 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4aadb38-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:25.000 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:25.001 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4aadb38-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:25 np0005466031 nova_compute[235803]: 2025-10-02 12:10:25.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:25 np0005466031 NetworkManager[44907]: <info>  [1759407025.0040] manager: (tapb4aadb38-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct  2 08:10:25 np0005466031 kernel: tapb4aadb38-80: entered promiscuous mode
Oct  2 08:10:25 np0005466031 nova_compute[235803]: 2025-10-02 12:10:25.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:25.008 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4aadb38-80, col_values=(('external_ids', {'iface-id': 'de74dbb2-fac5-494f-b65c-51300143a2da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:25 np0005466031 nova_compute[235803]: 2025-10-02 12:10:25.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:25 np0005466031 ovn_controller[132413]: 2025-10-02T12:10:25Z|00031|binding|INFO|Releasing lport de74dbb2-fac5-494f-b65c-51300143a2da from this chassis (sb_readonly=0)
Oct  2 08:10:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:25.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:25 np0005466031 nova_compute[235803]: 2025-10-02 12:10:25.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:25.024 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:25.025 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4b65dbf1-e336-40d6-85fe-0d5ace961a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:25.026 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.pid.haproxy
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID b4aadb38-89a4-463f-b7b5-8bb4dcce7d32
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:25.027 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'env', 'PROCESS_TAG=haproxy-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:10:25 np0005466031 podman[240024]: 2025-10-02 12:10:25.393273473 +0000 UTC m=+0.075716746 container create acaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:25.429 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:25 np0005466031 podman[240024]: 2025-10-02 12:10:25.34188107 +0000 UTC m=+0.024324383 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:10:25 np0005466031 systemd[1]: Started libpod-conmon-acaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359.scope.
Oct  2 08:10:25 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:10:25 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d92bf3c2eb824279efe9ef2490cfb541aacfe9adfbc8c9f25ebbe5a645c6947/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:10:25 np0005466031 podman[240024]: 2025-10-02 12:10:25.483393923 +0000 UTC m=+0.165837226 container init acaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:10:25 np0005466031 podman[240024]: 2025-10-02 12:10:25.491008122 +0000 UTC m=+0.173451395 container start acaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:10:25 np0005466031 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[240039]: [NOTICE]   (240043) : New worker (240045) forked
Oct  2 08:10:25 np0005466031 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[240039]: [NOTICE]   (240043) : Loading success.
Oct  2 08:10:25 np0005466031 nova_compute[235803]: 2025-10-02 12:10:25.704 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:25.816 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:25.817 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:25.818 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e139 e139: 3 total, 3 up, 3 in
Oct  2 08:10:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:26.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:26 np0005466031 nova_compute[235803]: 2025-10-02 12:10:26.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:27.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:27 np0005466031 nova_compute[235803]: 2025-10-02 12:10:27.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:27 np0005466031 nova_compute[235803]: 2025-10-02 12:10:27.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:27 np0005466031 nova_compute[235803]: 2025-10-02 12:10:27.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:27 np0005466031 nova_compute[235803]: 2025-10-02 12:10:27.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:10:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:10:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:28.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:10:28 np0005466031 nova_compute[235803]: 2025-10-02 12:10:28.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:10:28 np0005466031 nova_compute[235803]: 2025-10-02 12:10:28.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:10:28 np0005466031 nova_compute[235803]: 2025-10-02 12:10:28.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:10:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:29.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:29 np0005466031 nova_compute[235803]: 2025-10-02 12:10:29.163 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-db731c69-9df0-4f81-b108-1fbb5e8a35c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:10:29 np0005466031 nova_compute[235803]: 2025-10-02 12:10:29.164 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-db731c69-9df0-4f81-b108-1fbb5e8a35c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:10:29 np0005466031 nova_compute[235803]: 2025-10-02 12:10:29.164 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 08:10:29 np0005466031 nova_compute[235803]: 2025-10-02 12:10:29.165 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid db731c69-9df0-4f81-b108-1fbb5e8a35c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:10:29 np0005466031 nova_compute[235803]: 2025-10-02 12:10:29.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:29 np0005466031 nova_compute[235803]: 2025-10-02 12:10:29.593 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:10:30 np0005466031 nova_compute[235803]: 2025-10-02 12:10:30.434 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:10:30 np0005466031 nova_compute[235803]: 2025-10-02 12:10:30.577 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-db731c69-9df0-4f81-b108-1fbb5e8a35c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:10:30 np0005466031 nova_compute[235803]: 2025-10-02 12:10:30.577 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 08:10:30 np0005466031 nova_compute[235803]: 2025-10-02 12:10:30.578 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:10:30 np0005466031 nova_compute[235803]: 2025-10-02 12:10:30.578 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:10:30 np0005466031 nova_compute[235803]: 2025-10-02 12:10:30.579 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:10:30 np0005466031 nova_compute[235803]: 2025-10-02 12:10:30.617 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:30 np0005466031 nova_compute[235803]: 2025-10-02 12:10:30.618 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:30 np0005466031 nova_compute[235803]: 2025-10-02 12:10:30.618 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:30 np0005466031 nova_compute[235803]: 2025-10-02 12:10:30.618 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 08:10:30 np0005466031 nova_compute[235803]: 2025-10-02 12:10:30.619 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:10:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:30.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:10:31 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/204562109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:10:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:31.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:31 np0005466031 nova_compute[235803]: 2025-10-02 12:10:31.033 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:10:31 np0005466031 nova_compute[235803]: 2025-10-02 12:10:31.142 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:10:31 np0005466031 nova_compute[235803]: 2025-10-02 12:10:31.142 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:10:31 np0005466031 nova_compute[235803]: 2025-10-02 12:10:31.149 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:10:31 np0005466031 nova_compute[235803]: 2025-10-02 12:10:31.150 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:10:31 np0005466031 nova_compute[235803]: 2025-10-02 12:10:31.370 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:10:31 np0005466031 nova_compute[235803]: 2025-10-02 12:10:31.372 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4664MB free_disk=20.810245513916016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 08:10:31 np0005466031 nova_compute[235803]: 2025-10-02 12:10:31.372 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:31 np0005466031 nova_compute[235803]: 2025-10-02 12:10:31.372 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:31 np0005466031 nova_compute[235803]: 2025-10-02 12:10:31.654 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance db731c69-9df0-4f81-b108-1fbb5e8a35c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 08:10:31 np0005466031 nova_compute[235803]: 2025-10-02 12:10:31.654 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance ab9f1bf2-9381-49d9-a097-5f482594f8fa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 08:10:31 np0005466031 nova_compute[235803]: 2025-10-02 12:10:31.655 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 08:10:31 np0005466031 nova_compute[235803]: 2025-10-02 12:10:31.655 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 08:10:32 np0005466031 nova_compute[235803]: 2025-10-02 12:10:32.044 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:10:32 np0005466031 nova_compute[235803]: 2025-10-02 12:10:32.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:10:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1286005547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:10:32 np0005466031 nova_compute[235803]: 2025-10-02 12:10:32.491 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:10:32 np0005466031 nova_compute[235803]: 2025-10-02 12:10:32.501 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:10:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:32.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:32 np0005466031 nova_compute[235803]: 2025-10-02 12:10:32.666 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:10:32 np0005466031 nova_compute[235803]: 2025-10-02 12:10:32.714 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:10:32 np0005466031 nova_compute[235803]: 2025-10-02 12:10:32.714 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:32 np0005466031 nova_compute[235803]: 2025-10-02 12:10:32.773 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:10:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:33.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:34 np0005466031 nova_compute[235803]: 2025-10-02 12:10:34.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:34.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:34 np0005466031 ovn_controller[132413]: 2025-10-02T12:10:34Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:de:e5 10.1.0.69
Oct  2 08:10:34 np0005466031 ovn_controller[132413]: 2025-10-02T12:10:34Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:de:e5 10.1.0.69
Oct  2 08:10:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:35.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:36.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:37.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:37 np0005466031 nova_compute[235803]: 2025-10-02 12:10:37.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:38.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:10:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:39.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:10:39 np0005466031 nova_compute[235803]: 2025-10-02 12:10:39.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:40.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:40 np0005466031 nova_compute[235803]: 2025-10-02 12:10:40.848 2 DEBUG oslo_concurrency.lockutils [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:40 np0005466031 nova_compute[235803]: 2025-10-02 12:10:40.848 2 DEBUG oslo_concurrency.lockutils [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:40 np0005466031 nova_compute[235803]: 2025-10-02 12:10:40.849 2 DEBUG oslo_concurrency.lockutils [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:40 np0005466031 nova_compute[235803]: 2025-10-02 12:10:40.849 2 DEBUG oslo_concurrency.lockutils [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:40 np0005466031 nova_compute[235803]: 2025-10-02 12:10:40.850 2 DEBUG oslo_concurrency.lockutils [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:40 np0005466031 nova_compute[235803]: 2025-10-02 12:10:40.852 2 INFO nova.compute.manager [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Terminating instance
Oct  2 08:10:40 np0005466031 nova_compute[235803]: 2025-10-02 12:10:40.853 2 DEBUG nova.compute.manager [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:10:40 np0005466031 kernel: tap514acc69-e9 (unregistering): left promiscuous mode
Oct  2 08:10:40 np0005466031 NetworkManager[44907]: <info>  [1759407040.9990] device (tap514acc69-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:10:41 np0005466031 ovn_controller[132413]: 2025-10-02T12:10:41Z|00032|binding|INFO|Releasing lport 514acc69-e9db-45c8-b3f1-6e1efb4ccd91 from this chassis (sb_readonly=0)
Oct  2 08:10:41 np0005466031 ovn_controller[132413]: 2025-10-02T12:10:41Z|00033|binding|INFO|Setting lport 514acc69-e9db-45c8-b3f1-6e1efb4ccd91 down in Southbound
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:41 np0005466031 ovn_controller[132413]: 2025-10-02T12:10:41Z|00034|binding|INFO|Removing iface tap514acc69-e9 ovn-installed in OVS
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.022 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:de:e5 10.1.0.69 fdfe:381f:8400:1::25d'], port_security=['fa:16:3e:09:de:e5 10.1.0.69 fdfe:381f:8400:1::25d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.69/26 fdfe:381f:8400:1::25d/64', 'neutron:device_id': 'ab9f1bf2-9381-49d9-a097-5f482594f8fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa15236c63df4c43bf19989029fcda0f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e3feb76-9212-430e-bcfa-0b85f7aedc4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1382d266-669c-46c5-981d-23fbe67f9508, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=514acc69-e9db-45c8-b3f1-6e1efb4ccd91) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.023 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 514acc69-e9db-45c8-b3f1-6e1efb4ccd91 in datapath b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 unbound from our chassis
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.025 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.026 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e7b3de-b84b-4cd2-b958-e23d993bcd31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.027 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 namespace which is not needed anymore
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:41.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:41 np0005466031 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct  2 08:10:41 np0005466031 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Consumed 13.715s CPU time.
Oct  2 08:10:41 np0005466031 systemd-machined[192227]: Machine qemu-2-instance-00000004 terminated.
Oct  2 08:10:41 np0005466031 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[240039]: [NOTICE]   (240043) : haproxy version is 2.8.14-c23fe91
Oct  2 08:10:41 np0005466031 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[240039]: [NOTICE]   (240043) : path to executable is /usr/sbin/haproxy
Oct  2 08:10:41 np0005466031 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[240039]: [WARNING]  (240043) : Exiting Master process...
Oct  2 08:10:41 np0005466031 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[240039]: [ALERT]    (240043) : Current worker (240045) exited with code 143 (Terminated)
Oct  2 08:10:41 np0005466031 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[240039]: [WARNING]  (240043) : All workers exited. Exiting... (0)
Oct  2 08:10:41 np0005466031 systemd[1]: libpod-acaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359.scope: Deactivated successfully.
Oct  2 08:10:41 np0005466031 podman[240134]: 2025-10-02 12:10:41.195621037 +0000 UTC m=+0.045453122 container died acaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:10:41 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359-userdata-shm.mount: Deactivated successfully.
Oct  2 08:10:41 np0005466031 systemd[1]: var-lib-containers-storage-overlay-0d92bf3c2eb824279efe9ef2490cfb541aacfe9adfbc8c9f25ebbe5a645c6947-merged.mount: Deactivated successfully.
Oct  2 08:10:41 np0005466031 podman[240134]: 2025-10-02 12:10:41.229683 +0000 UTC m=+0.079515065 container cleanup acaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:10:41 np0005466031 kernel: tap514acc69-e9: entered promiscuous mode
Oct  2 08:10:41 np0005466031 systemd[1]: libpod-conmon-acaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359.scope: Deactivated successfully.
Oct  2 08:10:41 np0005466031 kernel: tap514acc69-e9 (unregistering): left promiscuous mode
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.292 2 INFO nova.virt.libvirt.driver [-] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Instance destroyed successfully.#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.293 2 DEBUG nova.objects.instance [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'resources' on Instance uuid ab9f1bf2-9381-49d9-a097-5f482594f8fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.305 2 DEBUG nova.virt.libvirt.vif [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1858146006-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1858146006-3',id=4,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-10-02T12:10:22Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fa15236c63df4c43bf19989029fcda0f',ramdisk_id='',reservation_id='r-b33bb0il',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proj
ect_name='tempest-AutoAllocateNetworkTest-1017519520',owner_user_name='tempest-AutoAllocateNetworkTest-1017519520-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:10:22Z,user_data=None,user_id='b81237ef015d48dfa022b6761d706e36',uuid=ab9f1bf2-9381-49d9-a097-5f482594f8fa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "address": "fa:16:3e:09:de:e5", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::25d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514acc69-e9", "ovs_interfaceid": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.306 2 DEBUG nova.network.os_vif_util [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converting VIF {"id": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "address": "fa:16:3e:09:de:e5", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::25d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap514acc69-e9", "ovs_interfaceid": "514acc69-e9db-45c8-b3f1-6e1efb4ccd91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.307 2 DEBUG nova.network.os_vif_util [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:de:e5,bridge_name='br-int',has_traffic_filtering=True,id=514acc69-e9db-45c8-b3f1-6e1efb4ccd91,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514acc69-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.307 2 DEBUG os_vif [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:de:e5,bridge_name='br-int',has_traffic_filtering=True,id=514acc69-e9db-45c8-b3f1-6e1efb4ccd91,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514acc69-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.311 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514acc69-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.316 2 INFO os_vif [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:de:e5,bridge_name='br-int',has_traffic_filtering=True,id=514acc69-e9db-45c8-b3f1-6e1efb4ccd91,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap514acc69-e9')#033[00m
Oct  2 08:10:41 np0005466031 podman[240163]: 2025-10-02 12:10:41.325562376 +0000 UTC m=+0.047898283 container remove acaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.333 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0c6dae69-83f3-4d86-a64f-13d34060aa70]: (4, ('Thu Oct  2 12:10:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 (acaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359)\nacaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359\nThu Oct  2 12:10:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 (acaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359)\nacaaa90bae605074a3cf5e70a4a22703925ffd2f6765f65eda5177e8a95dd359\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.335 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[60f6a739-ba99-4968-bdc0-fd296a24494f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.336 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4aadb38-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:41 np0005466031 kernel: tapb4aadb38-80: left promiscuous mode
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.352 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[757dff5a-c74b-4108-b1a5-9ce9c8cfbcc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.375 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a09d4408-1164-48a2-ade5-eb0a07df16c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.377 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7136d093-7fc8-4297-a6a1-86501b96651d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.378 2 DEBUG nova.compute.manager [req-78dedde6-35c0-4b4c-8fb5-1e372a28de8c req-09fa1bbe-15d8-414a-80f8-95eadced1f75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Received event network-vif-unplugged-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.379 2 DEBUG oslo_concurrency.lockutils [req-78dedde6-35c0-4b4c-8fb5-1e372a28de8c req-09fa1bbe-15d8-414a-80f8-95eadced1f75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.379 2 DEBUG oslo_concurrency.lockutils [req-78dedde6-35c0-4b4c-8fb5-1e372a28de8c req-09fa1bbe-15d8-414a-80f8-95eadced1f75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.379 2 DEBUG oslo_concurrency.lockutils [req-78dedde6-35c0-4b4c-8fb5-1e372a28de8c req-09fa1bbe-15d8-414a-80f8-95eadced1f75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.379 2 DEBUG nova.compute.manager [req-78dedde6-35c0-4b4c-8fb5-1e372a28de8c req-09fa1bbe-15d8-414a-80f8-95eadced1f75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] No waiting events found dispatching network-vif-unplugged-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:10:41 np0005466031 nova_compute[235803]: 2025-10-02 12:10:41.379 2 DEBUG nova.compute.manager [req-78dedde6-35c0-4b4c-8fb5-1e372a28de8c req-09fa1bbe-15d8-414a-80f8-95eadced1f75 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Received event network-vif-unplugged-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.391 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ba944d23-b9fc-40ee-a9a6-96867880d20f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488036, 'reachable_time': 23367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240208, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:41 np0005466031 systemd[1]: run-netns-ovnmeta\x2db4aadb38\x2d89a4\x2d463f\x2db7b5\x2d8bb4dcce7d32.mount: Deactivated successfully.
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.402 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:10:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:10:41.403 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac16a4e-47b8-4ddf-894e-2ed136d268a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:42 np0005466031 nova_compute[235803]: 2025-10-02 12:10:42.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:42.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:42 np0005466031 podman[240212]: 2025-10-02 12:10:42.65233305 +0000 UTC m=+0.073008627 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:10:42 np0005466031 podman[240213]: 2025-10-02 12:10:42.674919701 +0000 UTC m=+0.091813929 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:10:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:43.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:43 np0005466031 nova_compute[235803]: 2025-10-02 12:10:43.119 2 INFO nova.virt.libvirt.driver [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Deleting instance files /var/lib/nova/instances/ab9f1bf2-9381-49d9-a097-5f482594f8fa_del#033[00m
Oct  2 08:10:43 np0005466031 nova_compute[235803]: 2025-10-02 12:10:43.120 2 INFO nova.virt.libvirt.driver [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Deletion of /var/lib/nova/instances/ab9f1bf2-9381-49d9-a097-5f482594f8fa_del complete#033[00m
Oct  2 08:10:43 np0005466031 nova_compute[235803]: 2025-10-02 12:10:43.203 2 DEBUG nova.virt.libvirt.host [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct  2 08:10:43 np0005466031 nova_compute[235803]: 2025-10-02 12:10:43.203 2 INFO nova.virt.libvirt.host [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] UEFI support detected#033[00m
Oct  2 08:10:43 np0005466031 nova_compute[235803]: 2025-10-02 12:10:43.207 2 INFO nova.compute.manager [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Took 2.35 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:10:43 np0005466031 nova_compute[235803]: 2025-10-02 12:10:43.207 2 DEBUG oslo.service.loopingcall [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:10:43 np0005466031 nova_compute[235803]: 2025-10-02 12:10:43.208 2 DEBUG nova.compute.manager [-] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:10:43 np0005466031 nova_compute[235803]: 2025-10-02 12:10:43.208 2 DEBUG nova.network.neutron [-] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:10:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:44.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:45.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:46 np0005466031 nova_compute[235803]: 2025-10-02 12:10:46.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:10:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:46.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:10:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:47.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:47 np0005466031 nova_compute[235803]: 2025-10-02 12:10:47.090 2 DEBUG nova.compute.manager [req-194baad7-4ba7-4f8f-bfa2-f15cfcc43b81 req-106ab061-2d24-4525-b9a4-a758503eb5d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Received event network-vif-plugged-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:10:47 np0005466031 nova_compute[235803]: 2025-10-02 12:10:47.090 2 DEBUG oslo_concurrency.lockutils [req-194baad7-4ba7-4f8f-bfa2-f15cfcc43b81 req-106ab061-2d24-4525-b9a4-a758503eb5d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:47 np0005466031 nova_compute[235803]: 2025-10-02 12:10:47.090 2 DEBUG oslo_concurrency.lockutils [req-194baad7-4ba7-4f8f-bfa2-f15cfcc43b81 req-106ab061-2d24-4525-b9a4-a758503eb5d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:47 np0005466031 nova_compute[235803]: 2025-10-02 12:10:47.090 2 DEBUG oslo_concurrency.lockutils [req-194baad7-4ba7-4f8f-bfa2-f15cfcc43b81 req-106ab061-2d24-4525-b9a4-a758503eb5d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:47 np0005466031 nova_compute[235803]: 2025-10-02 12:10:47.091 2 DEBUG nova.compute.manager [req-194baad7-4ba7-4f8f-bfa2-f15cfcc43b81 req-106ab061-2d24-4525-b9a4-a758503eb5d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] No waiting events found dispatching network-vif-plugged-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:10:47 np0005466031 nova_compute[235803]: 2025-10-02 12:10:47.091 2 WARNING nova.compute.manager [req-194baad7-4ba7-4f8f-bfa2-f15cfcc43b81 req-106ab061-2d24-4525-b9a4-a758503eb5d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Received unexpected event network-vif-plugged-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 for instance with vm_state active and task_state deleting.
Oct  2 08:10:47 np0005466031 nova_compute[235803]: 2025-10-02 12:10:47.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:48.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:49.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:49 np0005466031 nova_compute[235803]: 2025-10-02 12:10:49.646 2 DEBUG nova.network.neutron [-] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:10:49 np0005466031 podman[240312]: 2025-10-02 12:10:49.679851289 +0000 UTC m=+0.101882349 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:10:49 np0005466031 nova_compute[235803]: 2025-10-02 12:10:49.870 2 INFO nova.compute.manager [-] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Took 6.66 seconds to deallocate network for instance.
Oct  2 08:10:50 np0005466031 nova_compute[235803]: 2025-10-02 12:10:50.104 2 DEBUG oslo_concurrency.lockutils [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:50 np0005466031 nova_compute[235803]: 2025-10-02 12:10:50.104 2 DEBUG oslo_concurrency.lockutils [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:50 np0005466031 nova_compute[235803]: 2025-10-02 12:10:50.197 2 DEBUG oslo_concurrency.processutils [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:10:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:10:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3951825793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:10:50 np0005466031 nova_compute[235803]: 2025-10-02 12:10:50.636 2 DEBUG oslo_concurrency.processutils [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:10:50 np0005466031 nova_compute[235803]: 2025-10-02 12:10:50.645 2 DEBUG nova.compute.provider_tree [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:10:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:50.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:50 np0005466031 nova_compute[235803]: 2025-10-02 12:10:50.788 2 DEBUG nova.compute.manager [req-f6326071-c86f-4172-947f-f1c12922e884 req-23ebce45-affe-444a-9c00-a693265b94bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Received event network-vif-deleted-514acc69-e9db-45c8-b3f1-6e1efb4ccd91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:10:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:51.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:51 np0005466031 nova_compute[235803]: 2025-10-02 12:10:51.065 2 DEBUG nova.scheduler.client.report [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:10:51 np0005466031 nova_compute[235803]: 2025-10-02 12:10:51.231 2 DEBUG oslo_concurrency.lockutils [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:51 np0005466031 nova_compute[235803]: 2025-10-02 12:10:51.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:51 np0005466031 nova_compute[235803]: 2025-10-02 12:10:51.903 2 INFO nova.scheduler.client.report [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Deleted allocations for instance ab9f1bf2-9381-49d9-a097-5f482594f8fa
Oct  2 08:10:52 np0005466031 nova_compute[235803]: 2025-10-02 12:10:52.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:52.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:52 np0005466031 nova_compute[235803]: 2025-10-02 12:10:52.940 2 DEBUG oslo_concurrency.lockutils [None req-7b11147e-83c5-4344-928a-af9c2dec61ea b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "ab9f1bf2-9381-49d9-a097-5f482594f8fa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:53.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:53 np0005466031 podman[240357]: 2025-10-02 12:10:53.642321848 +0000 UTC m=+0.066487499 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:10:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:54.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:55.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:56 np0005466031 nova_compute[235803]: 2025-10-02 12:10:56.291 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407041.2899837, ab9f1bf2-9381-49d9-a097-5f482594f8fa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:10:56 np0005466031 nova_compute[235803]: 2025-10-02 12:10:56.291 2 INFO nova.compute.manager [-] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] VM Stopped (Lifecycle Event)
Oct  2 08:10:56 np0005466031 nova_compute[235803]: 2025-10-02 12:10:56.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:56.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:56 np0005466031 nova_compute[235803]: 2025-10-02 12:10:56.719 2 DEBUG nova.compute.manager [None req-e4747803-ea7a-454f-87a6-ce04c663d1f2 - - - - - -] [instance: ab9f1bf2-9381-49d9-a097-5f482594f8fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:10:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:57.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:57 np0005466031 nova_compute[235803]: 2025-10-02 12:10:57.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:10:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:58.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:10:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:10:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:10:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:59.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:10:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:00.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:01.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:01 np0005466031 nova_compute[235803]: 2025-10-02 12:11:01.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:02 np0005466031 nova_compute[235803]: 2025-10-02 12:11:02.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:02 np0005466031 nova_compute[235803]: 2025-10-02 12:11:02.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:02.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:03.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:04 np0005466031 nova_compute[235803]: 2025-10-02 12:11:04.512 2 DEBUG oslo_concurrency.lockutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "db731c69-9df0-4f81-b108-1fbb5e8a35c7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:11:04 np0005466031 nova_compute[235803]: 2025-10-02 12:11:04.513 2 DEBUG oslo_concurrency.lockutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "db731c69-9df0-4f81-b108-1fbb5e8a35c7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:11:04 np0005466031 nova_compute[235803]: 2025-10-02 12:11:04.513 2 DEBUG oslo_concurrency.lockutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "db731c69-9df0-4f81-b108-1fbb5e8a35c7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:11:04 np0005466031 nova_compute[235803]: 2025-10-02 12:11:04.514 2 DEBUG oslo_concurrency.lockutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "db731c69-9df0-4f81-b108-1fbb5e8a35c7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:11:04 np0005466031 nova_compute[235803]: 2025-10-02 12:11:04.514 2 DEBUG oslo_concurrency.lockutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "db731c69-9df0-4f81-b108-1fbb5e8a35c7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:11:04 np0005466031 nova_compute[235803]: 2025-10-02 12:11:04.515 2 INFO nova.compute.manager [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Terminating instance
Oct  2 08:11:04 np0005466031 nova_compute[235803]: 2025-10-02 12:11:04.516 2 DEBUG oslo_concurrency.lockutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "refresh_cache-db731c69-9df0-4f81-b108-1fbb5e8a35c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:11:04 np0005466031 nova_compute[235803]: 2025-10-02 12:11:04.516 2 DEBUG oslo_concurrency.lockutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquired lock "refresh_cache-db731c69-9df0-4f81-b108-1fbb5e8a35c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:11:04 np0005466031 nova_compute[235803]: 2025-10-02 12:11:04.516 2 DEBUG nova.network.neutron [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:11:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:04.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:04 np0005466031 nova_compute[235803]: 2025-10-02 12:11:04.757 2 DEBUG nova.network.neutron [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:11:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:05.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:05 np0005466031 nova_compute[235803]: 2025-10-02 12:11:05.371 2 DEBUG nova.network.neutron [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:11:05 np0005466031 nova_compute[235803]: 2025-10-02 12:11:05.388 2 DEBUG oslo_concurrency.lockutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Releasing lock "refresh_cache-db731c69-9df0-4f81-b108-1fbb5e8a35c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:11:05 np0005466031 nova_compute[235803]: 2025-10-02 12:11:05.389 2 DEBUG nova.compute.manager [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:11:06 np0005466031 nova_compute[235803]: 2025-10-02 12:11:06.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:06 np0005466031 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct  2 08:11:06 np0005466031 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 16.464s CPU time.
Oct  2 08:11:06 np0005466031 systemd-machined[192227]: Machine qemu-1-instance-00000001 terminated.
Oct  2 08:11:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:06.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.763492) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407066763533, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 904, "num_deletes": 255, "total_data_size": 1648780, "memory_usage": 1666816, "flush_reason": "Manual Compaction"}
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407066772921, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1077109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21891, "largest_seqno": 22790, "table_properties": {"data_size": 1072939, "index_size": 1822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9373, "raw_average_key_size": 18, "raw_value_size": 1064325, "raw_average_value_size": 2150, "num_data_blocks": 80, "num_entries": 495, "num_filter_entries": 495, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407011, "oldest_key_time": 1759407011, "file_creation_time": 1759407066, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 9470 microseconds, and 3297 cpu microseconds.
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.772963) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1077109 bytes OK
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.772984) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.774238) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.774250) EVENT_LOG_v1 {"time_micros": 1759407066774245, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.774270) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1644129, prev total WAL file size 1644129, number of live WAL files 2.
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.774799) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1051KB)], [42(8173KB)]
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407066774836, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 9446896, "oldest_snapshot_seqno": -1}
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 4640 keys, 9302786 bytes, temperature: kUnknown
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407066811653, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 9302786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9270368, "index_size": 19700, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 116681, "raw_average_key_size": 25, "raw_value_size": 9184943, "raw_average_value_size": 1979, "num_data_blocks": 814, "num_entries": 4640, "num_filter_entries": 4640, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759407066, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.812532) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 9302786 bytes
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.813826) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 252.2 rd, 248.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.0 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(17.4) write-amplify(8.6) OK, records in: 5169, records dropped: 529 output_compression: NoCompression
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.813846) EVENT_LOG_v1 {"time_micros": 1759407066813834, "job": 24, "event": "compaction_finished", "compaction_time_micros": 37460, "compaction_time_cpu_micros": 18938, "output_level": 6, "num_output_files": 1, "total_output_size": 9302786, "num_input_records": 5169, "num_output_records": 4640, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407066814119, "job": 24, "event": "table_file_deletion", "file_number": 44}
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407066815614, "job": 24, "event": "table_file_deletion", "file_number": 42}
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.774708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.815694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.815701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.815703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.815705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:06.815707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:06 np0005466031 nova_compute[235803]: 2025-10-02 12:11:06.823 2 INFO nova.virt.libvirt.driver [-] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Instance destroyed successfully.#033[00m
Oct  2 08:11:06 np0005466031 nova_compute[235803]: 2025-10-02 12:11:06.824 2 DEBUG nova.objects.instance [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'resources' on Instance uuid db731c69-9df0-4f81-b108-1fbb5e8a35c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:11:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:07.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:07 np0005466031 nova_compute[235803]: 2025-10-02 12:11:07.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e140 e140: 3 total, 3 up, 3 in
Oct  2 08:11:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:08.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.766342) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068766434, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 293, "num_deletes": 251, "total_data_size": 94819, "memory_usage": 101112, "flush_reason": "Manual Compaction"}
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068769459, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 62040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22795, "largest_seqno": 23083, "table_properties": {"data_size": 60111, "index_size": 157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5086, "raw_average_key_size": 18, "raw_value_size": 56228, "raw_average_value_size": 204, "num_data_blocks": 7, "num_entries": 275, "num_filter_entries": 275, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407067, "oldest_key_time": 1759407067, "file_creation_time": 1759407068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 3142 microseconds, and 1280 cpu microseconds.
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.769516) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 62040 bytes OK
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.769570) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.771313) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.771335) EVENT_LOG_v1 {"time_micros": 1759407068771328, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.771357) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 92661, prev total WAL file size 92661, number of live WAL files 2.
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.771988) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(60KB)], [45(9084KB)]
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068772116, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 9364826, "oldest_snapshot_seqno": -1}
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 4401 keys, 7342943 bytes, temperature: kUnknown
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068803131, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 7342943, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7313837, "index_size": 17028, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 112405, "raw_average_key_size": 25, "raw_value_size": 7234201, "raw_average_value_size": 1643, "num_data_blocks": 693, "num_entries": 4401, "num_filter_entries": 4401, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759407068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.803405) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7342943 bytes
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.804733) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 301.4 rd, 236.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.9 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(269.3) write-amplify(118.4) OK, records in: 4915, records dropped: 514 output_compression: NoCompression
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.804752) EVENT_LOG_v1 {"time_micros": 1759407068804742, "job": 26, "event": "compaction_finished", "compaction_time_micros": 31069, "compaction_time_cpu_micros": 15728, "output_level": 6, "num_output_files": 1, "total_output_size": 7342943, "num_input_records": 4915, "num_output_records": 4401, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068804899, "job": 26, "event": "table_file_deletion", "file_number": 47}
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068806350, "job": 26, "event": "table_file_deletion", "file_number": 45}
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.771813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.806498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.806518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.806523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.806527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:11:08.806532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:09.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:10.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:11:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:11.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:11:11 np0005466031 nova_compute[235803]: 2025-10-02 12:11:11.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:11 np0005466031 nova_compute[235803]: 2025-10-02 12:11:11.561 2 INFO nova.virt.libvirt.driver [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Deleting instance files /var/lib/nova/instances/db731c69-9df0-4f81-b108-1fbb5e8a35c7_del#033[00m
Oct  2 08:11:11 np0005466031 nova_compute[235803]: 2025-10-02 12:11:11.562 2 INFO nova.virt.libvirt.driver [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Deletion of /var/lib/nova/instances/db731c69-9df0-4f81-b108-1fbb5e8a35c7_del complete#033[00m
Oct  2 08:11:11 np0005466031 nova_compute[235803]: 2025-10-02 12:11:11.618 2 INFO nova.compute.manager [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Took 6.23 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:11:11 np0005466031 nova_compute[235803]: 2025-10-02 12:11:11.618 2 DEBUG oslo.service.loopingcall [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:11:11 np0005466031 nova_compute[235803]: 2025-10-02 12:11:11.619 2 DEBUG nova.compute.manager [-] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:11:11 np0005466031 nova_compute[235803]: 2025-10-02 12:11:11.619 2 DEBUG nova.network.neutron [-] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:11:12 np0005466031 nova_compute[235803]: 2025-10-02 12:11:12.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e141 e141: 3 total, 3 up, 3 in
Oct  2 08:11:12 np0005466031 nova_compute[235803]: 2025-10-02 12:11:12.373 2 DEBUG nova.network.neutron [-] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:11:12 np0005466031 nova_compute[235803]: 2025-10-02 12:11:12.386 2 DEBUG nova.network.neutron [-] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:11:12 np0005466031 nova_compute[235803]: 2025-10-02 12:11:12.404 2 INFO nova.compute.manager [-] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Took 0.79 seconds to deallocate network for instance.#033[00m
Oct  2 08:11:12 np0005466031 nova_compute[235803]: 2025-10-02 12:11:12.447 2 DEBUG oslo_concurrency.lockutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:12 np0005466031 nova_compute[235803]: 2025-10-02 12:11:12.447 2 DEBUG oslo_concurrency.lockutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:12 np0005466031 nova_compute[235803]: 2025-10-02 12:11:12.513 2 DEBUG oslo_concurrency.processutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:12.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2090179139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:12 np0005466031 nova_compute[235803]: 2025-10-02 12:11:12.938 2 DEBUG oslo_concurrency.processutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:12 np0005466031 nova_compute[235803]: 2025-10-02 12:11:12.943 2 DEBUG nova.compute.provider_tree [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:11:12 np0005466031 nova_compute[235803]: 2025-10-02 12:11:12.962 2 DEBUG nova.scheduler.client.report [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:11:12 np0005466031 nova_compute[235803]: 2025-10-02 12:11:12.992 2 DEBUG oslo_concurrency.lockutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:13 np0005466031 nova_compute[235803]: 2025-10-02 12:11:13.017 2 INFO nova.scheduler.client.report [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Deleted allocations for instance db731c69-9df0-4f81-b108-1fbb5e8a35c7#033[00m
Oct  2 08:11:13 np0005466031 nova_compute[235803]: 2025-10-02 12:11:13.083 2 DEBUG oslo_concurrency.lockutils [None req-de7129c6-c78b-4e78-a84f-50c2a0249766 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "db731c69-9df0-4f81-b108-1fbb5e8a35c7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:13.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:11:13 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1377094611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:11:13 np0005466031 podman[240481]: 2025-10-02 12:11:13.621464025 +0000 UTC m=+0.055128371 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:11:13 np0005466031 podman[240482]: 2025-10-02 12:11:13.712678386 +0000 UTC m=+0.145078996 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:11:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:14.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:15.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:16 np0005466031 nova_compute[235803]: 2025-10-02 12:11:16.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:16.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:17.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:17 np0005466031 nova_compute[235803]: 2025-10-02 12:11:17.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:18.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:19.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:19 np0005466031 nova_compute[235803]: 2025-10-02 12:11:19.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:11:19.713 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:11:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:11:19.715 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:11:20 np0005466031 podman[240528]: 2025-10-02 12:11:20.658452908 +0000 UTC m=+0.084450837 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:11:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:11:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:20.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:11:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:11:20.717 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:11:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:21.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:21 np0005466031 nova_compute[235803]: 2025-10-02 12:11:21.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 e142: 3 total, 3 up, 3 in
Oct  2 08:11:21 np0005466031 nova_compute[235803]: 2025-10-02 12:11:21.821 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407066.8197725, db731c69-9df0-4f81-b108-1fbb5e8a35c7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:11:21 np0005466031 nova_compute[235803]: 2025-10-02 12:11:21.822 2 INFO nova.compute.manager [-] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:11:21 np0005466031 nova_compute[235803]: 2025-10-02 12:11:21.852 2 DEBUG nova.compute.manager [None req-1703341c-f893-4ada-b2de-f035ef0aca15 - - - - - -] [instance: db731c69-9df0-4f81-b108-1fbb5e8a35c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:22 np0005466031 nova_compute[235803]: 2025-10-02 12:11:22.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:23.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:11:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:11:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:11:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:24 np0005466031 podman[240681]: 2025-10-02 12:11:24.626790545 +0000 UTC m=+0.061671760 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:11:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:24.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:25.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:11:25.817 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:11:25.818 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:11:25.818 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:26 np0005466031 nova_compute[235803]: 2025-10-02 12:11:26.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:26 np0005466031 nova_compute[235803]: 2025-10-02 12:11:26.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:26 np0005466031 nova_compute[235803]: 2025-10-02 12:11:26.835 2 DEBUG oslo_concurrency.lockutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "9895a511-7390-44bc-86eb-1cb1b4e0dda6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:26 np0005466031 nova_compute[235803]: 2025-10-02 12:11:26.836 2 DEBUG oslo_concurrency.lockutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "9895a511-7390-44bc-86eb-1cb1b4e0dda6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:26 np0005466031 nova_compute[235803]: 2025-10-02 12:11:26.859 2 DEBUG nova.compute.manager [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:11:26 np0005466031 nova_compute[235803]: 2025-10-02 12:11:26.947 2 DEBUG oslo_concurrency.lockutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:26 np0005466031 nova_compute[235803]: 2025-10-02 12:11:26.948 2 DEBUG oslo_concurrency.lockutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:26 np0005466031 nova_compute[235803]: 2025-10-02 12:11:26.954 2 DEBUG nova.virt.hardware [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:11:26 np0005466031 nova_compute[235803]: 2025-10-02 12:11:26.954 2 INFO nova.compute.claims [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.057 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:27.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1743318663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.491 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.500 2 DEBUG nova.compute.provider_tree [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.524 2 DEBUG nova.scheduler.client.report [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.560 2 DEBUG oslo_concurrency.lockutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.561 2 DEBUG nova.compute.manager [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.627 2 DEBUG nova.compute.manager [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.628 2 DEBUG nova.network.neutron [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.633 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.647 2 INFO nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.664 2 DEBUG nova.compute.manager [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.748 2 DEBUG nova.compute.manager [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.749 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.749 2 INFO nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Creating image(s)#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.780 2 DEBUG nova.storage.rbd_utils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image 9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.822 2 DEBUG nova.storage.rbd_utils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image 9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.853 2 DEBUG nova.storage.rbd_utils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image 9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.857 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.928 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.930 2 DEBUG oslo_concurrency.lockutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.930 2 DEBUG oslo_concurrency.lockutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.931 2 DEBUG oslo_concurrency.lockutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.961 2 DEBUG nova.storage.rbd_utils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image 9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:27 np0005466031 nova_compute[235803]: 2025-10-02 12:11:27.966 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.032 2 DEBUG nova.network.neutron [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.033 2 DEBUG nova.compute.manager [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.387 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.472 2 DEBUG nova.storage.rbd_utils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] resizing rbd image 9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.563 2 DEBUG nova.objects.instance [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lazy-loading 'migration_context' on Instance uuid 9895a511-7390-44bc-86eb-1cb1b4e0dda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.577 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.578 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Ensure instance console log exists: /var/lib/nova/instances/9895a511-7390-44bc-86eb-1cb1b4e0dda6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.578 2 DEBUG oslo_concurrency.lockutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.578 2 DEBUG oslo_concurrency.lockutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.579 2 DEBUG oslo_concurrency.lockutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.580 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.583 2 WARNING nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.588 2 DEBUG nova.virt.libvirt.host [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.588 2 DEBUG nova.virt.libvirt.host [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.592 2 DEBUG nova.virt.libvirt.host [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.592 2 DEBUG nova.virt.libvirt.host [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.593 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.593 2 DEBUG nova.virt.hardware [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.594 2 DEBUG nova.virt.hardware [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.594 2 DEBUG nova.virt.hardware [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.594 2 DEBUG nova.virt.hardware [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.594 2 DEBUG nova.virt.hardware [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.594 2 DEBUG nova.virt.hardware [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.595 2 DEBUG nova.virt.hardware [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.595 2 DEBUG nova.virt.hardware [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.595 2 DEBUG nova.virt.hardware [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.595 2 DEBUG nova.virt.hardware [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.595 2 DEBUG nova.virt.hardware [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:11:28 np0005466031 nova_compute[235803]: 2025-10-02 12:11:28.598 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:28.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:11:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2720756083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.013 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.051 2 DEBUG nova.storage.rbd_utils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image 9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.055 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:29.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:11:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3626303909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.491 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.493 2 DEBUG nova.objects.instance [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9895a511-7390-44bc-86eb-1cb1b4e0dda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.538 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  <uuid>9895a511-7390-44bc-86eb-1cb1b4e0dda6</uuid>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  <name>instance-00000008</name>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1706548958</nova:name>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:11:28</nova:creationTime>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <nova:user uuid="2cdfea5c8e074c59b963b1fba6b35e1f">tempest-DeleteServersAdminTestJSON-98667439-project-member</nova:user>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <nova:project uuid="14444ba992464a08be0b7dc7a5dd00c2">tempest-DeleteServersAdminTestJSON-98667439</nova:project>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <nova:ports/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <entry name="serial">9895a511-7390-44bc-86eb-1cb1b4e0dda6</entry>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <entry name="uuid">9895a511-7390-44bc-86eb-1cb1b4e0dda6</entry>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk.config">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/9895a511-7390-44bc-86eb-1cb1b4e0dda6/console.log" append="off"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:11:29 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:11:29 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:11:29 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:11:29 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:11:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.595 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.595 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.596 2 INFO nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Using config drive#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.628 2 DEBUG nova.storage.rbd_utils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image 9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.633 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.664 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.664 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.715 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.716 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.716 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.716 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.742 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.743 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.743 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.743 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.743 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.835 2 INFO nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Creating config drive at /var/lib/nova/instances/9895a511-7390-44bc-86eb-1cb1b4e0dda6/disk.config#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.840 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9895a511-7390-44bc-86eb-1cb1b4e0dda6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4i3s_mv8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:29 np0005466031 nova_compute[235803]: 2025-10-02 12:11:29.965 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9895a511-7390-44bc-86eb-1cb1b4e0dda6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4i3s_mv8" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.004 2 DEBUG nova.storage.rbd_utils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image 9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.009 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9895a511-7390-44bc-86eb-1cb1b4e0dda6/disk.config 9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/955745629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.168 2 DEBUG oslo_concurrency.processutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9895a511-7390-44bc-86eb-1cb1b4e0dda6/disk.config 9895a511-7390-44bc-86eb-1cb1b4e0dda6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.169 2 INFO nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Deleting local config drive /var/lib/nova/instances/9895a511-7390-44bc-86eb-1cb1b4e0dda6/disk.config because it was imported into RBD.#033[00m
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.171 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:30 np0005466031 systemd-machined[192227]: New machine qemu-3-instance-00000008.
Oct  2 08:11:30 np0005466031 systemd[1]: Started Virtual Machine qemu-3-instance-00000008.
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.470 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.472 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.633 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.634 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4959MB free_disk=20.988277435302734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.635 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.635 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:30.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.869 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 9895a511-7390-44bc-86eb-1cb1b4e0dda6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.869 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.869 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:11:30 np0005466031 nova_compute[235803]: 2025-10-02 12:11:30.945 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.067 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407091.0670106, 9895a511-7390-44bc-86eb-1cb1b4e0dda6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.068 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.071 2 DEBUG nova.compute.manager [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.072 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.075 2 INFO nova.virt.libvirt.driver [-] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Instance spawned successfully.#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.075 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:11:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:31.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.113 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.117 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.118 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.118 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.119 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.119 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.120 2 DEBUG nova.virt.libvirt.driver [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.125 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.202 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.202 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407091.072879, 9895a511-7390-44bc-86eb-1cb1b4e0dda6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.202 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] VM Started (Lifecycle Event)#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.323 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.327 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:31 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2081085421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.378 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.381 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.383 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.421 2 INFO nova.compute.manager [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Took 3.67 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.421 2 DEBUG nova.compute.manager [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.512 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.637 2 INFO nova.compute.manager [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Took 4.72 seconds to build instance.#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.763 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.764 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:31 np0005466031 nova_compute[235803]: 2025-10-02 12:11:31.792 2 DEBUG oslo_concurrency.lockutils [None req-a726b02e-9a45-41fc-b761-ac28ad43f91b 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "9895a511-7390-44bc-86eb-1cb1b4e0dda6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:11:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:11:32 np0005466031 nova_compute[235803]: 2025-10-02 12:11:32.090 2 DEBUG oslo_concurrency.lockutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Acquiring lock "9895a511-7390-44bc-86eb-1cb1b4e0dda6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:32 np0005466031 nova_compute[235803]: 2025-10-02 12:11:32.091 2 DEBUG oslo_concurrency.lockutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Lock "9895a511-7390-44bc-86eb-1cb1b4e0dda6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:32 np0005466031 nova_compute[235803]: 2025-10-02 12:11:32.091 2 DEBUG oslo_concurrency.lockutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Acquiring lock "9895a511-7390-44bc-86eb-1cb1b4e0dda6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:32 np0005466031 nova_compute[235803]: 2025-10-02 12:11:32.092 2 DEBUG oslo_concurrency.lockutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Lock "9895a511-7390-44bc-86eb-1cb1b4e0dda6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:32 np0005466031 nova_compute[235803]: 2025-10-02 12:11:32.092 2 DEBUG oslo_concurrency.lockutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Lock "9895a511-7390-44bc-86eb-1cb1b4e0dda6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:32 np0005466031 nova_compute[235803]: 2025-10-02 12:11:32.095 2 INFO nova.compute.manager [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Terminating instance#033[00m
Oct  2 08:11:32 np0005466031 nova_compute[235803]: 2025-10-02 12:11:32.096 2 DEBUG oslo_concurrency.lockutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Acquiring lock "refresh_cache-9895a511-7390-44bc-86eb-1cb1b4e0dda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:11:32 np0005466031 nova_compute[235803]: 2025-10-02 12:11:32.097 2 DEBUG oslo_concurrency.lockutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Acquired lock "refresh_cache-9895a511-7390-44bc-86eb-1cb1b4e0dda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:11:32 np0005466031 nova_compute[235803]: 2025-10-02 12:11:32.097 2 DEBUG nova.network.neutron [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:11:32 np0005466031 nova_compute[235803]: 2025-10-02 12:11:32.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:32 np0005466031 nova_compute[235803]: 2025-10-02 12:11:32.683 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:32 np0005466031 nova_compute[235803]: 2025-10-02 12:11:32.684 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:32 np0005466031 nova_compute[235803]: 2025-10-02 12:11:32.685 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:32.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:33.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:33 np0005466031 nova_compute[235803]: 2025-10-02 12:11:33.121 2 DEBUG nova.network.neutron [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:11:33 np0005466031 nova_compute[235803]: 2025-10-02 12:11:33.699 2 DEBUG nova.network.neutron [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:11:33 np0005466031 nova_compute[235803]: 2025-10-02 12:11:33.766 2 DEBUG oslo_concurrency.lockutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Releasing lock "refresh_cache-9895a511-7390-44bc-86eb-1cb1b4e0dda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:11:33 np0005466031 nova_compute[235803]: 2025-10-02 12:11:33.767 2 DEBUG nova.compute.manager [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:11:33 np0005466031 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct  2 08:11:33 np0005466031 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Consumed 3.566s CPU time.
Oct  2 08:11:33 np0005466031 systemd-machined[192227]: Machine qemu-3-instance-00000008 terminated.
Oct  2 08:11:33 np0005466031 nova_compute[235803]: 2025-10-02 12:11:33.990 2 INFO nova.virt.libvirt.driver [-] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Instance destroyed successfully.#033[00m
Oct  2 08:11:33 np0005466031 nova_compute[235803]: 2025-10-02 12:11:33.991 2 DEBUG nova.objects.instance [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Lazy-loading 'resources' on Instance uuid 9895a511-7390-44bc-86eb-1cb1b4e0dda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:11:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:34.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:34 np0005466031 nova_compute[235803]: 2025-10-02 12:11:34.774 2 INFO nova.virt.libvirt.driver [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Deleting instance files /var/lib/nova/instances/9895a511-7390-44bc-86eb-1cb1b4e0dda6_del#033[00m
Oct  2 08:11:34 np0005466031 nova_compute[235803]: 2025-10-02 12:11:34.775 2 INFO nova.virt.libvirt.driver [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Deletion of /var/lib/nova/instances/9895a511-7390-44bc-86eb-1cb1b4e0dda6_del complete#033[00m
Oct  2 08:11:35 np0005466031 nova_compute[235803]: 2025-10-02 12:11:35.038 2 INFO nova.compute.manager [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Took 1.27 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:11:35 np0005466031 nova_compute[235803]: 2025-10-02 12:11:35.038 2 DEBUG oslo.service.loopingcall [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:11:35 np0005466031 nova_compute[235803]: 2025-10-02 12:11:35.039 2 DEBUG nova.compute.manager [-] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:11:35 np0005466031 nova_compute[235803]: 2025-10-02 12:11:35.039 2 DEBUG nova.network.neutron [-] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:11:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:11:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:35.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:11:35 np0005466031 nova_compute[235803]: 2025-10-02 12:11:35.429 2 DEBUG nova.network.neutron [-] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:11:35 np0005466031 nova_compute[235803]: 2025-10-02 12:11:35.625 2 DEBUG nova.network.neutron [-] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:11:35 np0005466031 nova_compute[235803]: 2025-10-02 12:11:35.847 2 INFO nova.compute.manager [-] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Took 0.81 seconds to deallocate network for instance.#033[00m
Oct  2 08:11:36 np0005466031 nova_compute[235803]: 2025-10-02 12:11:36.051 2 DEBUG oslo_concurrency.lockutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:36 np0005466031 nova_compute[235803]: 2025-10-02 12:11:36.051 2 DEBUG oslo_concurrency.lockutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:36 np0005466031 nova_compute[235803]: 2025-10-02 12:11:36.113 2 DEBUG oslo_concurrency.processutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:36 np0005466031 nova_compute[235803]: 2025-10-02 12:11:36.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:36 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1510747912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:36 np0005466031 nova_compute[235803]: 2025-10-02 12:11:36.601 2 DEBUG oslo_concurrency.processutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:36 np0005466031 nova_compute[235803]: 2025-10-02 12:11:36.607 2 DEBUG nova.compute.provider_tree [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:11:36 np0005466031 nova_compute[235803]: 2025-10-02 12:11:36.700 2 DEBUG nova.scheduler.client.report [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:11:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:36.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:36 np0005466031 nova_compute[235803]: 2025-10-02 12:11:36.837 2 DEBUG oslo_concurrency.lockutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:36 np0005466031 nova_compute[235803]: 2025-10-02 12:11:36.959 2 INFO nova.scheduler.client.report [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Deleted allocations for instance 9895a511-7390-44bc-86eb-1cb1b4e0dda6#033[00m
Oct  2 08:11:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:37.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:37 np0005466031 nova_compute[235803]: 2025-10-02 12:11:37.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:37 np0005466031 nova_compute[235803]: 2025-10-02 12:11:37.302 2 DEBUG oslo_concurrency.lockutils [None req-b84835b2-025f-415c-81fc-563b8fe6e25a ce4af777c4dc49f7af0174a5a204b76c e76709d3befb4e8a8ffdf0b34af05d5d - - default default] Lock "9895a511-7390-44bc-86eb-1cb1b4e0dda6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:38.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:39.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:40.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:41.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:41 np0005466031 nova_compute[235803]: 2025-10-02 12:11:41.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:42 np0005466031 nova_compute[235803]: 2025-10-02 12:11:42.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:42.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:43.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:44 np0005466031 podman[241267]: 2025-10-02 12:11:44.662691798 +0000 UTC m=+0.086589249 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:11:44 np0005466031 podman[241268]: 2025-10-02 12:11:44.696507194 +0000 UTC m=+0.117355837 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct  2 08:11:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:44.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:45.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:46 np0005466031 nova_compute[235803]: 2025-10-02 12:11:46.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:46.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:47.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:47 np0005466031 nova_compute[235803]: 2025-10-02 12:11:47.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:48.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:48 np0005466031 nova_compute[235803]: 2025-10-02 12:11:48.989 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407093.9859278, 9895a511-7390-44bc-86eb-1cb1b4e0dda6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:11:48 np0005466031 nova_compute[235803]: 2025-10-02 12:11:48.989 2 INFO nova.compute.manager [-] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:11:49 np0005466031 nova_compute[235803]: 2025-10-02 12:11:49.020 2 DEBUG nova.compute.manager [None req-299cde40-46a0-4b7c-bb37-6430a8dc4a11 - - - - - -] [instance: 9895a511-7390-44bc-86eb-1cb1b4e0dda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:49.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:50.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:51.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:51 np0005466031 nova_compute[235803]: 2025-10-02 12:11:51.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:51 np0005466031 podman[241363]: 2025-10-02 12:11:51.620698552 +0000 UTC m=+0.051077384 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:11:52 np0005466031 nova_compute[235803]: 2025-10-02 12:11:52.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:52.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:53.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:53 np0005466031 ovn_controller[132413]: 2025-10-02T12:11:53Z|00035|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  2 08:11:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:54.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:11:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:55.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:11:55 np0005466031 podman[241386]: 2025-10-02 12:11:55.66283653 +0000 UTC m=+0.087905477 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:11:56 np0005466031 nova_compute[235803]: 2025-10-02 12:11:56.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:56.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:57.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:57 np0005466031 nova_compute[235803]: 2025-10-02 12:11:57.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:11:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:58.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:11:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:11:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:59.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:00.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:01.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:01 np0005466031 nova_compute[235803]: 2025-10-02 12:12:01.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:02 np0005466031 nova_compute[235803]: 2025-10-02 12:12:02.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:02.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:12:02.910 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:12:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:12:02.911 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:12:02 np0005466031 nova_compute[235803]: 2025-10-02 12:12:02.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:03.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:12:03.914 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:04.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:05.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:06 np0005466031 nova_compute[235803]: 2025-10-02 12:12:06.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:06.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:07.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:07 np0005466031 nova_compute[235803]: 2025-10-02 12:12:07.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:08.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:09.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:10.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:11.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:11 np0005466031 nova_compute[235803]: 2025-10-02 12:12:11.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:12 np0005466031 nova_compute[235803]: 2025-10-02 12:12:12.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:12.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:13.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:12:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:14.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:12:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:15.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:15 np0005466031 podman[241467]: 2025-10-02 12:12:15.649884513 +0000 UTC m=+0.070895326 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:12:15 np0005466031 podman[241468]: 2025-10-02 12:12:15.706650651 +0000 UTC m=+0.122258568 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:12:16 np0005466031 nova_compute[235803]: 2025-10-02 12:12:16.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:12:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:16.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:12:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:17.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:17 np0005466031 nova_compute[235803]: 2025-10-02 12:12:17.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:18.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:19.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:20.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:21.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:21 np0005466031 nova_compute[235803]: 2025-10-02 12:12:21.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:22 np0005466031 nova_compute[235803]: 2025-10-02 12:12:22.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:22 np0005466031 podman[241514]: 2025-10-02 12:12:22.623723055 +0000 UTC m=+0.057804808 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:12:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:22.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:23.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:24.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:25.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:12:25.818 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:12:25.818 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:12:25.819 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:26 np0005466031 nova_compute[235803]: 2025-10-02 12:12:26.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:26 np0005466031 podman[241586]: 2025-10-02 12:12:26.622457319 +0000 UTC m=+0.049587091 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct  2 08:12:26 np0005466031 nova_compute[235803]: 2025-10-02 12:12:26.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:26.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:27 np0005466031 nova_compute[235803]: 2025-10-02 12:12:27.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:27.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:28 np0005466031 nova_compute[235803]: 2025-10-02 12:12:28.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:28.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:29.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:29 np0005466031 nova_compute[235803]: 2025-10-02 12:12:29.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:30 np0005466031 nova_compute[235803]: 2025-10-02 12:12:30.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:30 np0005466031 nova_compute[235803]: 2025-10-02 12:12:30.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:12:30 np0005466031 nova_compute[235803]: 2025-10-02 12:12:30.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:12:30 np0005466031 nova_compute[235803]: 2025-10-02 12:12:30.660 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:12:30 np0005466031 nova_compute[235803]: 2025-10-02 12:12:30.660 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:30 np0005466031 nova_compute[235803]: 2025-10-02 12:12:30.697 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:30 np0005466031 nova_compute[235803]: 2025-10-02 12:12:30.698 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:30 np0005466031 nova_compute[235803]: 2025-10-02 12:12:30.698 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:30 np0005466031 nova_compute[235803]: 2025-10-02 12:12:30.698 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:12:30 np0005466031 nova_compute[235803]: 2025-10-02 12:12:30.698 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:12:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:30.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:12:31 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2165228842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.125 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
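The resource-tracker audit above shells out to `ceph df --format=json` and parses the result to size the RBD-backed storage pool. A minimal stdlib sketch of that parsing step follows; the sample JSON and the `parse_ceph_df` helper are illustrative (real `ceph df` output carries many more fields per pool):

```python
import json

# Hypothetical, abridged sample of `ceph df --format=json` output;
# not taken from this log.
sample = json.dumps({
    "stats": {"total_bytes": 64424509440, "total_avail_bytes": 42949672960},
    "pools": [{"name": "vms", "stats": {"bytes_used": 1073741824}}],
})

def parse_ceph_df(raw: str) -> dict:
    """Extract the cluster totals a compute node cares about from `ceph df` JSON."""
    data = json.loads(raw)
    stats = data["stats"]
    return {
        "total_gb": stats["total_bytes"] / 1024**3,
        "avail_gb": stats["total_avail_bytes"] / 1024**3,
        "pools": {p["name"]: p["stats"]["bytes_used"] for p in data["pools"]},
    }

summary = parse_ceph_df(sample)
print(summary)
```

In the real service the raw string comes from a subprocess call (visible in the `oslo_concurrency.processutils` lines), with the monitor round-trip accounting for the ~0.4s runtime logged above.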
Oct  2 08:12:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:31.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.283 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.285 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5001MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.285 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.285 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.374 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.374 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.406 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:12:31 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1978288928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.827 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.831 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.860 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.887 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:12:31 np0005466031 nova_compute[235803]: 2025-10-02 12:12:31.888 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:32 np0005466031 nova_compute[235803]: 2025-10-02 12:12:32.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:32.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:32 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:12:32 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:12:32 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:12:32 np0005466031 nova_compute[235803]: 2025-10-02 12:12:32.865 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:32 np0005466031 nova_compute[235803]: 2025-10-02 12:12:32.865 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:32 np0005466031 nova_compute[235803]: 2025-10-02 12:12:32.865 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:32 np0005466031 nova_compute[235803]: 2025-10-02 12:12:32.865 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:32 np0005466031 nova_compute[235803]: 2025-10-02 12:12:32.866 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
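The burst of `Running periodic task` lines, followed by `reclaim_instance_interval <= 0, skipping...`, shows the dispatch rule: each registered task runs on its configured interval, and a non-positive interval disables the task outright. A toy sketch of that skip logic (task and config names here are illustrative, not Nova's actual registry):

```python
# Tasks paired with the config option that gates them; interval <= 0 disables.
def run_periodic_tasks(tasks, config):
    ran, skipped = [], []
    for name, interval_key in tasks:
        if config.get(interval_key, 0) <= 0:
            skipped.append(name)  # e.g. "reclaim_instance_interval <= 0, skipping..."
            continue
        ran.append(name)  # a real manager would invoke the task here
    return ran, skipped

tasks = [
    ("_poll_rescued_instances", "poll_rescue_timeout"),
    ("_reclaim_queued_deletes", "reclaim_instance_interval"),
]
config = {"poll_rescue_timeout": 60, "reclaim_instance_interval": 0}
print(run_periodic_tasks(tasks, config))
```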
Oct  2 08:12:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:33.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:34.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:35.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:36 np0005466031 nova_compute[235803]: 2025-10-02 12:12:36.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:36.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:37 np0005466031 nova_compute[235803]: 2025-10-02 12:12:37.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:37.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:12:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:38.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:12:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:39.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:40.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:41.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:41 np0005466031 nova_compute[235803]: 2025-10-02 12:12:41.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:42 np0005466031 nova_compute[235803]: 2025-10-02 12:12:42.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:12:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:12:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:42.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:43.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:43 np0005466031 nova_compute[235803]: 2025-10-02 12:12:43.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:43 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:12:43.366 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:12:43 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:12:43.370 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:12:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:44.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:45.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:45 np0005466031 podman[241889]: 2025-10-02 12:12:45.842520018 +0000 UTC m=+0.097864605 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:12:45 np0005466031 podman[241890]: 2025-10-02 12:12:45.842476166 +0000 UTC m=+0.100768758 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:12:46 np0005466031 nova_compute[235803]: 2025-10-02 12:12:46.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:46.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:47 np0005466031 nova_compute[235803]: 2025-10-02 12:12:47.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:47.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:12:47.371 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:48.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:49.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:50.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:51.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:51 np0005466031 nova_compute[235803]: 2025-10-02 12:12:51.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:52 np0005466031 nova_compute[235803]: 2025-10-02 12:12:52.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:52.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:53.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:53 np0005466031 podman[241940]: 2025-10-02 12:12:53.629900997 +0000 UTC m=+0.061155165 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:12:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:54.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:55.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:56 np0005466031 nova_compute[235803]: 2025-10-02 12:12:56.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:56.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:57 np0005466031 nova_compute[235803]: 2025-10-02 12:12:57.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:57.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:57 np0005466031 podman[241963]: 2025-10-02 12:12:57.683572037 +0000 UTC m=+0.103052894 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:12:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:12:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:58.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:12:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:12:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:59.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:00.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:01.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:01 np0005466031 nova_compute[235803]: 2025-10-02 12:13:01.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:02 np0005466031 nova_compute[235803]: 2025-10-02 12:13:02.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:02.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:03.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:04.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:05.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:06 np0005466031 nova_compute[235803]: 2025-10-02 12:13:06.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:06.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:07 np0005466031 nova_compute[235803]: 2025-10-02 12:13:07.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:07.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:13:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:08.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:13:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:09.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:10.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:11.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:11 np0005466031 nova_compute[235803]: 2025-10-02 12:13:11.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:12 np0005466031 nova_compute[235803]: 2025-10-02 12:13:12.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:12.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:12 np0005466031 nova_compute[235803]: 2025-10-02 12:13:12.894 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "2b86a484-6fc6-4efa-983f-fb93053b0874" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:12 np0005466031 nova_compute[235803]: 2025-10-02 12:13:12.895 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:12 np0005466031 nova_compute[235803]: 2025-10-02 12:13:12.909 2 DEBUG nova.compute.manager [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:13:12 np0005466031 nova_compute[235803]: 2025-10-02 12:13:12.978 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:12 np0005466031 nova_compute[235803]: 2025-10-02 12:13:12.979 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:12 np0005466031 nova_compute[235803]: 2025-10-02 12:13:12.984 2 DEBUG nova.virt.hardware [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:13:12 np0005466031 nova_compute[235803]: 2025-10-02 12:13:12.984 2 INFO nova.compute.claims [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:13:13 np0005466031 nova_compute[235803]: 2025-10-02 12:13:13.079 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:13.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:13:13 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1201019618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:13:13 np0005466031 nova_compute[235803]: 2025-10-02 12:13:13.498 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:13 np0005466031 nova_compute[235803]: 2025-10-02 12:13:13.505 2 DEBUG nova.compute.provider_tree [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:13:13 np0005466031 nova_compute[235803]: 2025-10-02 12:13:13.530 2 DEBUG nova.scheduler.client.report [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:13:13 np0005466031 nova_compute[235803]: 2025-10-02 12:13:13.562 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:13 np0005466031 nova_compute[235803]: 2025-10-02 12:13:13.563 2 DEBUG nova.compute.manager [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:13:13 np0005466031 nova_compute[235803]: 2025-10-02 12:13:13.755 2 DEBUG nova.compute.manager [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:13:13 np0005466031 nova_compute[235803]: 2025-10-02 12:13:13.755 2 DEBUG nova.network.neutron [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:13:13 np0005466031 nova_compute[235803]: 2025-10-02 12:13:13.952 2 INFO nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.055 2 DEBUG nova.compute.manager [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.259 2 DEBUG nova.compute.manager [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.261 2 DEBUG nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.262 2 INFO nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Creating image(s)#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.306 2 DEBUG nova.storage.rbd_utils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] rbd image 2b86a484-6fc6-4efa-983f-fb93053b0874_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.342 2 DEBUG nova.storage.rbd_utils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] rbd image 2b86a484-6fc6-4efa-983f-fb93053b0874_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.380 2 DEBUG nova.storage.rbd_utils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] rbd image 2b86a484-6fc6-4efa-983f-fb93053b0874_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.385 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.410 2 DEBUG nova.policy [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd29391679bd0482aada18c987e4c11ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.445 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.446 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.447 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.447 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.479 2 DEBUG nova.storage.rbd_utils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] rbd image 2b86a484-6fc6-4efa-983f-fb93053b0874_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:14 np0005466031 nova_compute[235803]: 2025-10-02 12:13:14.483 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2b86a484-6fc6-4efa-983f-fb93053b0874_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:14.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:15 np0005466031 nova_compute[235803]: 2025-10-02 12:13:15.155 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2b86a484-6fc6-4efa-983f-fb93053b0874_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:15 np0005466031 nova_compute[235803]: 2025-10-02 12:13:15.246 2 DEBUG nova.storage.rbd_utils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] resizing rbd image 2b86a484-6fc6-4efa-983f-fb93053b0874_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:13:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:15.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:15 np0005466031 nova_compute[235803]: 2025-10-02 12:13:15.380 2 DEBUG nova.objects.instance [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lazy-loading 'migration_context' on Instance uuid 2b86a484-6fc6-4efa-983f-fb93053b0874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:13:15 np0005466031 nova_compute[235803]: 2025-10-02 12:13:15.417 2 DEBUG nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:13:15 np0005466031 nova_compute[235803]: 2025-10-02 12:13:15.418 2 DEBUG nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Ensure instance console log exists: /var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:13:15 np0005466031 nova_compute[235803]: 2025-10-02 12:13:15.418 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:15 np0005466031 nova_compute[235803]: 2025-10-02 12:13:15.419 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:15 np0005466031 nova_compute[235803]: 2025-10-02 12:13:15.419 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:16 np0005466031 nova_compute[235803]: 2025-10-02 12:13:16.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:16 np0005466031 podman[242231]: 2025-10-02 12:13:16.649995498 +0000 UTC m=+0.076329093 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:13:16 np0005466031 podman[242232]: 2025-10-02 12:13:16.68960063 +0000 UTC m=+0.109467569 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:13:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:13:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:16.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.182 2 DEBUG oslo_concurrency.lockutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Acquiring lock "0a19956a-a438-4ce5-a67c-e2f804af2722" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.183 2 DEBUG oslo_concurrency.lockutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "0a19956a-a438-4ce5-a67c-e2f804af2722" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e143 e143: 3 total, 3 up, 3 in
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.242 2 DEBUG nova.compute.manager [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:13:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:13:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:17.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.366 2 DEBUG oslo_concurrency.lockutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.367 2 DEBUG oslo_concurrency.lockutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.374 2 DEBUG nova.virt.hardware [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.374 2 INFO nova.compute.claims [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.576 2 DEBUG nova.network.neutron [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Successfully updated port: 8879d541-1199-497a-b096-b45e17e4df04 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.582 2 DEBUG oslo_concurrency.processutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.672 2 DEBUG nova.compute.manager [req-2a7cfd1e-9333-4488-9080-eefb11c44915 req-d2d5aa6e-d29d-473c-b328-e2a3561011d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Received event network-changed-8879d541-1199-497a-b096-b45e17e4df04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.673 2 DEBUG nova.compute.manager [req-2a7cfd1e-9333-4488-9080-eefb11c44915 req-d2d5aa6e-d29d-473c-b328-e2a3561011d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Refreshing instance network info cache due to event network-changed-8879d541-1199-497a-b096-b45e17e4df04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.674 2 DEBUG oslo_concurrency.lockutils [req-2a7cfd1e-9333-4488-9080-eefb11c44915 req-d2d5aa6e-d29d-473c-b328-e2a3561011d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2b86a484-6fc6-4efa-983f-fb93053b0874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.674 2 DEBUG oslo_concurrency.lockutils [req-2a7cfd1e-9333-4488-9080-eefb11c44915 req-d2d5aa6e-d29d-473c-b328-e2a3561011d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2b86a484-6fc6-4efa-983f-fb93053b0874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.674 2 DEBUG nova.network.neutron [req-2a7cfd1e-9333-4488-9080-eefb11c44915 req-d2d5aa6e-d29d-473c-b328-e2a3561011d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Refreshing network info cache for port 8879d541-1199-497a-b096-b45e17e4df04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.724 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "refresh_cache-2b86a484-6fc6-4efa-983f-fb93053b0874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:13:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:13:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/261862872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:13:17 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.982 2 DEBUG nova.network.neutron [req-2a7cfd1e-9333-4488-9080-eefb11c44915 req-d2d5aa6e-d29d-473c-b328-e2a3561011d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:17.999 2 DEBUG oslo_concurrency.processutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.005 2 DEBUG nova.compute.provider_tree [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.072 2 DEBUG nova.scheduler.client.report [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.139 2 DEBUG oslo_concurrency.lockutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.140 2 DEBUG nova.compute.manager [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.269 2 DEBUG nova.network.neutron [req-2a7cfd1e-9333-4488-9080-eefb11c44915 req-d2d5aa6e-d29d-473c-b328-e2a3561011d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.311 2 DEBUG oslo_concurrency.lockutils [req-2a7cfd1e-9333-4488-9080-eefb11c44915 req-d2d5aa6e-d29d-473c-b328-e2a3561011d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2b86a484-6fc6-4efa-983f-fb93053b0874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.312 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquired lock "refresh_cache-2b86a484-6fc6-4efa-983f-fb93053b0874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.312 2 DEBUG nova.network.neutron [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.319 2 DEBUG nova.compute.manager [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.319 2 DEBUG nova.network.neutron [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.457 2 INFO nova.virt.libvirt.driver [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.463 2 DEBUG nova.network.neutron [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.512 2 DEBUG nova.network.neutron [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.513 2 DEBUG nova.compute.manager [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.579 2 DEBUG nova.compute.manager [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.808 2 DEBUG nova.compute.manager [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.810 2 DEBUG nova.virt.libvirt.driver [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.811 2 INFO nova.virt.libvirt.driver [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Creating image(s)#033[00m
Oct  2 08:13:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:18.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.848 2 DEBUG nova.storage.rbd_utils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 0a19956a-a438-4ce5-a67c-e2f804af2722_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.876 2 DEBUG nova.storage.rbd_utils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 0a19956a-a438-4ce5-a67c-e2f804af2722_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.904 2 DEBUG nova.storage.rbd_utils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 0a19956a-a438-4ce5-a67c-e2f804af2722_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.908 2 DEBUG oslo_concurrency.processutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.964 2 DEBUG oslo_concurrency.processutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.965 2 DEBUG oslo_concurrency.lockutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.965 2 DEBUG oslo_concurrency.lockutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.965 2 DEBUG oslo_concurrency.lockutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.989 2 DEBUG nova.storage.rbd_utils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 0a19956a-a438-4ce5-a67c-e2f804af2722_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:18 np0005466031 nova_compute[235803]: 2025-10-02 12:13:18.992 2 DEBUG oslo_concurrency.processutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 0a19956a-a438-4ce5-a67c-e2f804af2722_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 08:13:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:19.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 08:13:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.687 2 DEBUG nova.network.neutron [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Updating instance_info_cache with network_info: [{"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.787 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Releasing lock "refresh_cache-2b86a484-6fc6-4efa-983f-fb93053b0874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.787 2 DEBUG nova.compute.manager [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Instance network_info: |[{"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.792 2 DEBUG nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Start _get_guest_xml network_info=[{"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.798 2 WARNING nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.804 2 DEBUG nova.virt.libvirt.host [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.805 2 DEBUG nova.virt.libvirt.host [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.808 2 DEBUG nova.virt.libvirt.host [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.808 2 DEBUG nova.virt.libvirt.host [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.810 2 DEBUG nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.810 2 DEBUG nova.virt.hardware [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.810 2 DEBUG nova.virt.hardware [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.811 2 DEBUG nova.virt.hardware [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.811 2 DEBUG nova.virt.hardware [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.811 2 DEBUG nova.virt.hardware [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.811 2 DEBUG nova.virt.hardware [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.812 2 DEBUG nova.virt.hardware [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.812 2 DEBUG nova.virt.hardware [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.812 2 DEBUG nova.virt.hardware [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.812 2 DEBUG nova.virt.hardware [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.813 2 DEBUG nova.virt.hardware [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:13:19 np0005466031 nova_compute[235803]: 2025-10-02 12:13:19.815 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:13:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3914675815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.234 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.262 2 DEBUG nova.storage.rbd_utils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] rbd image 2b86a484-6fc6-4efa-983f-fb93053b0874_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.267 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:13:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1702175459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.710 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.716 2 DEBUG nova.virt.libvirt.vif [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:13:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-522976997',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-522976997',id=13,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-es3dgd0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:14Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=2b86a484-6fc6-4efa-983f-fb93053b0874,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.717 2 DEBUG nova.network.os_vif_util [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converting VIF {"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.718 2 DEBUG nova.network.os_vif_util [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:8f:1e,bridge_name='br-int',has_traffic_filtering=True,id=8879d541-1199-497a-b096-b45e17e4df04,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8879d541-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.720 2 DEBUG nova.objects.instance [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b86a484-6fc6-4efa-983f-fb93053b0874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.761 2 DEBUG nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  <uuid>2b86a484-6fc6-4efa-983f-fb93053b0874</uuid>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  <name>instance-0000000d</name>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-522976997</nova:name>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:13:19</nova:creationTime>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <nova:user uuid="d29391679bd0482aada18c987e4c11ca">tempest-LiveAutoBlockMigrationV225Test-211124371-project-member</nova:user>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <nova:project uuid="4db2957ac1b546178a9f2c0f24807e5b">tempest-LiveAutoBlockMigrationV225Test-211124371</nova:project>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <nova:port uuid="8879d541-1199-497a-b096-b45e17e4df04">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <entry name="serial">2b86a484-6fc6-4efa-983f-fb93053b0874</entry>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <entry name="uuid">2b86a484-6fc6-4efa-983f-fb93053b0874</entry>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2b86a484-6fc6-4efa-983f-fb93053b0874_disk">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2b86a484-6fc6-4efa-983f-fb93053b0874_disk.config">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:d1:8f:1e"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <target dev="tap8879d541-11"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874/console.log" append="off"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:13:20 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:13:20 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:13:20 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:13:20 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.763 2 DEBUG nova.compute.manager [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Preparing to wait for external event network-vif-plugged-8879d541-1199-497a-b096-b45e17e4df04 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.763 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.763 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.763 2 DEBUG oslo_concurrency.lockutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.764 2 DEBUG nova.virt.libvirt.vif [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:13:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-522976997',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-522976997',id=13,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-es3dgd0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:14Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=2b86a484-6fc6-4efa-983f-fb93053b0874,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.764 2 DEBUG nova.network.os_vif_util [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converting VIF {"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.765 2 DEBUG nova.network.os_vif_util [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:8f:1e,bridge_name='br-int',has_traffic_filtering=True,id=8879d541-1199-497a-b096-b45e17e4df04,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8879d541-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.765 2 DEBUG os_vif [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:8f:1e,bridge_name='br-int',has_traffic_filtering=True,id=8879d541-1199-497a-b096-b45e17e4df04,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8879d541-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.766 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.767 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.770 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8879d541-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.771 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8879d541-11, col_values=(('external_ids', {'iface-id': '8879d541-1199-497a-b096-b45e17e4df04', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:8f:1e', 'vm-uuid': '2b86a484-6fc6-4efa-983f-fb93053b0874'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:20.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:20 np0005466031 NetworkManager[44907]: <info>  [1759407200.8318] manager: (tap8879d541-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.844 2 INFO os_vif [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:8f:1e,bridge_name='br-int',has_traffic_filtering=True,id=8879d541-1199-497a-b096-b45e17e4df04,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8879d541-11')#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.944 2 DEBUG nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.945 2 DEBUG nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.945 2 DEBUG nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] No VIF found with MAC fa:16:3e:d1:8f:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.946 2 INFO nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Using config drive#033[00m
Oct  2 08:13:20 np0005466031 nova_compute[235803]: 2025-10-02 12:13:20.983 2 DEBUG nova.storage.rbd_utils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] rbd image 2b86a484-6fc6-4efa-983f-fb93053b0874_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.123 2 DEBUG oslo_concurrency.processutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 0a19956a-a438-4ce5-a67c-e2f804af2722_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.232 2 DEBUG nova.storage.rbd_utils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] resizing rbd image 0a19956a-a438-4ce5-a67c-e2f804af2722_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:13:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:21.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.712 2 INFO nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Creating config drive at /var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874/disk.config#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.718 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoayp693h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:21.727 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:13:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:21.729 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.744 2 DEBUG nova.objects.instance [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lazy-loading 'migration_context' on Instance uuid 0a19956a-a438-4ce5-a67c-e2f804af2722 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.782 2 DEBUG nova.virt.libvirt.driver [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.783 2 DEBUG nova.virt.libvirt.driver [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Ensure instance console log exists: /var/lib/nova/instances/0a19956a-a438-4ce5-a67c-e2f804af2722/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.783 2 DEBUG oslo_concurrency.lockutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.784 2 DEBUG oslo_concurrency.lockutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.784 2 DEBUG oslo_concurrency.lockutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.786 2 DEBUG nova.virt.libvirt.driver [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.791 2 WARNING nova.virt.libvirt.driver [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.795 2 DEBUG nova.virt.libvirt.host [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.795 2 DEBUG nova.virt.libvirt.host [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.798 2 DEBUG nova.virt.libvirt.host [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.799 2 DEBUG nova.virt.libvirt.host [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.800 2 DEBUG nova.virt.libvirt.driver [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.800 2 DEBUG nova.virt.hardware [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.801 2 DEBUG nova.virt.hardware [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.801 2 DEBUG nova.virt.hardware [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.801 2 DEBUG nova.virt.hardware [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.801 2 DEBUG nova.virt.hardware [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.801 2 DEBUG nova.virt.hardware [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.801 2 DEBUG nova.virt.hardware [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.802 2 DEBUG nova.virt.hardware [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.802 2 DEBUG nova.virt.hardware [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.802 2 DEBUG nova.virt.hardware [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.802 2 DEBUG nova.virt.hardware [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.805 2 DEBUG oslo_concurrency.processutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.842 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoayp693h" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.874 2 DEBUG nova.storage.rbd_utils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] rbd image 2b86a484-6fc6-4efa-983f-fb93053b0874_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:21 np0005466031 nova_compute[235803]: 2025-10-02 12:13:21.879 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874/disk.config 2b86a484-6fc6-4efa-983f-fb93053b0874_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:13:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2915731104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:13:22 np0005466031 nova_compute[235803]: 2025-10-02 12:13:22.260 2 DEBUG oslo_concurrency.processutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:22 np0005466031 nova_compute[235803]: 2025-10-02 12:13:22.293 2 DEBUG nova.storage.rbd_utils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 0a19956a-a438-4ce5-a67c-e2f804af2722_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:22 np0005466031 nova_compute[235803]: 2025-10-02 12:13:22.298 2 DEBUG oslo_concurrency.processutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:22 np0005466031 nova_compute[235803]: 2025-10-02 12:13:22.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:13:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2071537996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:13:22 np0005466031 nova_compute[235803]: 2025-10-02 12:13:22.785 2 DEBUG oslo_concurrency.processutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:22 np0005466031 nova_compute[235803]: 2025-10-02 12:13:22.787 2 DEBUG nova.objects.instance [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a19956a-a438-4ce5-a67c-e2f804af2722 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:13:22 np0005466031 nova_compute[235803]: 2025-10-02 12:13:22.824 2 DEBUG nova.virt.libvirt.driver [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  <uuid>0a19956a-a438-4ce5-a67c-e2f804af2722</uuid>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  <name>instance-0000000e</name>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <nova:name>tempest-LiveMigrationNegativeTest-server-212492069</nova:name>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:13:21</nova:creationTime>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <nova:user uuid="8da43dcf236343bfa92dff74df42cb79">tempest-LiveMigrationNegativeTest-496070914-project-member</nova:user>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <nova:project uuid="6933706e32a14e3c92fdf8c1df4f90b2">tempest-LiveMigrationNegativeTest-496070914</nova:project>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <nova:ports/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <entry name="serial">0a19956a-a438-4ce5-a67c-e2f804af2722</entry>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <entry name="uuid">0a19956a-a438-4ce5-a67c-e2f804af2722</entry>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/0a19956a-a438-4ce5-a67c-e2f804af2722_disk">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/0a19956a-a438-4ce5-a67c-e2f804af2722_disk.config">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/0a19956a-a438-4ce5-a67c-e2f804af2722/console.log" append="off"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:13:22 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:13:22 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:13:22 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:13:22 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:13:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:22.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:22 np0005466031 nova_compute[235803]: 2025-10-02 12:13:22.942 2 DEBUG nova.virt.libvirt.driver [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:13:22 np0005466031 nova_compute[235803]: 2025-10-02 12:13:22.942 2 DEBUG nova.virt.libvirt.driver [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:13:22 np0005466031 nova_compute[235803]: 2025-10-02 12:13:22.943 2 INFO nova.virt.libvirt.driver [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Using config drive#033[00m
Oct  2 08:13:22 np0005466031 nova_compute[235803]: 2025-10-02 12:13:22.971 2 DEBUG nova.storage.rbd_utils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 0a19956a-a438-4ce5-a67c-e2f804af2722_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:23 np0005466031 nova_compute[235803]: 2025-10-02 12:13:23.210 2 INFO nova.virt.libvirt.driver [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 0a19956a-a438-4ce5-a67c-e2f804af2722] Creating config drive at /var/lib/nova/instances/0a19956a-a438-4ce5-a67c-e2f804af2722/disk.config#033[00m
Oct  2 08:13:23 np0005466031 nova_compute[235803]: 2025-10-02 12:13:23.216 2 DEBUG oslo_concurrency.processutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0a19956a-a438-4ce5-a67c-e2f804af2722/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7wmcwb3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:23.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:23 np0005466031 nova_compute[235803]: 2025-10-02 12:13:23.346 2 DEBUG oslo_concurrency.processutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0a19956a-a438-4ce5-a67c-e2f804af2722/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7wmcwb3" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:23 np0005466031 nova_compute[235803]: 2025-10-02 12:13:23.388 2 DEBUG nova.storage.rbd_utils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 0a19956a-a438-4ce5-a67c-e2f804af2722_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:23 np0005466031 nova_compute[235803]: 2025-10-02 12:13:23.392 2 DEBUG oslo_concurrency.processutils [None req-dc492c4e-68ef-41ab-8036-c0b4b5fab255 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0a19956a-a438-4ce5-a67c-e2f804af2722/disk.config 0a19956a-a438-4ce5-a67c-e2f804af2722_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:23 np0005466031 nova_compute[235803]: 2025-10-02 12:13:23.968 2 DEBUG oslo_concurrency.processutils [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874/disk.config 2b86a484-6fc6-4efa-983f-fb93053b0874_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:23 np0005466031 nova_compute[235803]: 2025-10-02 12:13:23.969 2 INFO nova.virt.libvirt.driver [None req-7f404117-de52-4c21-ad23-f3524111e7da d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Deleting local config drive /var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874/disk.config because it was imported into RBD.#033[00m
Oct  2 08:13:24 np0005466031 kernel: tap8879d541-11: entered promiscuous mode
Oct  2 08:13:24 np0005466031 NetworkManager[44907]: <info>  [1759407204.0182] manager: (tap8879d541-11): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Oct  2 08:13:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:13:24Z|00036|binding|INFO|Claiming lport 8879d541-1199-497a-b096-b45e17e4df04 for this chassis.
Oct  2 08:13:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:13:24Z|00037|binding|INFO|8879d541-1199-497a-b096-b45e17e4df04: Claiming fa:16:3e:d1:8f:1e 10.100.0.4
Oct  2 08:13:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:13:24Z|00038|binding|INFO|Claiming lport 96e672de-12ad-4022-be24-94113ee6de10 for this chassis.
Oct  2 08:13:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:13:24Z|00039|binding|INFO|96e672de-12ad-4022-be24-94113ee6de10: Claiming fa:16:3e:bb:a4:98 19.80.0.36
Oct  2 08:13:24 np0005466031 nova_compute[235803]: 2025-10-02 12:13:24.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:24 np0005466031 nova_compute[235803]: 2025-10-02 12:13:24.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:24 np0005466031 systemd-udevd[242731]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:13:24 np0005466031 systemd-machined[192227]: New machine qemu-4-instance-0000000d.
Oct  2 08:13:24 np0005466031 systemd[1]: Started Virtual Machine qemu-4-instance-0000000d.
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.077 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:8f:1e 10.100.0.4'], port_security=['fa:16:3e:d1:8f:1e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1633959326', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2b86a484-6fc6-4efa-983f-fb93053b0874', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1633959326', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3799c735-d38d-43c0-9348-b36c933d72da, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=8879d541-1199-497a-b096-b45e17e4df04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:13:24 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.079 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:a4:98 19.80.0.36'], port_security=['fa:16:3e:bb:a4:98 19.80.0.36'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['8879d541-1199-497a-b096-b45e17e4df04'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1987210166', 'neutron:cidrs': '19.80.0.36/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bdc26f36-19a2-41f9-8f78-61503fbb20a7', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1987210166', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=ba88d201-1b94-4e72-bbe3-032bdf9cfc2d, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=96e672de-12ad-4022-be24-94113ee6de10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.080 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 8879d541-1199-497a-b096-b45e17e4df04 in datapath 5b610572-0903-4bfb-be0b-9848e0af3ae3 bound to our chassis#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.081 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b610572-0903-4bfb-be0b-9848e0af3ae3#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.096 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ae75f2d2-0697-4fca-8286-c510e1e49df5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.098 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5b610572-01 in ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.100 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5b610572-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.100 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[39d890c2-1213-4f3a-851b-599daf5e601d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 NetworkManager[44907]: <info>  [1759407204.1035] device (tap8879d541-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.101 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba3e0e8-793f-4acd-8c13-a511ca834260]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 NetworkManager[44907]: <info>  [1759407204.1083] device (tap8879d541-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.115 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[78568549-1c43-4d6c-a708-1dae68a0126d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 podman[242722]: 2025-10-02 12:13:24.135340324 +0000 UTC m=+0.083761697 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.143 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ade3d4-8240-41ad-a24c-967560db3727]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:13:24Z|00040|binding|INFO|Setting lport 8879d541-1199-497a-b096-b45e17e4df04 ovn-installed in OVS
Oct  2 08:13:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:13:24Z|00041|binding|INFO|Setting lport 8879d541-1199-497a-b096-b45e17e4df04 up in Southbound
Oct  2 08:13:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:13:24Z|00042|binding|INFO|Setting lport 96e672de-12ad-4022-be24-94113ee6de10 up in Southbound
Oct  2 08:13:24 np0005466031 nova_compute[235803]: 2025-10-02 12:13:24.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.179 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[52833431-54c7-4134-898d-bf79ef206ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.184 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[53b79bf3-0bb3-454b-b3b5-bd2273ce0667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 NetworkManager[44907]: <info>  [1759407204.1862] manager: (tap5b610572-00): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.224 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc61fbf-3668-45c6-bcdd-058dd81293f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.227 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0f20184f-5655-442f-995f-903b2826398e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 NetworkManager[44907]: <info>  [1759407204.2511] device (tap5b610572-00): carrier: link connected
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.259 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d114ed18-9c46-467e-ae00-be7b4e56f972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.277 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[dacd5115-e2d6-447f-9a0b-881f674699d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b610572-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:0e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505984, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242787, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.299 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6897701a-2e55-4eff-9257-a503ba391d09]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb0:ed9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505984, 'tstamp': 505984}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242797, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.324 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5692cd6c-23f4-4855-9cfb-b18b4da5b776]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b610572-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:0e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505984, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242800, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.368 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a21036de-9aae-49c6-9c95-dfe974b5550d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.426 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[28ed4002-2ff5-408f-8e0d-98c2df4048d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.429 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b610572-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.429 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.430 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b610572-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:24 np0005466031 NetworkManager[44907]: <info>  [1759407204.4324] manager: (tap5b610572-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct  2 08:13:24 np0005466031 kernel: tap5b610572-00: entered promiscuous mode
Oct  2 08:13:24 np0005466031 nova_compute[235803]: 2025-10-02 12:13:24.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.435 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b610572-00, col_values=(('external_ids', {'iface-id': '02fa40d7-59fd-4885-996d-218aed489cb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:13:24Z|00043|binding|INFO|Releasing lport 02fa40d7-59fd-4885-996d-218aed489cb1 from this chassis (sb_readonly=0)
Oct  2 08:13:24 np0005466031 nova_compute[235803]: 2025-10-02 12:13:24.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.451 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5b610572-0903-4bfb-be0b-9848e0af3ae3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5b610572-0903-4bfb-be0b-9848e0af3ae3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.452 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8f607f40-b9a4-4950-957e-cbe9f41f018a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.452 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-5b610572-0903-4bfb-be0b-9848e0af3ae3
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/5b610572-0903-4bfb-be0b-9848e0af3ae3.pid.haproxy
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 5b610572-0903-4bfb-be0b-9848e0af3ae3
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.453 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'env', 'PROCESS_TAG=haproxy-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5b610572-0903-4bfb-be0b-9848e0af3ae3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:13:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:13:24.731 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:24 np0005466031 nova_compute[235803]: 2025-10-02 12:13:24.733 2 DEBUG nova.compute.manager [req-198a67e6-2f12-47a4-bcb6-f7a9c4cd90e9 req-3316b34c-05df-4671-85dd-cb30a03cd0c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Received event network-vif-plugged-8879d541-1199-497a-b096-b45e17e4df04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:13:24 np0005466031 nova_compute[235803]: 2025-10-02 12:13:24.733 2 DEBUG oslo_concurrency.lockutils [req-198a67e6-2f12-47a4-bcb6-f7a9c4cd90e9 req-3316b34c-05df-4671-85dd-cb30a03cd0c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:24 np0005466031 nova_compute[235803]: 2025-10-02 12:13:24.734 2 DEBUG oslo_concurrency.lockutils [req-198a67e6-2f12-47a4-bcb6-f7a9c4cd90e9 req-3316b34c-05df-4671-85dd-cb30a03cd0c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:24 np0005466031 nova_compute[235803]: 2025-10-02 12:13:24.734 2 DEBUG oslo_concurrency.lockutils [req-198a67e6-2f12-47a4-bcb6-f7a9c4cd90e9 req-3316b34c-05df-4671-85dd-cb30a03cd0c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:24 np0005466031 nova_compute[235803]: 2025-10-02 12:13:24.735 2 DEBUG nova.compute.manager [req-198a67e6-2f12-47a4-bcb6-f7a9c4cd90e9 req-3316b34c-05df-4671-85dd-cb30a03cd0c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Processing event network-vif-plugged-8879d541-1199-497a-b096-b45e17e4df04 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:13:24 np0005466031 podman[242855]: 2025-10-02 12:13:24.819231763 +0000 UTC m=+0.047902713 container create daaccb87c24861f1986639a611d322165e38c4aa9fb8187b60da331422fc1bee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:13:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:13:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:24.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:24 np0005466031 systemd[1]: Started libpod-conmon-daaccb87c24861f1986639a611d322165e38c4aa9fb8187b60da331422fc1bee.scope.
Oct  2 08:13:24 np0005466031 podman[242855]: 2025-10-02 12:13:24.795637932 +0000 UTC m=+0.024308912 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:13:24 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:13:24 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ac895b4480e818e9996c6ae7b152396b438d73fe2e965feb2c23805bdf3be9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:13:24 np0005466031 podman[242855]: 2025-10-02 12:13:24.927176687 +0000 UTC m=+0.155847687 container init daaccb87c24861f1986639a611d322165e38c4aa9fb8187b60da331422fc1bee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 08:13:24 np0005466031 podman[242855]: 2025-10-02 12:13:24.934492858 +0000 UTC m=+0.163163828 container start daaccb87c24861f1986639a611d322165e38c4aa9fb8187b60da331422fc1bee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 08:13:24 np0005466031 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[242872]: [NOTICE]   (242876) : New worker (242878) forked
Oct  2 08:13:24 np0005466031 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[242872]: [NOTICE]   (242876) : Loading success.
Oct  2 08:15:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:04.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:15:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2848696870' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:15:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:15:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2848696870' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:15:05 np0005466031 rsyslogd[1006]: imjournal: 2067 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  2 08:15:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:15:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:15:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:05.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:06 np0005466031 nova_compute[235803]: 2025-10-02 12:15:06.041 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Check if temp file /var/lib/nova/instances/tmpvz1p5b8e exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Oct  2 08:15:06 np0005466031 nova_compute[235803]: 2025-10-02 12:15:06.042 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvz1p5b8e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b8f8f97e-2823-451c-ab36-7f94ade8be46',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Oct  2 08:15:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:06.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:07.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:07 np0005466031 nova_compute[235803]: 2025-10-02 12:15:07.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:07 np0005466031 nova_compute[235803]: 2025-10-02 12:15:07.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:08.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:15:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:09.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:15:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:15:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:10.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:15:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:11.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:12 np0005466031 nova_compute[235803]: 2025-10-02 12:15:12.428 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407297.4271708, 553435e6-54c5-46e9-af9e-12d1f5840faf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:12 np0005466031 nova_compute[235803]: 2025-10-02 12:15:12.428 2 INFO nova.compute.manager [-] [instance: 553435e6-54c5-46e9-af9e-12d1f5840faf] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:15:12 np0005466031 nova_compute[235803]: 2025-10-02 12:15:12.462 2 DEBUG nova.compute.manager [None req-334a175f-aa90-45f7-93de-2961b4f0aa2c - - - - - -] [instance: 553435e6-54c5-46e9-af9e-12d1f5840faf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:12 np0005466031 nova_compute[235803]: 2025-10-02 12:15:12.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:12 np0005466031 nova_compute[235803]: 2025-10-02 12:15:12.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:12.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:15:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:13.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:15:13 np0005466031 nova_compute[235803]: 2025-10-02 12:15:13.952 2 DEBUG nova.compute.manager [req-9b45aa2f-84f1-4f22-a746-74c48b5e7615 req-630d7355-e236-4664-88cf-77cb5b2f795c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:13 np0005466031 nova_compute[235803]: 2025-10-02 12:15:13.953 2 DEBUG oslo_concurrency.lockutils [req-9b45aa2f-84f1-4f22-a746-74c48b5e7615 req-630d7355-e236-4664-88cf-77cb5b2f795c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:13 np0005466031 nova_compute[235803]: 2025-10-02 12:15:13.954 2 DEBUG oslo_concurrency.lockutils [req-9b45aa2f-84f1-4f22-a746-74c48b5e7615 req-630d7355-e236-4664-88cf-77cb5b2f795c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:13 np0005466031 nova_compute[235803]: 2025-10-02 12:15:13.954 2 DEBUG oslo_concurrency.lockutils [req-9b45aa2f-84f1-4f22-a746-74c48b5e7615 req-630d7355-e236-4664-88cf-77cb5b2f795c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:13 np0005466031 nova_compute[235803]: 2025-10-02 12:15:13.955 2 DEBUG nova.compute.manager [req-9b45aa2f-84f1-4f22-a746-74c48b5e7615 req-630d7355-e236-4664-88cf-77cb5b2f795c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:13 np0005466031 nova_compute[235803]: 2025-10-02 12:15:13.955 2 DEBUG nova.compute.manager [req-9b45aa2f-84f1-4f22-a746-74c48b5e7615 req-630d7355-e236-4664-88cf-77cb5b2f795c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:15:14 np0005466031 nova_compute[235803]: 2025-10-02 12:15:14.590 2 INFO nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Took 6.92 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.#033[00m
Oct  2 08:15:14 np0005466031 nova_compute[235803]: 2025-10-02 12:15:14.591 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:15:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:14 np0005466031 nova_compute[235803]: 2025-10-02 12:15:14.651 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvz1p5b8e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b8f8f97e-2823-451c-ab36-7f94ade8be46',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(97ae7516-180f-4737-b04b-1360c9ea1c3d),old_vol_attachment_ids={ff92c1da-c1e7-425c-b20d-f332daad4188='8041298c-4154-45fe-81e2-9dfa6deeae22'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Oct  2 08:15:14 np0005466031 nova_compute[235803]: 2025-10-02 12:15:14.653 2 DEBUG nova.objects.instance [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lazy-loading 'migration_context' on Instance uuid b8f8f97e-2823-451c-ab36-7f94ade8be46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:14 np0005466031 nova_compute[235803]: 2025-10-02 12:15:14.654 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Oct  2 08:15:14 np0005466031 nova_compute[235803]: 2025-10-02 12:15:14.655 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Oct  2 08:15:14 np0005466031 nova_compute[235803]: 2025-10-02 12:15:14.655 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Oct  2 08:15:14 np0005466031 nova_compute[235803]: 2025-10-02 12:15:14.676 2 DEBUG nova.virt.libvirt.migration [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Find same serial number: pos=1, serial=ff92c1da-c1e7-425c-b20d-f332daad4188 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Oct  2 08:15:14 np0005466031 nova_compute[235803]: 2025-10-02 12:15:14.678 2 DEBUG nova.virt.libvirt.vif [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-927671937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-927671937',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-v8e9y6s2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:15:01Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=b8f8f97e-2823-451c-ab36-7f94ade8be46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:15:14 np0005466031 nova_compute[235803]: 2025-10-02 12:15:14.679 2 DEBUG nova.network.os_vif_util [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converting VIF {"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:15:14 np0005466031 nova_compute[235803]: 2025-10-02 12:15:14.680 2 DEBUG nova.network.os_vif_util [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:15:14 np0005466031 nova_compute[235803]: 2025-10-02 12:15:14.681 2 DEBUG nova.virt.libvirt.migration [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating guest XML with vif config: <interface type="ethernet">
Oct  2 08:15:14 np0005466031 nova_compute[235803]:  <mac address="fa:16:3e:b9:be:58"/>
Oct  2 08:15:14 np0005466031 nova_compute[235803]:  <model type="virtio"/>
Oct  2 08:15:14 np0005466031 nova_compute[235803]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:15:14 np0005466031 nova_compute[235803]:  <mtu size="1442"/>
Oct  2 08:15:14 np0005466031 nova_compute[235803]:  <target dev="tap647b79a6-6c"/>
Oct  2 08:15:14 np0005466031 nova_compute[235803]: </interface>
Oct  2 08:15:14 np0005466031 nova_compute[235803]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Oct  2 08:15:14 np0005466031 nova_compute[235803]: 2025-10-02 12:15:14.682 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Oct  2 08:15:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:14.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:15 np0005466031 nova_compute[235803]: 2025-10-02 12:15:15.158 2 DEBUG nova.virt.libvirt.migration [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:15:15 np0005466031 nova_compute[235803]: 2025-10-02 12:15:15.159 2 INFO nova.virt.libvirt.migration [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Oct  2 08:15:15 np0005466031 nova_compute[235803]: 2025-10-02 12:15:15.371 2 INFO nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Oct  2 08:15:15 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:15 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:15.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:15 np0005466031 nova_compute[235803]: 2025-10-02 12:15:15.874 2 DEBUG nova.virt.libvirt.migration [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:15:15 np0005466031 nova_compute[235803]: 2025-10-02 12:15:15.875 2 DEBUG nova.virt.libvirt.migration [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.075 2 DEBUG nova.compute.manager [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.076 2 DEBUG oslo_concurrency.lockutils [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.076 2 DEBUG oslo_concurrency.lockutils [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.076 2 DEBUG oslo_concurrency.lockutils [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.076 2 DEBUG nova.compute.manager [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.076 2 WARNING nova.compute.manager [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received unexpected event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.077 2 DEBUG nova.compute.manager [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-changed-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.077 2 DEBUG nova.compute.manager [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Refreshing instance network info cache due to event network-changed-647b79a6-6cf5-4d28-afd1-9e21f2a56e32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.077 2 DEBUG oslo_concurrency.lockutils [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.077 2 DEBUG oslo_concurrency.lockutils [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.077 2 DEBUG nova.network.neutron [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Refreshing network info cache for port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.377 2 DEBUG nova.virt.libvirt.migration [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.378 2 DEBUG nova.virt.libvirt.migration [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.817 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407316.8165355, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.817 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.879 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.881 2 DEBUG nova.virt.libvirt.migration [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.881 2 DEBUG nova.virt.libvirt.migration [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.883 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.921 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Oct  2 08:15:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:15:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:16.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:15:16 np0005466031 kernel: tap647b79a6-6c (unregistering): left promiscuous mode
Oct  2 08:15:16 np0005466031 NetworkManager[44907]: <info>  [1759407316.9817] device (tap647b79a6-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:16 np0005466031 ovn_controller[132413]: 2025-10-02T12:15:16Z|00068|binding|INFO|Releasing lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 from this chassis (sb_readonly=0)
Oct  2 08:15:16 np0005466031 ovn_controller[132413]: 2025-10-02T12:15:16Z|00069|binding|INFO|Setting lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 down in Southbound
Oct  2 08:15:16 np0005466031 nova_compute[235803]: 2025-10-02 12:15:16.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:16 np0005466031 ovn_controller[132413]: 2025-10-02T12:15:16Z|00070|binding|INFO|Removing iface tap647b79a6-6c ovn-installed in OVS
Oct  2 08:15:17 np0005466031 nova_compute[235803]: 2025-10-02 12:15:17.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.039 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:be:58 10.100.0.12'], port_security=['fa:16:3e:b9:be:58 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'db222192-8da1-4f7c-972d-dc680c3e6630'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b8f8f97e-2823-451c-ab36-7f94ade8be46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '17', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3799c735-d38d-43c0-9348-b36c933d72da, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=647b79a6-6cf5-4d28-afd1-9e21f2a56e32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.040 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 in datapath 5b610572-0903-4bfb-be0b-9848e0af3ae3 unbound from our chassis#033[00m
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.041 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5b610572-0903-4bfb-be0b-9848e0af3ae3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.042 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2bea690a-2390-431f-bb77-cc211486f2c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.043 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 namespace which is not needed anymore#033[00m
Oct  2 08:15:17 np0005466031 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct  2 08:15:17 np0005466031 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000014.scope: Consumed 2.934s CPU time.
Oct  2 08:15:17 np0005466031 systemd-machined[192227]: Machine qemu-8-instance-00000014 terminated.
Oct  2 08:15:17 np0005466031 virtqemud[235323]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volume-ff92c1da-c1e7-425c-b20d-f332daad4188: No such file or directory
Oct  2 08:15:17 np0005466031 virtqemud[235323]: Unable to get XATTR trusted.libvirt.security.ref_dac on volume-ff92c1da-c1e7-425c-b20d-f332daad4188: No such file or directory
Oct  2 08:15:17 np0005466031 nova_compute[235803]: 2025-10-02 12:15:17.156 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Oct  2 08:15:17 np0005466031 nova_compute[235803]: 2025-10-02 12:15:17.157 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Oct  2 08:15:17 np0005466031 nova_compute[235803]: 2025-10-02 12:15:17.157 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Oct  2 08:15:17 np0005466031 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[245546]: [NOTICE]   (245550) : haproxy version is 2.8.14-c23fe91
Oct  2 08:15:17 np0005466031 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[245546]: [NOTICE]   (245550) : path to executable is /usr/sbin/haproxy
Oct  2 08:15:17 np0005466031 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[245546]: [WARNING]  (245550) : Exiting Master process...
Oct  2 08:15:17 np0005466031 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[245546]: [ALERT]    (245550) : Current worker (245552) exited with code 143 (Terminated)
Oct  2 08:15:17 np0005466031 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[245546]: [WARNING]  (245550) : All workers exited. Exiting... (0)
Oct  2 08:15:17 np0005466031 systemd[1]: libpod-99c7d0a1f7baa40c5317f12414f1ed4e123387a5b07b4e854c713fbc7bc98628.scope: Deactivated successfully.
Oct  2 08:15:17 np0005466031 podman[245990]: 2025-10-02 12:15:17.197069143 +0000 UTC m=+0.052699965 container died 99c7d0a1f7baa40c5317f12414f1ed4e123387a5b07b4e854c713fbc7bc98628 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:15:17 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-99c7d0a1f7baa40c5317f12414f1ed4e123387a5b07b4e854c713fbc7bc98628-userdata-shm.mount: Deactivated successfully.
Oct  2 08:15:17 np0005466031 systemd[1]: var-lib-containers-storage-overlay-11aac4fe1691a1ebc585fad69736c286cd4ed5e50a5427c163929b5e9b9b3bb3-merged.mount: Deactivated successfully.
Oct  2 08:15:17 np0005466031 podman[245990]: 2025-10-02 12:15:17.230952972 +0000 UTC m=+0.086583794 container cleanup 99c7d0a1f7baa40c5317f12414f1ed4e123387a5b07b4e854c713fbc7bc98628 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:15:17 np0005466031 systemd[1]: libpod-conmon-99c7d0a1f7baa40c5317f12414f1ed4e123387a5b07b4e854c713fbc7bc98628.scope: Deactivated successfully.
Oct  2 08:15:17 np0005466031 podman[246029]: 2025-10-02 12:15:17.293594553 +0000 UTC m=+0.037232827 container remove 99c7d0a1f7baa40c5317f12414f1ed4e123387a5b07b4e854c713fbc7bc98628 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.298 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ccae4e-d7ad-44f2-987b-63991f101def]: (4, ('Thu Oct  2 12:15:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 (99c7d0a1f7baa40c5317f12414f1ed4e123387a5b07b4e854c713fbc7bc98628)\n99c7d0a1f7baa40c5317f12414f1ed4e123387a5b07b4e854c713fbc7bc98628\nThu Oct  2 12:15:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 (99c7d0a1f7baa40c5317f12414f1ed4e123387a5b07b4e854c713fbc7bc98628)\n99c7d0a1f7baa40c5317f12414f1ed4e123387a5b07b4e854c713fbc7bc98628\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.301 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b703276d-9be0-445c-b428-449d0840e8bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.302 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b610572-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:17 np0005466031 nova_compute[235803]: 2025-10-02 12:15:17.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:17 np0005466031 kernel: tap5b610572-00: left promiscuous mode
Oct  2 08:15:17 np0005466031 nova_compute[235803]: 2025-10-02 12:15:17.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.325 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[aa929da2-d1ee-4a41-865d-7dc33cf1371f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.365 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0eccd547-5ae8-487f-92cb-b0d56a8e5ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.367 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3631ed-7c74-4385-ac87-e4091bb6b372]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.380 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8512bc0f-5264-42a5-83c7-c1bf26d42d07]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514990, 'reachable_time': 41182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246048, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.382 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:15:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:17.383 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[8e011137-5278-46d3-a1be-c6dafb393622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:17 np0005466031 systemd[1]: run-netns-ovnmeta\x2d5b610572\x2d0903\x2d4bfb\x2dbe0b\x2d9848e0af3ae3.mount: Deactivated successfully.
Oct  2 08:15:17 np0005466031 nova_compute[235803]: 2025-10-02 12:15:17.385 2 DEBUG nova.virt.libvirt.guest [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'b8f8f97e-2823-451c-ab36-7f94ade8be46' (instance-00000014) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Oct  2 08:15:17 np0005466031 nova_compute[235803]: 2025-10-02 12:15:17.386 2 INFO nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migration operation has completed#033[00m
Oct  2 08:15:17 np0005466031 nova_compute[235803]: 2025-10-02 12:15:17.386 2 INFO nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] _post_live_migration() is started..#033[00m
Oct  2 08:15:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:17.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:17 np0005466031 nova_compute[235803]: 2025-10-02 12:15:17.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:17 np0005466031 nova_compute[235803]: 2025-10-02 12:15:17.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.019 2 DEBUG nova.compute.manager [req-0def621d-3ee2-4d94-a2f5-82b97309c8b7 req-b6f047d2-afa9-4ebf-9b22-7b5ba5485a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.019 2 DEBUG oslo_concurrency.lockutils [req-0def621d-3ee2-4d94-a2f5-82b97309c8b7 req-b6f047d2-afa9-4ebf-9b22-7b5ba5485a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.020 2 DEBUG oslo_concurrency.lockutils [req-0def621d-3ee2-4d94-a2f5-82b97309c8b7 req-b6f047d2-afa9-4ebf-9b22-7b5ba5485a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.020 2 DEBUG oslo_concurrency.lockutils [req-0def621d-3ee2-4d94-a2f5-82b97309c8b7 req-b6f047d2-afa9-4ebf-9b22-7b5ba5485a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.020 2 DEBUG nova.compute.manager [req-0def621d-3ee2-4d94-a2f5-82b97309c8b7 req-b6f047d2-afa9-4ebf-9b22-7b5ba5485a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.020 2 DEBUG nova.compute.manager [req-0def621d-3ee2-4d94-a2f5-82b97309c8b7 req-b6f047d2-afa9-4ebf-9b22-7b5ba5485a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.577 2 DEBUG nova.network.neutron [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updated VIF entry in instance network info cache for port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.577 2 DEBUG nova.network.neutron [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating instance_info_cache with network_info: [{"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.673 2 DEBUG nova.compute.manager [req-44246d86-4d20-4c8b-8198-eb1ca8170732 req-5ef8cbdc-300b-404b-896e-8111c132beb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.673 2 DEBUG oslo_concurrency.lockutils [req-44246d86-4d20-4c8b-8198-eb1ca8170732 req-5ef8cbdc-300b-404b-896e-8111c132beb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.674 2 DEBUG oslo_concurrency.lockutils [req-44246d86-4d20-4c8b-8198-eb1ca8170732 req-5ef8cbdc-300b-404b-896e-8111c132beb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.674 2 DEBUG oslo_concurrency.lockutils [req-44246d86-4d20-4c8b-8198-eb1ca8170732 req-5ef8cbdc-300b-404b-896e-8111c132beb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.674 2 DEBUG nova.compute.manager [req-44246d86-4d20-4c8b-8198-eb1ca8170732 req-5ef8cbdc-300b-404b-896e-8111c132beb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.674 2 DEBUG nova.compute.manager [req-44246d86-4d20-4c8b-8198-eb1ca8170732 req-5ef8cbdc-300b-404b-896e-8111c132beb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:15:18 np0005466031 nova_compute[235803]: 2025-10-02 12:15:18.675 2 DEBUG oslo_concurrency.lockutils [req-4c710db6-687c-469d-9bab-b8a99b464ac6 req-562353b7-bb9e-4a1a-8a8f-656787c96d35 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:15:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:15:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:18.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.327 2 DEBUG nova.network.neutron [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Activated binding for port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.327 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.328 2 DEBUG nova.virt.libvirt.vif [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-927671937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-927671937',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-v8e9y6s2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-2111
24371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:15:04Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=b8f8f97e-2823-451c-ab36-7f94ade8be46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.328 2 DEBUG nova.network.os_vif_util [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converting VIF {"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.328 2 DEBUG nova.network.os_vif_util [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.329 2 DEBUG os_vif [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.330 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647b79a6-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.335 2 INFO os_vif [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c')#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.335 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.335 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.336 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.336 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.336 2 INFO nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Deleting instance files /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46_del#033[00m
Oct  2 08:15:19 np0005466031 nova_compute[235803]: 2025-10-02 12:15:19.336 2 INFO nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Deletion of /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46_del complete#033[00m
Oct  2 08:15:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:19.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:19 np0005466031 podman[246050]: 2025-10-02 12:15:19.652586289 +0000 UTC m=+0.078486430 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:15:19 np0005466031 podman[246051]: 2025-10-02 12:15:19.707799436 +0000 UTC m=+0.121477153 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.105 2 DEBUG nova.compute.manager [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.106 2 DEBUG oslo_concurrency.lockutils [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.106 2 DEBUG oslo_concurrency.lockutils [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.106 2 DEBUG oslo_concurrency.lockutils [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.106 2 DEBUG nova.compute.manager [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.106 2 WARNING nova.compute.manager [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received unexpected event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with vm_state active and task_state migrating.
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.107 2 DEBUG nova.compute.manager [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.107 2 DEBUG oslo_concurrency.lockutils [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.107 2 DEBUG oslo_concurrency.lockutils [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.107 2 DEBUG oslo_concurrency.lockutils [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.107 2 DEBUG nova.compute.manager [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.107 2 WARNING nova.compute.manager [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received unexpected event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with vm_state active and task_state migrating.
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.108 2 DEBUG nova.compute.manager [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.108 2 DEBUG oslo_concurrency.lockutils [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.108 2 DEBUG oslo_concurrency.lockutils [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.108 2 DEBUG oslo_concurrency.lockutils [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.108 2 DEBUG nova.compute.manager [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.109 2 WARNING nova.compute.manager [req-4e1944ca-20c8-4bd6-b3b9-29f497472f4c req-44a881e3-6c5f-483c-9c0d-c6a86a11267a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received unexpected event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with vm_state active and task_state migrating.
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.860 2 DEBUG nova.compute.manager [req-03003f88-dc55-44a5-8912-cdbc015bfc69 req-6f44c1d1-7ddd-45a3-a7e3-ad877f6a984f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.860 2 DEBUG oslo_concurrency.lockutils [req-03003f88-dc55-44a5-8912-cdbc015bfc69 req-6f44c1d1-7ddd-45a3-a7e3-ad877f6a984f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.860 2 DEBUG oslo_concurrency.lockutils [req-03003f88-dc55-44a5-8912-cdbc015bfc69 req-6f44c1d1-7ddd-45a3-a7e3-ad877f6a984f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.861 2 DEBUG oslo_concurrency.lockutils [req-03003f88-dc55-44a5-8912-cdbc015bfc69 req-6f44c1d1-7ddd-45a3-a7e3-ad877f6a984f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.861 2 DEBUG nova.compute.manager [req-03003f88-dc55-44a5-8912-cdbc015bfc69 req-6f44c1d1-7ddd-45a3-a7e3-ad877f6a984f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:15:20 np0005466031 nova_compute[235803]: 2025-10-02 12:15:20.861 2 WARNING nova.compute.manager [req-03003f88-dc55-44a5-8912-cdbc015bfc69 req-6f44c1d1-7ddd-45a3-a7e3-ad877f6a984f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received unexpected event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with vm_state active and task_state migrating.
Oct  2 08:15:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:20.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:21.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:22 np0005466031 nova_compute[235803]: 2025-10-02 12:15:22.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:15:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:22.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:23.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:24 np0005466031 nova_compute[235803]: 2025-10-02 12:15:24.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:15:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:24.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:25.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:25.821 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:15:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:25.822 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:15:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:25.822 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:15:25 np0005466031 nova_compute[235803]: 2025-10-02 12:15:25.953 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:15:25 np0005466031 nova_compute[235803]: 2025-10-02 12:15:25.953 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:15:25 np0005466031 nova_compute[235803]: 2025-10-02 12:15:25.953 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.040 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.041 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.041 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.041 2 DEBUG nova.compute.resource_tracker [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.041 2 DEBUG oslo_concurrency.processutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:15:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/521728697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.453 2 DEBUG oslo_concurrency.processutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.619 2 WARNING nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.621 2 DEBUG nova.compute.resource_tracker [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4860MB free_disk=20.830867767333984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.621 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.622 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.727 2 DEBUG nova.compute.resource_tracker [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Migration for instance b8f8f97e-2823-451c-ab36-7f94ade8be46 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.769 2 DEBUG nova.compute.resource_tracker [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.824 2 DEBUG nova.compute.resource_tracker [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Migration 97ae7516-180f-4737-b04b-1360c9ea1c3d is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.824 2 DEBUG nova.compute.resource_tracker [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.825 2 DEBUG nova.compute.resource_tracker [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 08:15:26 np0005466031 nova_compute[235803]: 2025-10-02 12:15:26.863 2 DEBUG oslo_concurrency.processutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:15:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:26.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1554869641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:27 np0005466031 podman[246166]: 2025-10-02 12:15:27.285830938 +0000 UTC m=+0.066662388 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:15:27 np0005466031 nova_compute[235803]: 2025-10-02 12:15:27.297 2 DEBUG oslo_concurrency.processutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:15:27 np0005466031 nova_compute[235803]: 2025-10-02 12:15:27.303 2 DEBUG nova.compute.provider_tree [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:15:27 np0005466031 nova_compute[235803]: 2025-10-02 12:15:27.330 2 DEBUG nova.scheduler.client.report [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:15:27 np0005466031 nova_compute[235803]: 2025-10-02 12:15:27.362 2 DEBUG nova.compute.resource_tracker [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:15:27 np0005466031 nova_compute[235803]: 2025-10-02 12:15:27.363 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:15:27 np0005466031 nova_compute[235803]: 2025-10-02 12:15:27.367 2 INFO nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct  2 08:15:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:27.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:27 np0005466031 nova_compute[235803]: 2025-10-02 12:15:27.528 2 INFO nova.scheduler.client.report [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Deleted allocation for migration 97ae7516-180f-4737-b04b-1360c9ea1c3d#033[00m
Oct  2 08:15:27 np0005466031 nova_compute[235803]: 2025-10-02 12:15:27.529 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Oct  2 08:15:27 np0005466031 nova_compute[235803]: 2025-10-02 12:15:27.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e149 e149: 3 total, 3 up, 3 in
Oct  2 08:15:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:15:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:28.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:15:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e150 e150: 3 total, 3 up, 3 in
Oct  2 08:15:29 np0005466031 nova_compute[235803]: 2025-10-02 12:15:29.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:29.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:29 np0005466031 nova_compute[235803]: 2025-10-02 12:15:29.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e151 e151: 3 total, 3 up, 3 in
Oct  2 08:15:30 np0005466031 nova_compute[235803]: 2025-10-02 12:15:30.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:30 np0005466031 nova_compute[235803]: 2025-10-02 12:15:30.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:30.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e152 e152: 3 total, 3 up, 3 in
Oct  2 08:15:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:31.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:31 np0005466031 nova_compute[235803]: 2025-10-02 12:15:31.648 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:32 np0005466031 nova_compute[235803]: 2025-10-02 12:15:32.155 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407317.1539743, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:32 np0005466031 nova_compute[235803]: 2025-10-02 12:15:32.156 2 INFO nova.compute.manager [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:15:32 np0005466031 nova_compute[235803]: 2025-10-02 12:15:32.176 2 DEBUG nova.compute.manager [None req-82964939-ba6d-4171-a260-8d9cd9d535fc - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:32 np0005466031 nova_compute[235803]: 2025-10-02 12:15:32.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:32 np0005466031 podman[246217]: 2025-10-02 12:15:32.625150511 +0000 UTC m=+0.057819462 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:15:32 np0005466031 nova_compute[235803]: 2025-10-02 12:15:32.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:32 np0005466031 nova_compute[235803]: 2025-10-02 12:15:32.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:15:32 np0005466031 nova_compute[235803]: 2025-10-02 12:15:32.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:15:32 np0005466031 nova_compute[235803]: 2025-10-02 12:15:32.669 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:15:32 np0005466031 nova_compute[235803]: 2025-10-02 12:15:32.670 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:15:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:32.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:15:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e153 e153: 3 total, 3 up, 3 in
Oct  2 08:15:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:33.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:33 np0005466031 nova_compute[235803]: 2025-10-02 12:15:33.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:33 np0005466031 nova_compute[235803]: 2025-10-02 12:15:33.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:33 np0005466031 nova_compute[235803]: 2025-10-02 12:15:33.765 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:33 np0005466031 nova_compute[235803]: 2025-10-02 12:15:33.766 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:33 np0005466031 nova_compute[235803]: 2025-10-02 12:15:33.766 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:33 np0005466031 nova_compute[235803]: 2025-10-02 12:15:33.766 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:15:33 np0005466031 nova_compute[235803]: 2025-10-02 12:15:33.767 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:34 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/442933021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:34 np0005466031 nova_compute[235803]: 2025-10-02 12:15:34.229 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:34 np0005466031 nova_compute[235803]: 2025-10-02 12:15:34.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:34 np0005466031 nova_compute[235803]: 2025-10-02 12:15:34.369 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:15:34 np0005466031 nova_compute[235803]: 2025-10-02 12:15:34.370 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4860MB free_disk=20.830852508544922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:15:34 np0005466031 nova_compute[235803]: 2025-10-02 12:15:34.370 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:34 np0005466031 nova_compute[235803]: 2025-10-02 12:15:34.371 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:34 np0005466031 nova_compute[235803]: 2025-10-02 12:15:34.580 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:15:34 np0005466031 nova_compute[235803]: 2025-10-02 12:15:34.581 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:15:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:34 np0005466031 nova_compute[235803]: 2025-10-02 12:15:34.671 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:34.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:35 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2688368381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:35 np0005466031 nova_compute[235803]: 2025-10-02 12:15:35.083 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:35 np0005466031 nova_compute[235803]: 2025-10-02 12:15:35.089 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:15:35 np0005466031 nova_compute[235803]: 2025-10-02 12:15:35.111 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:15:35 np0005466031 nova_compute[235803]: 2025-10-02 12:15:35.113 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:15:35 np0005466031 nova_compute[235803]: 2025-10-02 12:15:35.113 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:35.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e154 e154: 3 total, 3 up, 3 in
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.071 2 DEBUG oslo_concurrency.lockutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "80c1a955-9ccd-4e3e-a048-9949b961b825" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.072 2 DEBUG oslo_concurrency.lockutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "80c1a955-9ccd-4e3e-a048-9949b961b825" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.109 2 DEBUG nova.compute.manager [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.113 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.113 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.113 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.188 2 DEBUG oslo_concurrency.lockutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.189 2 DEBUG oslo_concurrency.lockutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.195 2 DEBUG nova.virt.hardware [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.196 2 INFO nova.compute.claims [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.380 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:36 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/513948928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.809 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.815 2 DEBUG nova.compute.provider_tree [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.835 2 DEBUG nova.scheduler.client.report [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.867 2 DEBUG oslo_concurrency.lockutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.868 2 DEBUG nova.compute.manager [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.944 2 DEBUG nova.compute.manager [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.945 2 DEBUG nova.network.neutron [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.971 2 INFO nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:15:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:36.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:36 np0005466031 nova_compute[235803]: 2025-10-02 12:15:36.997 2 DEBUG nova.compute.manager [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.250 2 DEBUG nova.compute.manager [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.252 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.253 2 INFO nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Creating image(s)
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.296 2 DEBUG nova.storage.rbd_utils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 80c1a955-9ccd-4e3e-a048-9949b961b825_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.322 2 DEBUG nova.storage.rbd_utils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 80c1a955-9ccd-4e3e-a048-9949b961b825_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.344 2 DEBUG nova.storage.rbd_utils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 80c1a955-9ccd-4e3e-a048-9949b961b825_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.348 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.400 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.401 2 DEBUG oslo_concurrency.lockutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.401 2 DEBUG oslo_concurrency.lockutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.402 2 DEBUG oslo_concurrency.lockutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.423 2 DEBUG nova.storage.rbd_utils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 80c1a955-9ccd-4e3e-a048-9949b961b825_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.426 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 80c1a955-9ccd-4e3e-a048-9949b961b825_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:15:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:37.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.608 2 DEBUG nova.network.neutron [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.609 2 DEBUG nova.compute.manager [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  2 08:15:37 np0005466031 nova_compute[235803]: 2025-10-02 12:15:37.650 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  2 08:15:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e155 e155: 3 total, 3 up, 3 in
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.136 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 80c1a955-9ccd-4e3e-a048-9949b961b825_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.325 2 DEBUG nova.storage.rbd_utils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] resizing rbd image 80c1a955-9ccd-4e3e-a048-9949b961b825_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.439 2 DEBUG nova.objects.instance [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'migration_context' on Instance uuid 80c1a955-9ccd-4e3e-a048-9949b961b825 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.451 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.451 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Ensure instance console log exists: /var/lib/nova/instances/80c1a955-9ccd-4e3e-a048-9949b961b825/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.452 2 DEBUG oslo_concurrency.lockutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.452 2 DEBUG oslo_concurrency.lockutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.453 2 DEBUG oslo_concurrency.lockutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.454 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.459 2 WARNING nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.463 2 DEBUG nova.virt.libvirt.host [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.464 2 DEBUG nova.virt.libvirt.host [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.467 2 DEBUG nova.virt.libvirt.host [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.467 2 DEBUG nova.virt.libvirt.host [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.468 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.468 2 DEBUG nova.virt.hardware [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.469 2 DEBUG nova.virt.hardware [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.469 2 DEBUG nova.virt.hardware [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.470 2 DEBUG nova.virt.hardware [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.470 2 DEBUG nova.virt.hardware [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.470 2 DEBUG nova.virt.hardware [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.470 2 DEBUG nova.virt.hardware [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.471 2 DEBUG nova.virt.hardware [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.471 2 DEBUG nova.virt.hardware [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.471 2 DEBUG nova.virt.hardware [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.471 2 DEBUG nova.virt.hardware [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.474 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:15:38 np0005466031 nova_compute[235803]: 2025-10-02 12:15:38.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  2 08:15:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:15:38 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/565957413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:15:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:38.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:39 np0005466031 nova_compute[235803]: 2025-10-02 12:15:39.242 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:15:39 np0005466031 nova_compute[235803]: 2025-10-02 12:15:39.276 2 DEBUG nova.storage.rbd_utils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 80c1a955-9ccd-4e3e-a048-9949b961b825_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:15:39 np0005466031 nova_compute[235803]: 2025-10-02 12:15:39.281 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:15:39 np0005466031 nova_compute[235803]: 2025-10-02 12:15:39.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:15:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e156 e156: 3 total, 3 up, 3 in
Oct  2 08:15:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:39.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:15:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2178282675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:15:39 np0005466031 nova_compute[235803]: 2025-10-02 12:15:39.852 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:15:39 np0005466031 nova_compute[235803]: 2025-10-02 12:15:39.853 2 DEBUG nova.objects.instance [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 80c1a955-9ccd-4e3e-a048-9949b961b825 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:15:39 np0005466031 nova_compute[235803]: 2025-10-02 12:15:39.882 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  <uuid>80c1a955-9ccd-4e3e-a048-9949b961b825</uuid>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  <name>instance-00000017</name>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServersOnMultiNodesTest-server-20280581</nova:name>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:15:38</nova:creationTime>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <nova:user uuid="27279919e67c49e1a04b6eec249ecc87">tempest-ServersOnMultiNodesTest-348944321-project-member</nova:user>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <nova:project uuid="a5ac6058475f4875b46ae8f3c4ff33e8">tempest-ServersOnMultiNodesTest-348944321</nova:project>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <nova:ports/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <entry name="serial">80c1a955-9ccd-4e3e-a048-9949b961b825</entry>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <entry name="uuid">80c1a955-9ccd-4e3e-a048-9949b961b825</entry>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/80c1a955-9ccd-4e3e-a048-9949b961b825_disk">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/80c1a955-9ccd-4e3e-a048-9949b961b825_disk.config">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/80c1a955-9ccd-4e3e-a048-9949b961b825/console.log" append="off"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:15:39 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:15:39 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:15:39 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:15:39 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:15:39 np0005466031 nova_compute[235803]: 2025-10-02 12:15:39.931 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:15:39 np0005466031 nova_compute[235803]: 2025-10-02 12:15:39.932 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:15:39 np0005466031 nova_compute[235803]: 2025-10-02 12:15:39.932 2 INFO nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Using config drive#033[00m
Oct  2 08:15:39 np0005466031 nova_compute[235803]: 2025-10-02 12:15:39.976 2 DEBUG nova.storage.rbd_utils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 80c1a955-9ccd-4e3e-a048-9949b961b825_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:40 np0005466031 nova_compute[235803]: 2025-10-02 12:15:40.109 2 INFO nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Creating config drive at /var/lib/nova/instances/80c1a955-9ccd-4e3e-a048-9949b961b825/disk.config#033[00m
Oct  2 08:15:40 np0005466031 nova_compute[235803]: 2025-10-02 12:15:40.113 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80c1a955-9ccd-4e3e-a048-9949b961b825/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprkmuznv8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:40 np0005466031 nova_compute[235803]: 2025-10-02 12:15:40.254 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80c1a955-9ccd-4e3e-a048-9949b961b825/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprkmuznv8" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:40 np0005466031 nova_compute[235803]: 2025-10-02 12:15:40.287 2 DEBUG nova.storage.rbd_utils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 80c1a955-9ccd-4e3e-a048-9949b961b825_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:40 np0005466031 nova_compute[235803]: 2025-10-02 12:15:40.291 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80c1a955-9ccd-4e3e-a048-9949b961b825/disk.config 80c1a955-9ccd-4e3e-a048-9949b961b825_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:40.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:41.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:41 np0005466031 nova_compute[235803]: 2025-10-02 12:15:41.674 2 DEBUG oslo_concurrency.processutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80c1a955-9ccd-4e3e-a048-9949b961b825/disk.config 80c1a955-9ccd-4e3e-a048-9949b961b825_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:41 np0005466031 nova_compute[235803]: 2025-10-02 12:15:41.674 2 INFO nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Deleting local config drive /var/lib/nova/instances/80c1a955-9ccd-4e3e-a048-9949b961b825/disk.config because it was imported into RBD.#033[00m
Oct  2 08:15:41 np0005466031 systemd-machined[192227]: New machine qemu-9-instance-00000017.
Oct  2 08:15:41 np0005466031 systemd[1]: Started Virtual Machine qemu-9-instance-00000017.
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.797 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407342.796798, 80c1a955-9ccd-4e3e-a048-9949b961b825 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.798 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.803 2 DEBUG nova.compute.manager [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.804 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.809 2 INFO nova.virt.libvirt.driver [-] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Instance spawned successfully.#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.809 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.828 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.838 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.844 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.844 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.845 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.846 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.847 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.847 2 DEBUG nova.virt.libvirt.driver [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.879 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.880 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407342.7981386, 80c1a955-9ccd-4e3e-a048-9949b961b825 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.880 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] VM Started (Lifecycle Event)#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.905 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.909 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.931 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.941 2 INFO nova.compute.manager [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Took 5.69 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:15:42 np0005466031 nova_compute[235803]: 2025-10-02 12:15:42.942 2 DEBUG nova.compute.manager [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:42.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:43 np0005466031 nova_compute[235803]: 2025-10-02 12:15:43.002 2 INFO nova.compute.manager [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Took 6.83 seconds to build instance.#033[00m
Oct  2 08:15:43 np0005466031 nova_compute[235803]: 2025-10-02 12:15:43.017 2 DEBUG oslo_concurrency.lockutils [None req-f62005a2-8348-41b6-a27a-fc13947cf675 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "80c1a955-9ccd-4e3e-a048-9949b961b825" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:43 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:43.042 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:15:43 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:43.044 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:15:43 np0005466031 nova_compute[235803]: 2025-10-02 12:15:43.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:43.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:44 np0005466031 nova_compute[235803]: 2025-10-02 12:15:44.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e157 e157: 3 total, 3 up, 3 in
Oct  2 08:15:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:44.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:45.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:46.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:15:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:47.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:15:47 np0005466031 nova_compute[235803]: 2025-10-02 12:15:47.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:15:48.046 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:48.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e158 e158: 3 total, 3 up, 3 in
Oct  2 08:15:49 np0005466031 nova_compute[235803]: 2025-10-02 12:15:49.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:49.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:50 np0005466031 podman[246708]: 2025-10-02 12:15:50.629662676 +0000 UTC m=+0.051620404 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 08:15:50 np0005466031 podman[246709]: 2025-10-02 12:15:50.666333866 +0000 UTC m=+0.084121153 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 08:15:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:50.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:51.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e159 e159: 3 total, 3 up, 3 in
Oct  2 08:15:52 np0005466031 nova_compute[235803]: 2025-10-02 12:15:52.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:52.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:53.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:54 np0005466031 nova_compute[235803]: 2025-10-02 12:15:54.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:54.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e160 e160: 3 total, 3 up, 3 in
Oct  2 08:15:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:55.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:56.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:57.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:57 np0005466031 nova_compute[235803]: 2025-10-02 12:15:57.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:57 np0005466031 podman[246756]: 2025-10-02 12:15:57.62071213 +0000 UTC m=+0.057240845 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:15:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e161 e161: 3 total, 3 up, 3 in
Oct  2 08:15:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:58.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:59 np0005466031 nova_compute[235803]: 2025-10-02 12:15:59.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:15:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:59.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:00.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:01.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:02 np0005466031 nova_compute[235803]: 2025-10-02 12:16:02.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:02.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:03 np0005466031 nova_compute[235803]: 2025-10-02 12:16:03.111 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:03 np0005466031 nova_compute[235803]: 2025-10-02 12:16:03.136 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Triggering sync for uuid 80c1a955-9ccd-4e3e-a048-9949b961b825 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:16:03 np0005466031 nova_compute[235803]: 2025-10-02 12:16:03.138 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "80c1a955-9ccd-4e3e-a048-9949b961b825" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:03 np0005466031 nova_compute[235803]: 2025-10-02 12:16:03.139 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "80c1a955-9ccd-4e3e-a048-9949b961b825" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:03 np0005466031 nova_compute[235803]: 2025-10-02 12:16:03.177 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "80c1a955-9ccd-4e3e-a048-9949b961b825" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:03.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:03 np0005466031 podman[246779]: 2025-10-02 12:16:03.636125471 +0000 UTC m=+0.069421428 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 08:16:04 np0005466031 nova_compute[235803]: 2025-10-02 12:16:04.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:05.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:05 np0005466031 nova_compute[235803]: 2025-10-02 12:16:05.304 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "4f887222-13c7-4435-86a6-639126cf456f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:05 np0005466031 nova_compute[235803]: 2025-10-02 12:16:05.305 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "4f887222-13c7-4435-86a6-639126cf456f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:05 np0005466031 nova_compute[235803]: 2025-10-02 12:16:05.326 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:16:05 np0005466031 nova_compute[235803]: 2025-10-02 12:16:05.409 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:05 np0005466031 nova_compute[235803]: 2025-10-02 12:16:05.409 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:05 np0005466031 nova_compute[235803]: 2025-10-02 12:16:05.417 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:16:05 np0005466031 nova_compute[235803]: 2025-10-02 12:16:05.417 2 INFO nova.compute.claims [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:16:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:16:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:05.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:16:05 np0005466031 nova_compute[235803]: 2025-10-02 12:16:05.575 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e162 e162: 3 total, 3 up, 3 in
Oct  2 08:16:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3103502843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.015 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.023 2 DEBUG nova.compute.provider_tree [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.188 2 DEBUG nova.scheduler.client.report [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.278 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.299 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "936eccfb-dac9-41d4-9d6f-5c3a7769f1e9" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.300 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "936eccfb-dac9-41d4-9d6f-5c3a7769f1e9" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.315 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] No node specified, defaulting to compute-2.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.411 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "936eccfb-dac9-41d4-9d6f-5c3a7769f1e9" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.412 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.473 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.474 2 DEBUG nova.network.neutron [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.492 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.518 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.665 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.666 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.667 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Creating image(s)#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.694 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 4f887222-13c7-4435-86a6-639126cf456f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.722 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 4f887222-13c7-4435-86a6-639126cf456f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.749 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 4f887222-13c7-4435-86a6-639126cf456f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.754 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.775 2 DEBUG nova.network.neutron [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.775 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.811 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.812 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.812 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.813 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.835 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 4f887222-13c7-4435-86a6-639126cf456f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:06 np0005466031 nova_compute[235803]: 2025-10-02 12:16:06.839 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4f887222-13c7-4435-86a6-639126cf456f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:07.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:16:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:07.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:16:07 np0005466031 nova_compute[235803]: 2025-10-02 12:16:07.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e163 e163: 3 total, 3 up, 3 in
Oct  2 08:16:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:16:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 5175 writes, 26K keys, 5175 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
Cumulative WAL: 5175 writes, 5175 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1600 writes, 7675 keys, 1600 commit groups, 1.0 writes per commit group, ingest: 16.04 MB, 0.03 MB/s
Interval WAL: 1600 writes, 1600 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    112.7      0.27              0.09        14    0.020       0      0       0.0       0.0
  L6      1/0    8.64 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    189.1    157.0      0.69              0.33        13    0.053     61K   6846       0.0       0.0
 Sum      1/0    8.64 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    135.6    144.5      0.97              0.42        27    0.036     61K   6846       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   5.2    203.1    208.0      0.25              0.12        10    0.025     25K   2536       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    189.1    157.0      0.69              0.33        13    0.053     61K   6846       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    113.8      0.27              0.09        13    0.021       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.030, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.14 GB write, 0.08 MB/s write, 0.13 GB read, 0.07 MB/s read, 1.0 seconds
Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5570d4fad1f0#2 capacity: 304.00 MB usage: 12.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000158 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(723,11.89 MB,3.91217%) FilterBlock(27,176.05 KB,0.0565529%) IndexBlock(27,329.98 KB,0.106003%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  2 08:16:08 np0005466031 nova_compute[235803]: 2025-10-02 12:16:08.756 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4f887222-13c7-4435-86a6-639126cf456f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.917s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
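Note: messages in this journal carry rsyslog's control-character escaping, where `#NNN` is the octal code of the escaped byte (`#012` is a newline, `#033` is ESC, so `#033[00m` is an ANSI color reset). A minimal decoder sketch for turning these lines back into their literal form:

```python
import re

def decode_syslog_escapes(line: str) -> str:
    """Decode rsyslog '#NNN' octal escapes (e.g. '#012' -> '\n',
    '#033' -> ESC) back into their literal characters."""
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

# Example: the RocksDB stats dump becomes multi-line again.
msg = "** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval"
print(decode_syslog_escapes(msg))
# → ** DB Stats **
#   Uptime(secs): 1800.0 total, 600.0 interval
```

Piping the journal through this decoder makes the embedded tables and ANSI sequences readable or strippable with standard tools.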
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.005 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] resizing rbd image 4f887222-13c7-4435-86a6-639126cf456f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
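The resize target above is simply the flavor's 1 GiB root disk (`root_gb=1` for `m1.nano`, visible later in the flavor dump) expressed in bytes:

```python
# Nova resizes the imported base image up to the flavor's root disk size.
GiB = 1024 ** 3
root_gb = 1  # m1.nano root_gb from the flavor in this log
print(root_gb * GiB)  # → 1073741824, the byte count logged by rbd_utils
```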
Oct  2 08:16:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:09.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:09.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.588 2 DEBUG nova.objects.instance [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'migration_context' on Instance uuid 4f887222-13c7-4435-86a6-639126cf456f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.613 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.614 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Ensure instance console log exists: /var/lib/nova/instances/4f887222-13c7-4435-86a6-639126cf456f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.614 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.614 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.615 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.616 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.620 2 WARNING nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.625 2 DEBUG nova.virt.libvirt.host [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.625 2 DEBUG nova.virt.libvirt.host [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.628 2 DEBUG nova.virt.libvirt.host [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.629 2 DEBUG nova.virt.libvirt.host [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.630 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.630 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.630 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.630 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.631 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.631 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.631 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.631 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.631 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.632 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.632 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.632 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:16:09 np0005466031 nova_compute[235803]: 2025-10-02 12:16:09.634 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:16:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/249885067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:16:10 np0005466031 nova_compute[235803]: 2025-10-02 12:16:10.031 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
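Nova shells out to `ceph mon dump --format=json` here to discover monitor endpoints for the `<host>` elements of the libvirt RBD disk XML. A hedged sketch of extracting host:port pairs from monmap-shaped JSON (the sample below is illustrative, not the actual command output; real monmaps carry more fields):

```python
import json

# Illustrative monmap-shaped sample (an assumption for this sketch).
sample = json.dumps({
    "epoch": 3,
    "mons": [
        {"rank": 0, "name": "mon-a", "addr": "192.168.122.100:6789/0"},
        {"rank": 1, "name": "mon-b", "addr": "192.168.122.102:6789/0"},
    ],
})

def mon_hosts(monmap_json: str):
    """Return (host, port) pairs from a monmap's 'mons' list."""
    pairs = []
    for mon in json.loads(monmap_json)["mons"]:
        hostport = mon["addr"].split("/")[0]  # drop the trailing "/nonce"
        host, port = hostport.rsplit(":", 1)
        pairs.append((host, int(port)))
    return pairs

print(mon_hosts(sample))
```

The three `<host ... port="6789"/>` entries in the generated domain XML further down correspond to exactly this kind of lookup.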
Oct  2 08:16:10 np0005466031 nova_compute[235803]: 2025-10-02 12:16:10.056 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 4f887222-13c7-4435-86a6-639126cf456f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:10 np0005466031 nova_compute[235803]: 2025-10-02 12:16:10.059 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:16:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2701199019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:16:10 np0005466031 nova_compute[235803]: 2025-10-02 12:16:10.483 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:10 np0005466031 nova_compute[235803]: 2025-10-02 12:16:10.485 2 DEBUG nova.objects.instance [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f887222-13c7-4435-86a6-639126cf456f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:10 np0005466031 nova_compute[235803]: 2025-10-02 12:16:10.499 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  <uuid>4f887222-13c7-4435-86a6-639126cf456f</uuid>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  <name>instance-0000001b</name>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServersOnMultiNodesTest-server-1094595277-1</nova:name>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:16:09</nova:creationTime>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <nova:user uuid="27279919e67c49e1a04b6eec249ecc87">tempest-ServersOnMultiNodesTest-348944321-project-member</nova:user>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <nova:project uuid="a5ac6058475f4875b46ae8f3c4ff33e8">tempest-ServersOnMultiNodesTest-348944321</nova:project>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <nova:ports/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <entry name="serial">4f887222-13c7-4435-86a6-639126cf456f</entry>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <entry name="uuid">4f887222-13c7-4435-86a6-639126cf456f</entry>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/4f887222-13c7-4435-86a6-639126cf456f_disk">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/4f887222-13c7-4435-86a6-639126cf456f_disk.config">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/4f887222-13c7-4435-86a6-639126cf456f/console.log" append="off"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:16:10 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:16:10 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:16:10 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:16:10 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 08:16:10 np0005466031 nova_compute[235803]: 2025-10-02 12:16:10.551 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:16:10 np0005466031 nova_compute[235803]: 2025-10-02 12:16:10.551 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:16:10 np0005466031 nova_compute[235803]: 2025-10-02 12:16:10.552 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Using config drive
Oct  2 08:16:10 np0005466031 nova_compute[235803]: 2025-10-02 12:16:10.573 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 4f887222-13c7-4435-86a6-639126cf456f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:10 np0005466031 nova_compute[235803]: 2025-10-02 12:16:10.868 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Creating config drive at /var/lib/nova/instances/4f887222-13c7-4435-86a6-639126cf456f/disk.config
Oct  2 08:16:10 np0005466031 nova_compute[235803]: 2025-10-02 12:16:10.875 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4f887222-13c7-4435-86a6-639126cf456f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqq8u3lhe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:11 np0005466031 nova_compute[235803]: 2025-10-02 12:16:11.001 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4f887222-13c7-4435-86a6-639126cf456f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqq8u3lhe" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:11.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:11 np0005466031 nova_compute[235803]: 2025-10-02 12:16:11.033 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image 4f887222-13c7-4435-86a6-639126cf456f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:11 np0005466031 nova_compute[235803]: 2025-10-02 12:16:11.037 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4f887222-13c7-4435-86a6-639126cf456f/disk.config 4f887222-13c7-4435-86a6-639126cf456f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:11 np0005466031 nova_compute[235803]: 2025-10-02 12:16:11.207 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4f887222-13c7-4435-86a6-639126cf456f/disk.config 4f887222-13c7-4435-86a6-639126cf456f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:11 np0005466031 nova_compute[235803]: 2025-10-02 12:16:11.208 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Deleting local config drive /var/lib/nova/instances/4f887222-13c7-4435-86a6-639126cf456f/disk.config because it was imported into RBD.
Oct  2 08:16:11 np0005466031 systemd-machined[192227]: New machine qemu-10-instance-0000001b.
Oct  2 08:16:11 np0005466031 systemd[1]: Started Virtual Machine qemu-10-instance-0000001b.
Oct  2 08:16:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:11.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:16:11Z|00071|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.472 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407372.4723275, 4f887222-13c7-4435-86a6-639126cf456f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.474 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4f887222-13c7-4435-86a6-639126cf456f] VM Resumed (Lifecycle Event)
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.477 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.477 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.481 2 INFO nova.virt.libvirt.driver [-] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Instance spawned successfully.
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.481 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.512 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.515 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.523 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.523 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.524 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.525 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.525 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.526 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.563 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4f887222-13c7-4435-86a6-639126cf456f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.564 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407372.4731297, 4f887222-13c7-4435-86a6-639126cf456f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.564 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4f887222-13c7-4435-86a6-639126cf456f] VM Started (Lifecycle Event)
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.617 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.622 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.641 2 INFO nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Took 5.98 seconds to spawn the instance on the hypervisor.
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.642 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.662 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4f887222-13c7-4435-86a6-639126cf456f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.741 2 INFO nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Took 7.37 seconds to build instance.
Oct  2 08:16:12 np0005466031 nova_compute[235803]: 2025-10-02 12:16:12.794 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "4f887222-13c7-4435-86a6-639126cf456f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:13.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:13.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e164 e164: 3 total, 3 up, 3 in
Oct  2 08:16:14 np0005466031 nova_compute[235803]: 2025-10-02 12:16:14.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:15.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:15 np0005466031 nova_compute[235803]: 2025-10-02 12:16:15.275 2 DEBUG oslo_concurrency.lockutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "4f887222-13c7-4435-86a6-639126cf456f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:15 np0005466031 nova_compute[235803]: 2025-10-02 12:16:15.275 2 DEBUG oslo_concurrency.lockutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "4f887222-13c7-4435-86a6-639126cf456f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:15 np0005466031 nova_compute[235803]: 2025-10-02 12:16:15.276 2 DEBUG oslo_concurrency.lockutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "4f887222-13c7-4435-86a6-639126cf456f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:15 np0005466031 nova_compute[235803]: 2025-10-02 12:16:15.276 2 DEBUG oslo_concurrency.lockutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "4f887222-13c7-4435-86a6-639126cf456f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:15 np0005466031 nova_compute[235803]: 2025-10-02 12:16:15.276 2 DEBUG oslo_concurrency.lockutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "4f887222-13c7-4435-86a6-639126cf456f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:15 np0005466031 nova_compute[235803]: 2025-10-02 12:16:15.277 2 INFO nova.compute.manager [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Terminating instance
Oct  2 08:16:15 np0005466031 nova_compute[235803]: 2025-10-02 12:16:15.278 2 DEBUG oslo_concurrency.lockutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "refresh_cache-4f887222-13c7-4435-86a6-639126cf456f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:16:15 np0005466031 nova_compute[235803]: 2025-10-02 12:16:15.278 2 DEBUG oslo_concurrency.lockutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquired lock "refresh_cache-4f887222-13c7-4435-86a6-639126cf456f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:16:15 np0005466031 nova_compute[235803]: 2025-10-02 12:16:15.278 2 DEBUG nova.network.neutron [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:16:15 np0005466031 nova_compute[235803]: 2025-10-02 12:16:15.492 2 DEBUG nova.network.neutron [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:16:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:15.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:15 np0005466031 nova_compute[235803]: 2025-10-02 12:16:15.880 2 DEBUG nova.network.neutron [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:16:15 np0005466031 nova_compute[235803]: 2025-10-02 12:16:15.898 2 DEBUG oslo_concurrency.lockutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Releasing lock "refresh_cache-4f887222-13c7-4435-86a6-639126cf456f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:16:15 np0005466031 nova_compute[235803]: 2025-10-02 12:16:15.900 2 DEBUG nova.compute.manager [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:16:15 np0005466031 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct  2 08:16:15 np0005466031 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000001b.scope: Consumed 4.585s CPU time.
Oct  2 08:16:15 np0005466031 systemd-machined[192227]: Machine qemu-10-instance-0000001b terminated.
Oct  2 08:16:16 np0005466031 nova_compute[235803]: 2025-10-02 12:16:16.120 2 INFO nova.virt.libvirt.driver [-] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Instance destroyed successfully.
Oct  2 08:16:16 np0005466031 nova_compute[235803]: 2025-10-02 12:16:16.120 2 DEBUG nova.objects.instance [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'resources' on Instance uuid 4f887222-13c7-4435-86a6-639126cf456f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:16:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:17.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e165 e165: 3 total, 3 up, 3 in
Oct  2 08:16:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:17.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:17 np0005466031 nova_compute[235803]: 2025-10-02 12:16:17.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:19.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:19 np0005466031 nova_compute[235803]: 2025-10-02 12:16:19.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:19.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:16:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:21.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:16:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:21.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:21 np0005466031 podman[247495]: 2025-10-02 12:16:21.630786851 +0000 UTC m=+0.059850421 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  2 08:16:21 np0005466031 podman[247496]: 2025-10-02 12:16:21.706832109 +0000 UTC m=+0.135070536 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:16:22 np0005466031 nova_compute[235803]: 2025-10-02 12:16:22.061 2 INFO nova.virt.libvirt.driver [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Deleting instance files /var/lib/nova/instances/4f887222-13c7-4435-86a6-639126cf456f_del
Oct  2 08:16:22 np0005466031 nova_compute[235803]: 2025-10-02 12:16:22.061 2 INFO nova.virt.libvirt.driver [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Deletion of /var/lib/nova/instances/4f887222-13c7-4435-86a6-639126cf456f_del complete
Oct  2 08:16:22 np0005466031 nova_compute[235803]: 2025-10-02 12:16:22.120 2 INFO nova.compute.manager [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Took 6.22 seconds to destroy the instance on the hypervisor.
Oct  2 08:16:22 np0005466031 nova_compute[235803]: 2025-10-02 12:16:22.121 2 DEBUG oslo.service.loopingcall [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:16:22 np0005466031 nova_compute[235803]: 2025-10-02 12:16:22.121 2 DEBUG nova.compute.manager [-] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:16:22 np0005466031 nova_compute[235803]: 2025-10-02 12:16:22.122 2 DEBUG nova.network.neutron [-] [instance: 4f887222-13c7-4435-86a6-639126cf456f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:16:22 np0005466031 nova_compute[235803]: 2025-10-02 12:16:22.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:22 np0005466031 nova_compute[235803]: 2025-10-02 12:16:22.622 2 DEBUG nova.network.neutron [-] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:16:22 np0005466031 nova_compute[235803]: 2025-10-02 12:16:22.799 2 DEBUG nova.network.neutron [-] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:16:23 np0005466031 nova_compute[235803]: 2025-10-02 12:16:23.012 2 INFO nova.compute.manager [-] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Took 0.89 seconds to deallocate network for instance.
Oct  2 08:16:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:16:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:23.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:16:23 np0005466031 nova_compute[235803]: 2025-10-02 12:16:23.290 2 DEBUG oslo_concurrency.lockutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:23 np0005466031 nova_compute[235803]: 2025-10-02 12:16:23.291 2 DEBUG oslo_concurrency.lockutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:23 np0005466031 nova_compute[235803]: 2025-10-02 12:16:23.391 2 DEBUG oslo_concurrency.processutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:23.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e166 e166: 3 total, 3 up, 3 in
Oct  2 08:16:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3466060320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:23 np0005466031 nova_compute[235803]: 2025-10-02 12:16:23.884 2 DEBUG oslo_concurrency.processutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:23 np0005466031 nova_compute[235803]: 2025-10-02 12:16:23.889 2 DEBUG nova.compute.provider_tree [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:16:23 np0005466031 nova_compute[235803]: 2025-10-02 12:16:23.987 2 DEBUG nova.scheduler.client.report [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:16:24 np0005466031 nova_compute[235803]: 2025-10-02 12:16:24.190 2 DEBUG oslo_concurrency.lockutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:24 np0005466031 nova_compute[235803]: 2025-10-02 12:16:24.274 2 INFO nova.scheduler.client.report [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Deleted allocations for instance 4f887222-13c7-4435-86a6-639126cf456f#033[00m
Oct  2 08:16:24 np0005466031 nova_compute[235803]: 2025-10-02 12:16:24.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:24 np0005466031 nova_compute[235803]: 2025-10-02 12:16:24.579 2 DEBUG oslo_concurrency.lockutils [None req-cc86b419-a2e7-45fd-b546-170845b4f570 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "4f887222-13c7-4435-86a6-639126cf456f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:16:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:25.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:16:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:25.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:16:25.823 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:16:25.823 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:16:25.823 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:16:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:16:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:27.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:16:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:27.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:16:27 np0005466031 nova_compute[235803]: 2025-10-02 12:16:27.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:27 np0005466031 podman[247590]: 2025-10-02 12:16:27.873478701 +0000 UTC m=+0.054747313 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:16:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:29.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:29 np0005466031 nova_compute[235803]: 2025-10-02 12:16:29.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:29.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:16:29.999 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:16:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:16:29.999 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:16:30 np0005466031 nova_compute[235803]: 2025-10-02 12:16:30.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:30 np0005466031 nova_compute[235803]: 2025-10-02 12:16:30.664 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:16:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:31.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:16:31 np0005466031 nova_compute[235803]: 2025-10-02 12:16:31.119 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407376.1178493, 4f887222-13c7-4435-86a6-639126cf456f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:16:31 np0005466031 nova_compute[235803]: 2025-10-02 12:16:31.119 2 INFO nova.compute.manager [-] [instance: 4f887222-13c7-4435-86a6-639126cf456f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:16:31 np0005466031 nova_compute[235803]: 2025-10-02 12:16:31.214 2 DEBUG nova.compute.manager [None req-0a240a08-7e1a-451b-a227-91a7ec419d8d - - - - - -] [instance: 4f887222-13c7-4435-86a6-639126cf456f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:31.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:32 np0005466031 nova_compute[235803]: 2025-10-02 12:16:32.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:32 np0005466031 nova_compute[235803]: 2025-10-02 12:16:32.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:33.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:33.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:33 np0005466031 nova_compute[235803]: 2025-10-02 12:16:33.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:33 np0005466031 nova_compute[235803]: 2025-10-02 12:16:33.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:16:33 np0005466031 nova_compute[235803]: 2025-10-02 12:16:33.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:16:34 np0005466031 nova_compute[235803]: 2025-10-02 12:16:34.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:34 np0005466031 nova_compute[235803]: 2025-10-02 12:16:34.615 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-80c1a955-9ccd-4e3e-a048-9949b961b825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:16:34 np0005466031 nova_compute[235803]: 2025-10-02 12:16:34.616 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-80c1a955-9ccd-4e3e-a048-9949b961b825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:16:34 np0005466031 nova_compute[235803]: 2025-10-02 12:16:34.616 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:16:34 np0005466031 nova_compute[235803]: 2025-10-02 12:16:34.616 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 80c1a955-9ccd-4e3e-a048-9949b961b825 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:34 np0005466031 podman[247639]: 2025-10-02 12:16:34.626693355 +0000 UTC m=+0.057971947 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:16:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:16:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:35.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:16:35 np0005466031 nova_compute[235803]: 2025-10-02 12:16:35.053 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:16:35 np0005466031 nova_compute[235803]: 2025-10-02 12:16:35.430 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:35 np0005466031 nova_compute[235803]: 2025-10-02 12:16:35.458 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-80c1a955-9ccd-4e3e-a048-9949b961b825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:16:35 np0005466031 nova_compute[235803]: 2025-10-02 12:16:35.458 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:16:35 np0005466031 nova_compute[235803]: 2025-10-02 12:16:35.459 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:35 np0005466031 nova_compute[235803]: 2025-10-02 12:16:35.459 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:35 np0005466031 nova_compute[235803]: 2025-10-02 12:16:35.459 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:35.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:35 np0005466031 nova_compute[235803]: 2025-10-02 12:16:35.548 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:35 np0005466031 nova_compute[235803]: 2025-10-02 12:16:35.549 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:35 np0005466031 nova_compute[235803]: 2025-10-02 12:16:35.549 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:35 np0005466031 nova_compute[235803]: 2025-10-02 12:16:35.549 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:16:35 np0005466031 nova_compute[235803]: 2025-10-02 12:16:35.550 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:35 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3171481446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:35 np0005466031 nova_compute[235803]: 2025-10-02 12:16:35.965 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:36 np0005466031 nova_compute[235803]: 2025-10-02 12:16:36.286 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:16:36 np0005466031 nova_compute[235803]: 2025-10-02 12:16:36.287 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:16:36 np0005466031 nova_compute[235803]: 2025-10-02 12:16:36.417 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:16:36 np0005466031 nova_compute[235803]: 2025-10-02 12:16:36.418 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4629MB free_disk=20.737285614013672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:16:36 np0005466031 nova_compute[235803]: 2025-10-02 12:16:36.419 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:36 np0005466031 nova_compute[235803]: 2025-10-02 12:16:36.419 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:36 np0005466031 nova_compute[235803]: 2025-10-02 12:16:36.502 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 80c1a955-9ccd-4e3e-a048-9949b961b825 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:16:36 np0005466031 nova_compute[235803]: 2025-10-02 12:16:36.503 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:16:36 np0005466031 nova_compute[235803]: 2025-10-02 12:16:36.503 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:16:36 np0005466031 nova_compute[235803]: 2025-10-02 12:16:36.544 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:36 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2193579931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:36 np0005466031 nova_compute[235803]: 2025-10-02 12:16:36.958 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:36 np0005466031 nova_compute[235803]: 2025-10-02 12:16:36.963 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:16:37 np0005466031 nova_compute[235803]: 2025-10-02 12:16:37.012 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:16:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:37.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:37 np0005466031 nova_compute[235803]: 2025-10-02 12:16:37.058 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:16:37 np0005466031 nova_compute[235803]: 2025-10-02 12:16:37.058 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:37 np0005466031 nova_compute[235803]: 2025-10-02 12:16:37.235 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:16:37 np0005466031 nova_compute[235803]: 2025-10-02 12:16:37.236 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:16:37 np0005466031 nova_compute[235803]: 2025-10-02 12:16:37.236 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:16:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:37.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:37 np0005466031 nova_compute[235803]: 2025-10-02 12:16:37.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:37 np0005466031 nova_compute[235803]: 2025-10-02 12:16:37.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:16:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:39.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.246 2 DEBUG nova.compute.manager [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.422 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.423 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.476 2 DEBUG nova.objects.instance [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lazy-loading 'pci_requests' on Instance uuid cfe39611-f626-4dba-8730-190f423de8a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.516 2 DEBUG nova.virt.hardware [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.517 2 INFO nova.compute.claims [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Claim successful on node compute-2.ctlplane.example.com
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.517 2 DEBUG nova.objects.instance [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lazy-loading 'resources' on Instance uuid cfe39611-f626-4dba-8730-190f423de8a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.534 2 DEBUG oslo_concurrency.lockutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "80c1a955-9ccd-4e3e-a048-9949b961b825" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.534 2 DEBUG oslo_concurrency.lockutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "80c1a955-9ccd-4e3e-a048-9949b961b825" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.535 2 DEBUG oslo_concurrency.lockutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "80c1a955-9ccd-4e3e-a048-9949b961b825-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.535 2 DEBUG oslo_concurrency.lockutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "80c1a955-9ccd-4e3e-a048-9949b961b825-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.535 2 DEBUG oslo_concurrency.lockutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "80c1a955-9ccd-4e3e-a048-9949b961b825-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.537 2 INFO nova.compute.manager [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Terminating instance
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.538 2 DEBUG oslo_concurrency.lockutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "refresh_cache-80c1a955-9ccd-4e3e-a048-9949b961b825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.538 2 DEBUG oslo_concurrency.lockutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquired lock "refresh_cache-80c1a955-9ccd-4e3e-a048-9949b961b825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.538 2 DEBUG nova.network.neutron [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:16:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:39.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.565 2 DEBUG nova.objects.instance [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lazy-loading 'numa_topology' on Instance uuid cfe39611-f626-4dba-8730-190f423de8a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:16:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.626 2 DEBUG nova.objects.instance [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lazy-loading 'pci_devices' on Instance uuid cfe39611-f626-4dba-8730-190f423de8a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.727 2 DEBUG nova.network.neutron [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.832 2 INFO nova.compute.resource_tracker [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Updating resource usage from migration 7e1ee5ca-994f-4ce3-b2ab-0267a1862ca4
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.832 2 DEBUG nova.compute.resource_tracker [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Starting to track incoming migration 7e1ee5ca-994f-4ce3-b2ab-0267a1862ca4 with flavor 99c52872-4e37-4be3-86cc-757b8f375aa8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.952 2 DEBUG oslo_concurrency.processutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:39 np0005466031 nova_compute[235803]: 2025-10-02 12:16:39.986 2 DEBUG nova.network.neutron [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:16:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:16:40.001 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:16:40 np0005466031 nova_compute[235803]: 2025-10-02 12:16:40.082 2 DEBUG oslo_concurrency.lockutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Releasing lock "refresh_cache-80c1a955-9ccd-4e3e-a048-9949b961b825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:16:40 np0005466031 nova_compute[235803]: 2025-10-02 12:16:40.083 2 DEBUG nova.compute.manager [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:16:40 np0005466031 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct  2 08:16:40 np0005466031 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000017.scope: Consumed 15.209s CPU time.
Oct  2 08:16:40 np0005466031 systemd-machined[192227]: Machine qemu-9-instance-00000017 terminated.
Oct  2 08:16:40 np0005466031 nova_compute[235803]: 2025-10-02 12:16:40.309 2 INFO nova.virt.libvirt.driver [-] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Instance destroyed successfully.
Oct  2 08:16:40 np0005466031 nova_compute[235803]: 2025-10-02 12:16:40.310 2 DEBUG nova.objects.instance [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'resources' on Instance uuid 80c1a955-9ccd-4e3e-a048-9949b961b825 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:16:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:40 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/142292739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:40 np0005466031 nova_compute[235803]: 2025-10-02 12:16:40.395 2 DEBUG oslo_concurrency.processutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:40 np0005466031 nova_compute[235803]: 2025-10-02 12:16:40.401 2 DEBUG nova.compute.provider_tree [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:16:40 np0005466031 nova_compute[235803]: 2025-10-02 12:16:40.428 2 DEBUG nova.scheduler.client.report [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:16:40 np0005466031 nova_compute[235803]: 2025-10-02 12:16:40.530 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:40 np0005466031 nova_compute[235803]: 2025-10-02 12:16:40.530 2 INFO nova.compute.manager [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Migrating
Oct  2 08:16:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:41.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:41.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:41 np0005466031 systemd[1]: Created slice User Slice of UID 42436.
Oct  2 08:16:41 np0005466031 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct  2 08:16:41 np0005466031 systemd-logind[786]: New session 53 of user nova.
Oct  2 08:16:41 np0005466031 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct  2 08:16:41 np0005466031 systemd[1]: Starting User Manager for UID 42436...
Oct  2 08:16:41 np0005466031 systemd[247756]: Queued start job for default target Main User Target.
Oct  2 08:16:41 np0005466031 systemd[247756]: Created slice User Application Slice.
Oct  2 08:16:41 np0005466031 systemd[247756]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:16:41 np0005466031 systemd[247756]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 08:16:41 np0005466031 systemd[247756]: Reached target Paths.
Oct  2 08:16:41 np0005466031 systemd[247756]: Reached target Timers.
Oct  2 08:16:41 np0005466031 systemd[247756]: Starting D-Bus User Message Bus Socket...
Oct  2 08:16:41 np0005466031 systemd[247756]: Starting Create User's Volatile Files and Directories...
Oct  2 08:16:41 np0005466031 systemd[247756]: Finished Create User's Volatile Files and Directories.
Oct  2 08:16:41 np0005466031 systemd[247756]: Listening on D-Bus User Message Bus Socket.
Oct  2 08:16:41 np0005466031 systemd[247756]: Reached target Sockets.
Oct  2 08:16:41 np0005466031 systemd[247756]: Reached target Basic System.
Oct  2 08:16:41 np0005466031 systemd[247756]: Reached target Main User Target.
Oct  2 08:16:41 np0005466031 systemd[247756]: Startup finished in 140ms.
Oct  2 08:16:41 np0005466031 systemd[1]: Started User Manager for UID 42436.
Oct  2 08:16:41 np0005466031 systemd[1]: Started Session 53 of User nova.
Oct  2 08:16:42 np0005466031 systemd[1]: session-53.scope: Deactivated successfully.
Oct  2 08:16:42 np0005466031 systemd-logind[786]: Session 53 logged out. Waiting for processes to exit.
Oct  2 08:16:42 np0005466031 systemd-logind[786]: Removed session 53.
Oct  2 08:16:42 np0005466031 nova_compute[235803]: 2025-10-02 12:16:42.114 2 INFO nova.virt.libvirt.driver [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Deleting instance files /var/lib/nova/instances/80c1a955-9ccd-4e3e-a048-9949b961b825_del
Oct  2 08:16:42 np0005466031 nova_compute[235803]: 2025-10-02 12:16:42.115 2 INFO nova.virt.libvirt.driver [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Deletion of /var/lib/nova/instances/80c1a955-9ccd-4e3e-a048-9949b961b825_del complete
Oct  2 08:16:42 np0005466031 systemd-logind[786]: New session 55 of user nova.
Oct  2 08:16:42 np0005466031 systemd[1]: Started Session 55 of User nova.
Oct  2 08:16:42 np0005466031 nova_compute[235803]: 2025-10-02 12:16:42.246 2 INFO nova.compute.manager [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Took 2.16 seconds to destroy the instance on the hypervisor.
Oct  2 08:16:42 np0005466031 nova_compute[235803]: 2025-10-02 12:16:42.246 2 DEBUG oslo.service.loopingcall [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:16:42 np0005466031 nova_compute[235803]: 2025-10-02 12:16:42.246 2 DEBUG nova.compute.manager [-] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:16:42 np0005466031 nova_compute[235803]: 2025-10-02 12:16:42.246 2 DEBUG nova.network.neutron [-] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:16:42 np0005466031 systemd[1]: session-55.scope: Deactivated successfully.
Oct  2 08:16:42 np0005466031 systemd-logind[786]: Session 55 logged out. Waiting for processes to exit.
Oct  2 08:16:42 np0005466031 systemd-logind[786]: Removed session 55.
Oct  2 08:16:42 np0005466031 nova_compute[235803]: 2025-10-02 12:16:42.382 2 DEBUG nova.network.neutron [-] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:16:42 np0005466031 nova_compute[235803]: 2025-10-02 12:16:42.397 2 DEBUG nova.network.neutron [-] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:16:42 np0005466031 nova_compute[235803]: 2025-10-02 12:16:42.434 2 INFO nova.compute.manager [-] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Took 0.19 seconds to deallocate network for instance.
Oct  2 08:16:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:16:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 14K writes, 58K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s#012Cumulative WAL: 14K writes, 4617 syncs, 3.16 writes per sync, written: 0.05 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9114 writes, 35K keys, 9114 commit groups, 1.0 writes per commit group, ingest: 37.51 MB, 0.06 MB/s#012Interval WAL: 9114 writes, 3701 syncs, 2.46 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:16:42 np0005466031 nova_compute[235803]: 2025-10-02 12:16:42.541 2 DEBUG oslo_concurrency.lockutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:42 np0005466031 nova_compute[235803]: 2025-10-02 12:16:42.541 2 DEBUG oslo_concurrency.lockutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:42 np0005466031 nova_compute[235803]: 2025-10-02 12:16:42.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:42 np0005466031 nova_compute[235803]: 2025-10-02 12:16:42.599 2 DEBUG oslo_concurrency.processutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3362255199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:43 np0005466031 nova_compute[235803]: 2025-10-02 12:16:43.023 2 DEBUG oslo_concurrency.processutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:43 np0005466031 nova_compute[235803]: 2025-10-02 12:16:43.029 2 DEBUG nova.compute.provider_tree [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:16:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:43.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:43 np0005466031 nova_compute[235803]: 2025-10-02 12:16:43.080 2 DEBUG nova.scheduler.client.report [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:16:43 np0005466031 nova_compute[235803]: 2025-10-02 12:16:43.128 2 DEBUG oslo_concurrency.lockutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:43 np0005466031 nova_compute[235803]: 2025-10-02 12:16:43.197 2 INFO nova.scheduler.client.report [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Deleted allocations for instance 80c1a955-9ccd-4e3e-a048-9949b961b825#033[00m
Oct  2 08:16:43 np0005466031 nova_compute[235803]: 2025-10-02 12:16:43.364 2 DEBUG oslo_concurrency.lockutils [None req-c7e475e1-aacf-4ca8-83a0-61366d52d112 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "80c1a955-9ccd-4e3e-a048-9949b961b825" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:16:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:43.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:16:43 np0005466031 ceph-mgr[76697]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 08:16:43 np0005466031 ovn_controller[132413]: 2025-10-02T12:16:43Z|00072|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct  2 08:16:44 np0005466031 nova_compute[235803]: 2025-10-02 12:16:44.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:45.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:16:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:45.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:16:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:47.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:47.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:47 np0005466031 nova_compute[235803]: 2025-10-02 12:16:47.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:49.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:49 np0005466031 nova_compute[235803]: 2025-10-02 12:16:49.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:49.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:51.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:51 np0005466031 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  2 08:16:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:51.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:52 np0005466031 systemd[1]: Stopping User Manager for UID 42436...
Oct  2 08:16:52 np0005466031 systemd[247756]: Activating special unit Exit the Session...
Oct  2 08:16:52 np0005466031 systemd[247756]: Stopped target Main User Target.
Oct  2 08:16:52 np0005466031 systemd[247756]: Stopped target Basic System.
Oct  2 08:16:52 np0005466031 systemd[247756]: Stopped target Paths.
Oct  2 08:16:52 np0005466031 systemd[247756]: Stopped target Sockets.
Oct  2 08:16:52 np0005466031 systemd[247756]: Stopped target Timers.
Oct  2 08:16:52 np0005466031 systemd[247756]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:16:52 np0005466031 systemd[247756]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 08:16:52 np0005466031 systemd[247756]: Closed D-Bus User Message Bus Socket.
Oct  2 08:16:52 np0005466031 systemd[247756]: Stopped Create User's Volatile Files and Directories.
Oct  2 08:16:52 np0005466031 systemd[247756]: Removed slice User Application Slice.
Oct  2 08:16:52 np0005466031 systemd[247756]: Reached target Shutdown.
Oct  2 08:16:52 np0005466031 systemd[247756]: Finished Exit the Session.
Oct  2 08:16:52 np0005466031 systemd[247756]: Reached target Exit the Session.
Oct  2 08:16:52 np0005466031 systemd[1]: user@42436.service: Deactivated successfully.
Oct  2 08:16:52 np0005466031 systemd[1]: Stopped User Manager for UID 42436.
Oct  2 08:16:52 np0005466031 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct  2 08:16:52 np0005466031 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct  2 08:16:52 np0005466031 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct  2 08:16:52 np0005466031 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct  2 08:16:52 np0005466031 systemd[1]: Removed slice User Slice of UID 42436.
Oct  2 08:16:52 np0005466031 podman[247906]: 2025-10-02 12:16:52.420432584 +0000 UTC m=+0.080228560 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:16:52 np0005466031 podman[247907]: 2025-10-02 12:16:52.455517189 +0000 UTC m=+0.113701208 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:16:52 np0005466031 nova_compute[235803]: 2025-10-02 12:16:52.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:16:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:53.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:16:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:53.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:54 np0005466031 nova_compute[235803]: 2025-10-02 12:16:54.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:55.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:55 np0005466031 nova_compute[235803]: 2025-10-02 12:16:55.308 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407400.3065338, 80c1a955-9ccd-4e3e-a048-9949b961b825 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:16:55 np0005466031 nova_compute[235803]: 2025-10-02 12:16:55.308 2 INFO nova.compute.manager [-] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:16:55 np0005466031 nova_compute[235803]: 2025-10-02 12:16:55.342 2 DEBUG nova.compute.manager [None req-5da52369-b814-435e-a910-633a019fdf08 - - - - - -] [instance: 80c1a955-9ccd-4e3e-a048-9949b961b825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:55.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:16:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:57.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:16:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:57.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:57 np0005466031 nova_compute[235803]: 2025-10-02 12:16:57.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:57 np0005466031 nova_compute[235803]: 2025-10-02 12:16:57.990 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Acquiring lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:16:57 np0005466031 nova_compute[235803]: 2025-10-02 12:16:57.991 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Acquired lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:16:57 np0005466031 nova_compute[235803]: 2025-10-02 12:16:57.991 2 DEBUG nova.network.neutron [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:16:58 np0005466031 nova_compute[235803]: 2025-10-02 12:16:58.158 2 DEBUG nova.network.neutron [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:16:58 np0005466031 nova_compute[235803]: 2025-10-02 12:16:58.446 2 DEBUG nova.network.neutron [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:58 np0005466031 nova_compute[235803]: 2025-10-02 12:16:58.463 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Releasing lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:16:58 np0005466031 nova_compute[235803]: 2025-10-02 12:16:58.547 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct  2 08:16:58 np0005466031 nova_compute[235803]: 2025-10-02 12:16:58.549 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:16:58 np0005466031 nova_compute[235803]: 2025-10-02 12:16:58.549 2 INFO nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Creating image(s)#033[00m
Oct  2 08:16:58 np0005466031 nova_compute[235803]: 2025-10-02 12:16:58.584 2 DEBUG nova.storage.rbd_utils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] creating snapshot(nova-resize) on rbd image(cfe39611-f626-4dba-8730-190f423de8a1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:16:58 np0005466031 podman[247953]: 2025-10-02 12:16:58.623978534 +0000 UTC m=+0.058813832 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd)
Oct  2 08:16:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:16:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:59.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:16:59 np0005466031 nova_compute[235803]: 2025-10-02 12:16:59.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:16:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:59.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e167 e167: 3 total, 3 up, 3 in
Oct  2 08:16:59 np0005466031 nova_compute[235803]: 2025-10-02 12:16:59.924 2 DEBUG nova.objects.instance [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid cfe39611-f626-4dba-8730-190f423de8a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.059 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.059 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Ensure instance console log exists: /var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.060 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.060 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.060 2 DEBUG oslo_concurrency.lockutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.062 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.067 2 WARNING nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.072 2 DEBUG nova.virt.libvirt.host [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.072 2 DEBUG nova.virt.libvirt.host [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.075 2 DEBUG nova.virt.libvirt.host [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.075 2 DEBUG nova.virt.libvirt.host [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.076 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.076 2 DEBUG nova.virt.hardware [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.077 2 DEBUG nova.virt.hardware [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.077 2 DEBUG nova.virt.hardware [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.077 2 DEBUG nova.virt.hardware [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.077 2 DEBUG nova.virt.hardware [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.078 2 DEBUG nova.virt.hardware [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.078 2 DEBUG nova.virt.hardware [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.078 2 DEBUG nova.virt.hardware [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.078 2 DEBUG nova.virt.hardware [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.079 2 DEBUG nova.virt.hardware [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.079 2 DEBUG nova.virt.hardware [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.079 2 DEBUG nova.objects.instance [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid cfe39611-f626-4dba-8730-190f423de8a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.095 2 DEBUG oslo_concurrency.processutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:17:00 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4085572415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.506 2 DEBUG oslo_concurrency.processutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:00 np0005466031 nova_compute[235803]: 2025-10-02 12:17:00.542 2 DEBUG oslo_concurrency.processutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:17:00 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3244676548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:17:01 np0005466031 nova_compute[235803]: 2025-10-02 12:17:01.013 2 DEBUG oslo_concurrency.processutils [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:01 np0005466031 nova_compute[235803]: 2025-10-02 12:17:01.016 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  <uuid>cfe39611-f626-4dba-8730-190f423de8a1</uuid>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  <name>instance-0000001e</name>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <nova:name>tempest-MigrationsAdminTest-server-407394181</nova:name>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:17:00</nova:creationTime>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <nova:user uuid="ac1b39d94ed94e2490ad953afb3c225f">tempest-MigrationsAdminTest-1653457839-project-member</nova:user>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <nova:project uuid="3d306048f2854052ba5317253b834aa7">tempest-MigrationsAdminTest-1653457839</nova:project>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <nova:ports/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <entry name="serial">cfe39611-f626-4dba-8730-190f423de8a1</entry>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <entry name="uuid">cfe39611-f626-4dba-8730-190f423de8a1</entry>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/cfe39611-f626-4dba-8730-190f423de8a1_disk">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/cfe39611-f626-4dba-8730-190f423de8a1_disk.config">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/cfe39611-f626-4dba-8730-190f423de8a1/console.log" append="off"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:17:01 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:17:01 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:17:01 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:17:01 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:17:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:01.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:01 np0005466031 nova_compute[235803]: 2025-10-02 12:17:01.241 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:17:01 np0005466031 nova_compute[235803]: 2025-10-02 12:17:01.241 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:17:01 np0005466031 nova_compute[235803]: 2025-10-02 12:17:01.242 2 INFO nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Using config drive#033[00m
Oct  2 08:17:01 np0005466031 systemd-machined[192227]: New machine qemu-11-instance-0000001e.
Oct  2 08:17:01 np0005466031 systemd[1]: Started Virtual Machine qemu-11-instance-0000001e.
Oct  2 08:17:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:01.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.101 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407422.1015115, cfe39611-f626-4dba-8730-190f423de8a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.103 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.106 2 DEBUG nova.compute.manager [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.109 2 INFO nova.virt.libvirt.driver [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance running successfully.#033[00m
Oct  2 08:17:02 np0005466031 virtqemud[235323]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.112 2 DEBUG nova.virt.libvirt.guest [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.112 2 DEBUG nova.virt.libvirt.driver [None req-a2bf3fce-8344-45b6-abce-28f81feb5bc2 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.129 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.133 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.178 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.178 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407422.102476, cfe39611-f626-4dba-8730-190f423de8a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.178 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] VM Started (Lifecycle Event)#033[00m
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.211 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.214 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:17:02 np0005466031 nova_compute[235803]: 2025-10-02 12:17:02.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:03.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:03.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:03 np0005466031 nova_compute[235803]: 2025-10-02 12:17:03.923 2 DEBUG oslo_concurrency.lockutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:03 np0005466031 nova_compute[235803]: 2025-10-02 12:17:03.923 2 DEBUG oslo_concurrency.lockutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:03 np0005466031 nova_compute[235803]: 2025-10-02 12:17:03.924 2 DEBUG nova.network.neutron [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:17:04 np0005466031 nova_compute[235803]: 2025-10-02 12:17:04.102 2 DEBUG nova.network.neutron [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:17:04 np0005466031 nova_compute[235803]: 2025-10-02 12:17:04.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:04 np0005466031 nova_compute[235803]: 2025-10-02 12:17:04.540 2 DEBUG nova.network.neutron [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:04 np0005466031 nova_compute[235803]: 2025-10-02 12:17:04.622 2 DEBUG oslo_concurrency.lockutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-cfe39611-f626-4dba-8730-190f423de8a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:04 np0005466031 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct  2 08:17:04 np0005466031 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000001e.scope: Consumed 3.416s CPU time.
Oct  2 08:17:04 np0005466031 systemd-machined[192227]: Machine qemu-11-instance-0000001e terminated.
Oct  2 08:17:04 np0005466031 podman[248186]: 2025-10-02 12:17:04.771563875 +0000 UTC m=+0.051426038 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:17:04 np0005466031 nova_compute[235803]: 2025-10-02 12:17:04.881 2 INFO nova.virt.libvirt.driver [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Instance destroyed successfully.#033[00m
Oct  2 08:17:04 np0005466031 nova_compute[235803]: 2025-10-02 12:17:04.882 2 DEBUG nova.objects.instance [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'resources' on Instance uuid cfe39611-f626-4dba-8730-190f423de8a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:04 np0005466031 nova_compute[235803]: 2025-10-02 12:17:04.917 2 DEBUG oslo_concurrency.lockutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:04 np0005466031 nova_compute[235803]: 2025-10-02 12:17:04.918 2 DEBUG oslo_concurrency.lockutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:05 np0005466031 nova_compute[235803]: 2025-10-02 12:17:05.007 2 DEBUG nova.objects.instance [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'migration_context' on Instance uuid cfe39611-f626-4dba-8730-190f423de8a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:05.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:17:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3342925200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:17:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:17:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3342925200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:17:05 np0005466031 nova_compute[235803]: 2025-10-02 12:17:05.155 2 DEBUG oslo_concurrency.processutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3288555429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:05 np0005466031 nova_compute[235803]: 2025-10-02 12:17:05.560 2 DEBUG oslo_concurrency.processutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:05 np0005466031 nova_compute[235803]: 2025-10-02 12:17:05.570 2 DEBUG nova.compute.provider_tree [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:17:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:05.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:05 np0005466031 nova_compute[235803]: 2025-10-02 12:17:05.681 2 DEBUG nova.scheduler.client.report [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:17:05 np0005466031 nova_compute[235803]: 2025-10-02 12:17:05.845 2 DEBUG oslo_concurrency.lockutils [None req-0736795f-92ac-439d-81d9-47e836f0210d ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:07.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:07.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:07 np0005466031 nova_compute[235803]: 2025-10-02 12:17:07.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:09.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:09 np0005466031 nova_compute[235803]: 2025-10-02 12:17:09.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:09.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e168 e168: 3 total, 3 up, 3 in
Oct  2 08:17:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:11.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:11.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:12 np0005466031 nova_compute[235803]: 2025-10-02 12:17:12.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:13.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:13.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:14 np0005466031 radosgw[82465]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  2 08:17:14 np0005466031 nova_compute[235803]: 2025-10-02 12:17:14.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:14 np0005466031 radosgw[82465]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct  2 08:17:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.003000087s ======
Oct  2 08:17:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:15.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000087s
Oct  2 08:17:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:15.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:15.777 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:17:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:15.778 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:17:15 np0005466031 nova_compute[235803]: 2025-10-02 12:17:15.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:16 np0005466031 nova_compute[235803]: 2025-10-02 12:17:16.320 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:16 np0005466031 nova_compute[235803]: 2025-10-02 12:17:16.320 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:16 np0005466031 nova_compute[235803]: 2025-10-02 12:17:16.348 2 DEBUG nova.compute.manager [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:17:16 np0005466031 nova_compute[235803]: 2025-10-02 12:17:16.415 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:16 np0005466031 nova_compute[235803]: 2025-10-02 12:17:16.416 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:16 np0005466031 nova_compute[235803]: 2025-10-02 12:17:16.423 2 DEBUG nova.virt.hardware [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:17:16 np0005466031 nova_compute[235803]: 2025-10-02 12:17:16.423 2 INFO nova.compute.claims [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:17:16 np0005466031 nova_compute[235803]: 2025-10-02 12:17:16.541 2 DEBUG oslo_concurrency.processutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:16.780 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2874766076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.018 2 DEBUG oslo_concurrency.processutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.028 2 DEBUG nova.compute.provider_tree [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.045 2 DEBUG nova.scheduler.client.report [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:17:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:17.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.089 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.090 2 DEBUG nova.compute.manager [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.144 2 DEBUG nova.compute.manager [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.145 2 DEBUG nova.network.neutron [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.173 2 INFO nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.195 2 DEBUG nova.compute.manager [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.251 2 INFO nova.virt.block_device [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Booting with volume 20a19061-0239-43b4-b9d7-980e7acde072 at /dev/vda#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.421 2 DEBUG nova.policy [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e0cdfd1473bd4963b4ded642a43c35f3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.465 2 DEBUG os_brick.utils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.466 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.477 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.477 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[368d6c2a-76a6-437d-a5f2-e81e67f3ade5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.478 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.487 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.487 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[67bc33a2-0dc3-4d28-b6d5-549fe0895d4e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.489 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.497 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.497 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[2b4cc46f-e664-4db3-b787-ecc88432613b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.499 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8f45be-e672-4d3c-94c7-6b804db4c559]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.499 2 DEBUG oslo_concurrency.processutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.522 2 DEBUG oslo_concurrency.processutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.525 2 DEBUG os_brick.initiator.connectors.lightos [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.526 2 DEBUG os_brick.initiator.connectors.lightos [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.526 2 DEBUG os_brick.initiator.connectors.lightos [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.526 2 DEBUG os_brick.utils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.527 2 DEBUG nova.virt.block_device [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating existing volume attachment record: 6a6d4750-7860-4bbf-bba3-50eb20d823fd _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:17:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:17.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:17 np0005466031 nova_compute[235803]: 2025-10-02 12:17:17.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:18 np0005466031 nova_compute[235803]: 2025-10-02 12:17:18.710 2 DEBUG nova.compute.manager [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:17:18 np0005466031 nova_compute[235803]: 2025-10-02 12:17:18.711 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:17:18 np0005466031 nova_compute[235803]: 2025-10-02 12:17:18.711 2 INFO nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Creating image(s)#033[00m
Oct  2 08:17:18 np0005466031 nova_compute[235803]: 2025-10-02 12:17:18.712 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:17:18 np0005466031 nova_compute[235803]: 2025-10-02 12:17:18.712 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Ensure instance console log exists: /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:17:18 np0005466031 nova_compute[235803]: 2025-10-02 12:17:18.712 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:18 np0005466031 nova_compute[235803]: 2025-10-02 12:17:18.712 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:18 np0005466031 nova_compute[235803]: 2025-10-02 12:17:18.712 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:18 np0005466031 nova_compute[235803]: 2025-10-02 12:17:18.930 2 DEBUG nova.network.neutron [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Successfully created port: 7539c03e-c932-4473-8d75-729cbed6008a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:17:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 e169: 3 total, 3 up, 3 in
Oct  2 08:17:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:19.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:19 np0005466031 nova_compute[235803]: 2025-10-02 12:17:19.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:19.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:19 np0005466031 nova_compute[235803]: 2025-10-02 12:17:19.822 2 DEBUG nova.network.neutron [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Successfully updated port: 7539c03e-c932-4473-8d75-729cbed6008a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:17:19 np0005466031 nova_compute[235803]: 2025-10-02 12:17:19.839 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:19 np0005466031 nova_compute[235803]: 2025-10-02 12:17:19.840 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquired lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:19 np0005466031 nova_compute[235803]: 2025-10-02 12:17:19.840 2 DEBUG nova.network.neutron [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:17:19 np0005466031 nova_compute[235803]: 2025-10-02 12:17:19.879 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407424.8783228, cfe39611-f626-4dba-8730-190f423de8a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:19 np0005466031 nova_compute[235803]: 2025-10-02 12:17:19.880 2 INFO nova.compute.manager [-] [instance: cfe39611-f626-4dba-8730-190f423de8a1] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:17:19 np0005466031 nova_compute[235803]: 2025-10-02 12:17:19.903 2 DEBUG nova.compute.manager [None req-d4ae4150-ebfe-482d-88c1-f512c16406c5 - - - - - -] [instance: cfe39611-f626-4dba-8730-190f423de8a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:19 np0005466031 nova_compute[235803]: 2025-10-02 12:17:19.923 2 DEBUG nova.compute.manager [req-b224d89d-54c9-47c6-83a3-c4906d4dff49 req-a62dd566-39f8-44e9-9f84-c93ab2746787 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-changed-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:19 np0005466031 nova_compute[235803]: 2025-10-02 12:17:19.923 2 DEBUG nova.compute.manager [req-b224d89d-54c9-47c6-83a3-c4906d4dff49 req-a62dd566-39f8-44e9-9f84-c93ab2746787 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Refreshing instance network info cache due to event network-changed-7539c03e-c932-4473-8d75-729cbed6008a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:17:19 np0005466031 nova_compute[235803]: 2025-10-02 12:17:19.924 2 DEBUG oslo_concurrency.lockutils [req-b224d89d-54c9-47c6-83a3-c4906d4dff49 req-a62dd566-39f8-44e9-9f84-c93ab2746787 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:20 np0005466031 nova_compute[235803]: 2025-10-02 12:17:20.017 2 DEBUG nova.network.neutron [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:17:20 np0005466031 nova_compute[235803]: 2025-10-02 12:17:20.994 2 DEBUG nova.network.neutron [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating instance_info_cache with network_info: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.024 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Releasing lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.025 2 DEBUG nova.compute.manager [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Instance network_info: |[{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.025 2 DEBUG oslo_concurrency.lockutils [req-b224d89d-54c9-47c6-83a3-c4906d4dff49 req-a62dd566-39f8-44e9-9f84-c93ab2746787 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.025 2 DEBUG nova.network.neutron [req-b224d89d-54c9-47c6-83a3-c4906d4dff49 req-a62dd566-39f8-44e9-9f84-c93ab2746787 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Refreshing network info cache for port 7539c03e-c932-4473-8d75-729cbed6008a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.028 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Start _get_guest_xml network_info=[{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-20a19061-0239-43b4-b9d7-980e7acde072', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '20a19061-0239-43b4-b9d7-980e7acde072', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ecee1ec0-1a8d-4d67-b996-205a942120ae', 'attached_at': '', 'detached_at': '', 'volume_id': '20a19061-0239-43b4-b9d7-980e7acde072', 'serial': '20a19061-0239-43b4-b9d7-980e7acde072'}, 'attachment_id': '6a6d4750-7860-4bbf-bba3-50eb20d823fd', 'delete_on_termination': True, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.033 2 WARNING nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.041 2 DEBUG nova.virt.libvirt.host [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.041 2 DEBUG nova.virt.libvirt.host [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.045 2 DEBUG nova.virt.libvirt.host [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.046 2 DEBUG nova.virt.libvirt.host [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.047 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.047 2 DEBUG nova.virt.hardware [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.048 2 DEBUG nova.virt.hardware [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.048 2 DEBUG nova.virt.hardware [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.048 2 DEBUG nova.virt.hardware [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.048 2 DEBUG nova.virt.hardware [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.049 2 DEBUG nova.virt.hardware [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.049 2 DEBUG nova.virt.hardware [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.049 2 DEBUG nova.virt.hardware [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.049 2 DEBUG nova.virt.hardware [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.050 2 DEBUG nova.virt.hardware [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.050 2 DEBUG nova.virt.hardware [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.079 2 DEBUG nova.storage.rbd_utils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] rbd image ecee1ec0-1a8d-4d67-b996-205a942120ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.083 2 DEBUG oslo_concurrency.processutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:21.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:17:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1496903658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.519 2 DEBUG oslo_concurrency.processutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.544 2 DEBUG nova.virt.libvirt.vif [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-343208228',display_name='tempest-LiveMigrationTest-server-343208228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-343208228',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-urrasvl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:17:17Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.545 2 DEBUG nova.network.os_vif_util [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converting VIF {"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.546 2 DEBUG nova.network.os_vif_util [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.546 2 DEBUG nova.objects.instance [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid ecee1ec0-1a8d-4d67-b996-205a942120ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.559 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  <uuid>ecee1ec0-1a8d-4d67-b996-205a942120ae</uuid>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  <name>instance-0000001f</name>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <nova:name>tempest-LiveMigrationTest-server-343208228</nova:name>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:17:21</nova:creationTime>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <nova:user uuid="e0cdfd1473bd4963b4ded642a43c35f3">tempest-LiveMigrationTest-1880928942-project-member</nova:user>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <nova:project uuid="7f6188e258a04ea1a49e6b415bce3fc9">tempest-LiveMigrationTest-1880928942</nova:project>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <nova:port uuid="7539c03e-c932-4473-8d75-729cbed6008a">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <entry name="serial">ecee1ec0-1a8d-4d67-b996-205a942120ae</entry>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <entry name="uuid">ecee1ec0-1a8d-4d67-b996-205a942120ae</entry>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/ecee1ec0-1a8d-4d67-b996-205a942120ae_disk.config">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-20a19061-0239-43b4-b9d7-980e7acde072">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <serial>20a19061-0239-43b4-b9d7-980e7acde072</serial>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:0e:5e:ba"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <target dev="tap7539c03e-c9"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae/console.log" append="off"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:17:21 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:17:21 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:17:21 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:17:21 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.561 2 DEBUG nova.compute.manager [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Preparing to wait for external event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.561 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.561 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.561 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.562 2 DEBUG nova.virt.libvirt.vif [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-343208228',display_name='tempest-LiveMigrationTest-server-343208228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-343208228',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-urrasvl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:17:17Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.562 2 DEBUG nova.network.os_vif_util [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converting VIF {"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.563 2 DEBUG nova.network.os_vif_util [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.563 2 DEBUG os_vif [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.564 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.568 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7539c03e-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.568 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7539c03e-c9, col_values=(('external_ids', {'iface-id': '7539c03e-c932-4473-8d75-729cbed6008a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:5e:ba', 'vm-uuid': 'ecee1ec0-1a8d-4d67-b996-205a942120ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:21 np0005466031 NetworkManager[44907]: <info>  [1759407441.5709] manager: (tap7539c03e-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.579 2 INFO os_vif [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9')#033[00m
Oct  2 08:17:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:21.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.643 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.644 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.644 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] No VIF found with MAC fa:16:3e:0e:5e:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.644 2 INFO nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Using config drive#033[00m
Oct  2 08:17:21 np0005466031 nova_compute[235803]: 2025-10-02 12:17:21.667 2 DEBUG nova.storage.rbd_utils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] rbd image ecee1ec0-1a8d-4d67-b996-205a942120ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.222 2 INFO nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Creating config drive at /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae/disk.config#033[00m
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.231 2 DEBUG oslo_concurrency.processutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjx5fmp88 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.360 2 DEBUG oslo_concurrency.processutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjx5fmp88" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.393 2 DEBUG nova.storage.rbd_utils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] rbd image ecee1ec0-1a8d-4d67-b996-205a942120ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.397 2 DEBUG oslo_concurrency.processutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae/disk.config ecee1ec0-1a8d-4d67-b996-205a942120ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.438 2 DEBUG nova.network.neutron [req-b224d89d-54c9-47c6-83a3-c4906d4dff49 req-a62dd566-39f8-44e9-9f84-c93ab2746787 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updated VIF entry in instance network info cache for port 7539c03e-c932-4473-8d75-729cbed6008a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.439 2 DEBUG nova.network.neutron [req-b224d89d-54c9-47c6-83a3-c4906d4dff49 req-a62dd566-39f8-44e9-9f84-c93ab2746787 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating instance_info_cache with network_info: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.504 2 DEBUG oslo_concurrency.lockutils [req-b224d89d-54c9-47c6-83a3-c4906d4dff49 req-a62dd566-39f8-44e9-9f84-c93ab2746787 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:22 np0005466031 podman[248417]: 2025-10-02 12:17:22.643745381 +0000 UTC m=+0.065214917 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:17:22 np0005466031 podman[248418]: 2025-10-02 12:17:22.672402779 +0000 UTC m=+0.093000260 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.903 2 DEBUG oslo_concurrency.processutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae/disk.config ecee1ec0-1a8d-4d67-b996-205a942120ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.903 2 INFO nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Deleting local config drive /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae/disk.config because it was imported into RBD.#033[00m
Oct  2 08:17:22 np0005466031 kernel: tap7539c03e-c9: entered promiscuous mode
Oct  2 08:17:22 np0005466031 NetworkManager[44907]: <info>  [1759407442.9466] manager: (tap7539c03e-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:22 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:22Z|00073|binding|INFO|Claiming lport 7539c03e-c932-4473-8d75-729cbed6008a for this chassis.
Oct  2 08:17:22 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:22Z|00074|binding|INFO|7539c03e-c932-4473-8d75-729cbed6008a: Claiming fa:16:3e:0e:5e:ba 10.100.0.9
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:22 np0005466031 nova_compute[235803]: 2025-10-02 12:17:22.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:22.963 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:5e:ba 10.100.0.9'], port_security=['fa:16:3e:0e:5e:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ecee1ec0-1a8d-4d67-b996-205a942120ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7539c03e-c932-4473-8d75-729cbed6008a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:17:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:22.964 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7539c03e-c932-4473-8d75-729cbed6008a in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e bound to our chassis#033[00m
Oct  2 08:17:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:22.966 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5989958f-ccbb-4db4-8dcb-18563aa2418e#033[00m
Oct  2 08:17:22 np0005466031 systemd-machined[192227]: New machine qemu-12-instance-0000001f.
Oct  2 08:17:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:22.978 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe5307e-4b88-4443-8619-61c2e527fb7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:22.979 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5989958f-c1 in ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:17:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:22.981 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5989958f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:17:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:22.981 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[26fe2698-d4df-493d-9169-9a0c2c404fdf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:22.981 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3616f9-b1f5-4960-9cce-f309c5bd2d39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:22.992 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[b656e093-72a1-4cd9-802e-f335f0fbd3be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:22 np0005466031 systemd[1]: Started Virtual Machine qemu-12-instance-0000001f.
Oct  2 08:17:23 np0005466031 systemd-udevd[248476]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:17:23 np0005466031 NetworkManager[44907]: <info>  [1759407443.0133] device (tap7539c03e-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:17:23 np0005466031 NetworkManager[44907]: <info>  [1759407443.0141] device (tap7539c03e-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.017 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[44618bad-b9b0-4aaa-9ed8-c0aca4eb0c3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:23Z|00075|binding|INFO|Setting lport 7539c03e-c932-4473-8d75-729cbed6008a ovn-installed in OVS
Oct  2 08:17:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:23Z|00076|binding|INFO|Setting lport 7539c03e-c932-4473-8d75-729cbed6008a up in Southbound
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.047 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[62965d05-a958-4119-9fd4-32f5bf7f1d78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:23 np0005466031 systemd-udevd[248478]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:17:23 np0005466031 NetworkManager[44907]: <info>  [1759407443.0539] manager: (tap5989958f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.054 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[79f6cf96-c92b-4a02-85a2-3e8d04a14c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.081 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[880244a5-6a5f-4b6b-888e-52fc47f1e293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.083 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[2370f701-ab06-4a1f-be2e-a67d33184596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:23.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:23 np0005466031 NetworkManager[44907]: <info>  [1759407443.1031] device (tap5989958f-c0): carrier: link connected
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.109 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0f84c1f4-66c0-41e3-ae6f-701ec488bda1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.128 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6cfddc-2a2c-46b2-8020-375d33b14e9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529869, 'reachable_time': 26648, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248506, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.140 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c77af54f-289e-415b-a58a-acc4feb686fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:d212'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529869, 'tstamp': 529869}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248507, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.157 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[baa9bd79-aba2-40d8-9d51-151dbef70745]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529869, 'reachable_time': 26648, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248508, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.185 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[29a0a591-4083-4b31-9f1f-a8cdc3a29330]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.237 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8329e291-5414-441e-91ea-32adbcf4c55c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.239 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.239 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.239 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5989958f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:23 np0005466031 kernel: tap5989958f-c0: entered promiscuous mode
Oct  2 08:17:23 np0005466031 NetworkManager[44907]: <info>  [1759407443.2418] manager: (tap5989958f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.244 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5989958f-c0, col_values=(('external_ids', {'iface-id': 'c7d8e124-cc34-42e6-82ac-6fdf057166bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:23Z|00077|binding|INFO|Releasing lport c7d8e124-cc34-42e6-82ac-6fdf057166bf from this chassis (sb_readonly=0)
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.263 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.264 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5eb0ff-8e65-4d30-a90f-2b5db250a429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.264 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:23.265 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'env', 'PROCESS_TAG=haproxy-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5989958f-ccbb-4db4-8dcb-18563aa2418e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.423 2 DEBUG nova.compute.manager [req-9953df44-2551-4fac-9c77-563af1bb0f38 req-b6933f2b-dad6-4542-bdcf-1639805a6bec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.424 2 DEBUG oslo_concurrency.lockutils [req-9953df44-2551-4fac-9c77-563af1bb0f38 req-b6933f2b-dad6-4542-bdcf-1639805a6bec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.424 2 DEBUG oslo_concurrency.lockutils [req-9953df44-2551-4fac-9c77-563af1bb0f38 req-b6933f2b-dad6-4542-bdcf-1639805a6bec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.425 2 DEBUG oslo_concurrency.lockutils [req-9953df44-2551-4fac-9c77-563af1bb0f38 req-b6933f2b-dad6-4542-bdcf-1639805a6bec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.425 2 DEBUG nova.compute.manager [req-9953df44-2551-4fac-9c77-563af1bb0f38 req-b6933f2b-dad6-4542-bdcf-1639805a6bec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Processing event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:17:23 np0005466031 podman[248582]: 2025-10-02 12:17:23.598854571 +0000 UTC m=+0.046378572 container create ff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:17:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:23.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:23 np0005466031 systemd[1]: Started libpod-conmon-ff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a.scope.
Oct  2 08:17:23 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:17:23 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad2f406c6b07cc49e8ce248f81ca1e4a16c359b1aae21cf316a9c6c29edf227c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:17:23 np0005466031 podman[248582]: 2025-10-02 12:17:23.575365212 +0000 UTC m=+0.022889183 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:17:23 np0005466031 podman[248582]: 2025-10-02 12:17:23.694436194 +0000 UTC m=+0.141960275 container init ff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:17:23 np0005466031 podman[248582]: 2025-10-02 12:17:23.700536911 +0000 UTC m=+0.148060922 container start ff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:17:23 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[248597]: [NOTICE]   (248601) : New worker (248603) forked
Oct  2 08:17:23 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[248597]: [NOTICE]   (248601) : Loading success.
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.906 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407443.905698, ecee1ec0-1a8d-4d67-b996-205a942120ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.906 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Started (Lifecycle Event)#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.909 2 DEBUG nova.compute.manager [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.913 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.917 2 INFO nova.virt.libvirt.driver [-] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Instance spawned successfully.#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.917 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.938 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.942 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.960 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.960 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.961 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.961 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.962 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:23 np0005466031 nova_compute[235803]: 2025-10-02 12:17:23.962 2 DEBUG nova.virt.libvirt.driver [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:24 np0005466031 nova_compute[235803]: 2025-10-02 12:17:24.057 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:17:24 np0005466031 nova_compute[235803]: 2025-10-02 12:17:24.057 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407443.9087882, ecee1ec0-1a8d-4d67-b996-205a942120ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:24 np0005466031 nova_compute[235803]: 2025-10-02 12:17:24.058 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:17:24 np0005466031 nova_compute[235803]: 2025-10-02 12:17:24.092 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:24 np0005466031 nova_compute[235803]: 2025-10-02 12:17:24.096 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407443.9122305, ecee1ec0-1a8d-4d67-b996-205a942120ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:24 np0005466031 nova_compute[235803]: 2025-10-02 12:17:24.096 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:17:24 np0005466031 nova_compute[235803]: 2025-10-02 12:17:24.101 2 INFO nova.compute.manager [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Took 5.39 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:17:24 np0005466031 nova_compute[235803]: 2025-10-02 12:17:24.101 2 DEBUG nova.compute.manager [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:24 np0005466031 nova_compute[235803]: 2025-10-02 12:17:24.116 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:24 np0005466031 nova_compute[235803]: 2025-10-02 12:17:24.119 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:17:24 np0005466031 nova_compute[235803]: 2025-10-02 12:17:24.199 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:17:24 np0005466031 nova_compute[235803]: 2025-10-02 12:17:24.289 2 INFO nova.compute.manager [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Took 7.90 seconds to build instance.#033[00m
Oct  2 08:17:24 np0005466031 nova_compute[235803]: 2025-10-02 12:17:24.388 2 DEBUG oslo_concurrency.lockutils [None req-40fca58c-e9e8-4d8a-9413-1d95719cf178 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:25.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:25.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:25 np0005466031 nova_compute[235803]: 2025-10-02 12:17:25.645 2 DEBUG nova.compute.manager [req-6762bdb1-1b76-44d3-9273-01035df5a9e2 req-e1a4b3be-c8b1-436b-a878-7642df8a647a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:25 np0005466031 nova_compute[235803]: 2025-10-02 12:17:25.645 2 DEBUG oslo_concurrency.lockutils [req-6762bdb1-1b76-44d3-9273-01035df5a9e2 req-e1a4b3be-c8b1-436b-a878-7642df8a647a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:25 np0005466031 nova_compute[235803]: 2025-10-02 12:17:25.646 2 DEBUG oslo_concurrency.lockutils [req-6762bdb1-1b76-44d3-9273-01035df5a9e2 req-e1a4b3be-c8b1-436b-a878-7642df8a647a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:25 np0005466031 nova_compute[235803]: 2025-10-02 12:17:25.646 2 DEBUG oslo_concurrency.lockutils [req-6762bdb1-1b76-44d3-9273-01035df5a9e2 req-e1a4b3be-c8b1-436b-a878-7642df8a647a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:25 np0005466031 nova_compute[235803]: 2025-10-02 12:17:25.646 2 DEBUG nova.compute.manager [req-6762bdb1-1b76-44d3-9273-01035df5a9e2 req-e1a4b3be-c8b1-436b-a878-7642df8a647a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:25 np0005466031 nova_compute[235803]: 2025-10-02 12:17:25.646 2 WARNING nova.compute.manager [req-6762bdb1-1b76-44d3-9273-01035df5a9e2 req-e1a4b3be-c8b1-436b-a878-7642df8a647a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state None.#033[00m
Oct  2 08:17:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:25.824 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:25.825 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:25.826 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:26 np0005466031 nova_compute[235803]: 2025-10-02 12:17:26.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:27.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:27.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:27 np0005466031 nova_compute[235803]: 2025-10-02 12:17:27.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:28 np0005466031 nova_compute[235803]: 2025-10-02 12:17:28.453 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Check if temp file /var/lib/nova/instances/tmpyzl_yqg1 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Oct  2 08:17:28 np0005466031 nova_compute[235803]: 2025-10-02 12:17:28.454 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpyzl_yqg1',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Oct  2 08:17:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:29.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:29.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:29 np0005466031 podman[248665]: 2025-10-02 12:17:29.663797574 +0000 UTC m=+0.085168393 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:17:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:31.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:31 np0005466031 nova_compute[235803]: 2025-10-02 12:17:31.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:17:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:31.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:17:31 np0005466031 nova_compute[235803]: 2025-10-02 12:17:31.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:17:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1769959602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:17:32 np0005466031 nova_compute[235803]: 2025-10-02 12:17:32.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:32 np0005466031 nova_compute[235803]: 2025-10-02 12:17:32.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:33.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:33.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:33 np0005466031 nova_compute[235803]: 2025-10-02 12:17:33.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:33 np0005466031 nova_compute[235803]: 2025-10-02 12:17:33.939 2 DEBUG nova.compute.manager [req-94de7272-49a9-4166-8f77-0f4b8fd80966 req-214739bd-edc4-4ec6-a935-40703b9545bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:33 np0005466031 nova_compute[235803]: 2025-10-02 12:17:33.940 2 DEBUG oslo_concurrency.lockutils [req-94de7272-49a9-4166-8f77-0f4b8fd80966 req-214739bd-edc4-4ec6-a935-40703b9545bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:33 np0005466031 nova_compute[235803]: 2025-10-02 12:17:33.940 2 DEBUG oslo_concurrency.lockutils [req-94de7272-49a9-4166-8f77-0f4b8fd80966 req-214739bd-edc4-4ec6-a935-40703b9545bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:33 np0005466031 nova_compute[235803]: 2025-10-02 12:17:33.941 2 DEBUG oslo_concurrency.lockutils [req-94de7272-49a9-4166-8f77-0f4b8fd80966 req-214739bd-edc4-4ec6-a935-40703b9545bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:33 np0005466031 nova_compute[235803]: 2025-10-02 12:17:33.941 2 DEBUG nova.compute.manager [req-94de7272-49a9-4166-8f77-0f4b8fd80966 req-214739bd-edc4-4ec6-a935-40703b9545bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:33 np0005466031 nova_compute[235803]: 2025-10-02 12:17:33.942 2 DEBUG nova.compute.manager [req-94de7272-49a9-4166-8f77-0f4b8fd80966 req-214739bd-edc4-4ec6-a935-40703b9545bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.052479) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454052585, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2499, "num_deletes": 265, "total_data_size": 5696652, "memory_usage": 5789952, "flush_reason": "Manual Compaction"}
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454070208, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3692782, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25493, "largest_seqno": 27987, "table_properties": {"data_size": 3682594, "index_size": 6426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 21956, "raw_average_key_size": 20, "raw_value_size": 3661730, "raw_average_value_size": 3438, "num_data_blocks": 280, "num_entries": 1065, "num_filter_entries": 1065, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407277, "oldest_key_time": 1759407277, "file_creation_time": 1759407454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 17765 microseconds, and 7624 cpu microseconds.
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.070254) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3692782 bytes OK
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.070278) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.071955) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.071972) EVENT_LOG_v1 {"time_micros": 1759407454071966, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.071993) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 5685374, prev total WAL file size 5685374, number of live WAL files 2.
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.073237) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3606KB)], [51(8842KB)]
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454073324, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 12747429, "oldest_snapshot_seqno": -1}
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5399 keys, 12630675 bytes, temperature: kUnknown
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454176608, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 12630675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12589843, "index_size": 26258, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 135673, "raw_average_key_size": 25, "raw_value_size": 12487903, "raw_average_value_size": 2313, "num_data_blocks": 1084, "num_entries": 5399, "num_filter_entries": 5399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759407454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.176999) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 12630675 bytes
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.178713) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.3 rd, 122.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.6 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(6.9) write-amplify(3.4) OK, records in: 5944, records dropped: 545 output_compression: NoCompression
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.178747) EVENT_LOG_v1 {"time_micros": 1759407454178731, "job": 30, "event": "compaction_finished", "compaction_time_micros": 103391, "compaction_time_cpu_micros": 49482, "output_level": 6, "num_output_files": 1, "total_output_size": 12630675, "num_input_records": 5944, "num_output_records": 5399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454180056, "job": 30, "event": "table_file_deletion", "file_number": 53}
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454183136, "job": 30, "event": "table_file_deletion", "file_number": 51}
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.073107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.183268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.183276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.183279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.183282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:34.183285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.708 2 INFO nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Took 5.05 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.#033[00m
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.709 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.729 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpyzl_yqg1',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(c51f2e80-99c7-4848-8e64-214b4d6d2c8b),old_vol_attachment_ids={20a19061-0239-43b4-b9d7-980e7acde072='6a6d4750-7860-4bbf-bba3-50eb20d823fd'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.733 2 DEBUG nova.objects.instance [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lazy-loading 'migration_context' on Instance uuid ecee1ec0-1a8d-4d67-b996-205a942120ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.734 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.736 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.737 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.777 2 DEBUG nova.virt.libvirt.migration [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Find same serial number: pos=1, serial=20a19061-0239-43b4-b9d7-980e7acde072 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.778 2 DEBUG nova.virt.libvirt.vif [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-343208228',display_name='tempest-LiveMigrationTest-server-343208228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-343208228',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-urrasvl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_use
r_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:24Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.779 2 DEBUG nova.network.os_vif_util [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converting VIF {"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.780 2 DEBUG nova.network.os_vif_util [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.780 2 DEBUG nova.virt.libvirt.migration [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating guest XML with vif config: <interface type="ethernet">
Oct  2 08:17:34 np0005466031 nova_compute[235803]:  <mac address="fa:16:3e:0e:5e:ba"/>
Oct  2 08:17:34 np0005466031 nova_compute[235803]:  <model type="virtio"/>
Oct  2 08:17:34 np0005466031 nova_compute[235803]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:17:34 np0005466031 nova_compute[235803]:  <mtu size="1442"/>
Oct  2 08:17:34 np0005466031 nova_compute[235803]:  <target dev="tap7539c03e-c9"/>
Oct  2 08:17:34 np0005466031 nova_compute[235803]: </interface>
Oct  2 08:17:34 np0005466031 nova_compute[235803]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Oct  2 08:17:34 np0005466031 nova_compute[235803]: 2025-10-02 12:17:34.781 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Oct  2 08:17:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:35.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:35 np0005466031 nova_compute[235803]: 2025-10-02 12:17:35.240 2 DEBUG nova.virt.libvirt.migration [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:17:35 np0005466031 nova_compute[235803]: 2025-10-02 12:17:35.240 2 INFO nova.virt.libvirt.migration [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Oct  2 08:17:35 np0005466031 nova_compute[235803]: 2025-10-02 12:17:35.447 2 INFO nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Oct  2 08:17:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:35.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:35 np0005466031 podman[248691]: 2025-10-02 12:17:35.62964152 +0000 UTC m=+0.060584272 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 08:17:35 np0005466031 nova_compute[235803]: 2025-10-02 12:17:35.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:35 np0005466031 nova_compute[235803]: 2025-10-02 12:17:35.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:17:35 np0005466031 nova_compute[235803]: 2025-10-02 12:17:35.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:17:35 np0005466031 nova_compute[235803]: 2025-10-02 12:17:35.656 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:35 np0005466031 nova_compute[235803]: 2025-10-02 12:17:35.656 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:35 np0005466031 nova_compute[235803]: 2025-10-02 12:17:35.656 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:17:35 np0005466031 nova_compute[235803]: 2025-10-02 12:17:35.656 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ecee1ec0-1a8d-4d67-b996-205a942120ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:35 np0005466031 nova_compute[235803]: 2025-10-02 12:17:35.949 2 DEBUG nova.virt.libvirt.migration [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:17:35 np0005466031 nova_compute[235803]: 2025-10-02 12:17:35.950 2 DEBUG nova.virt.libvirt.migration [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.043 2 DEBUG nova.compute.manager [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.043 2 DEBUG oslo_concurrency.lockutils [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.043 2 DEBUG oslo_concurrency.lockutils [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.044 2 DEBUG oslo_concurrency.lockutils [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.044 2 DEBUG nova.compute.manager [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.044 2 WARNING nova.compute.manager [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.044 2 DEBUG nova.compute.manager [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-changed-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.045 2 DEBUG nova.compute.manager [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Refreshing instance network info cache due to event network-changed-7539c03e-c932-4473-8d75-729cbed6008a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.045 2 DEBUG oslo_concurrency.lockutils [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.453 2 DEBUG nova.virt.libvirt.migration [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.455 2 DEBUG nova.virt.libvirt.migration [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.701 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407456.7015297, ecee1ec0-1a8d-4d67-b996-205a942120ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.702 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.733 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.737 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.771 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Oct  2 08:17:36 np0005466031 kernel: tap7539c03e-c9 (unregistering): left promiscuous mode
Oct  2 08:17:36 np0005466031 NetworkManager[44907]: <info>  [1759407456.9444] device (tap7539c03e-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:17:36 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:36Z|00078|binding|INFO|Releasing lport 7539c03e-c932-4473-8d75-729cbed6008a from this chassis (sb_readonly=0)
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:36 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:36Z|00079|binding|INFO|Setting lport 7539c03e-c932-4473-8d75-729cbed6008a down in Southbound
Oct  2 08:17:36 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:36Z|00080|binding|INFO|Removing iface tap7539c03e-c9 ovn-installed in OVS
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:36.973 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:5e:ba 10.100.0.9'], port_security=['fa:16:3e:0e:5e:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'db222192-8da1-4f7c-972d-dc680c3e6630'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ecee1ec0-1a8d-4d67-b996-205a942120ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '8', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7539c03e-c932-4473-8d75-729cbed6008a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:17:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:36.976 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7539c03e-c932-4473-8d75-729cbed6008a in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e unbound from our chassis#033[00m
Oct  2 08:17:36 np0005466031 nova_compute[235803]: 2025-10-02 12:17:36.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:36.981 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5989958f-ccbb-4db4-8dcb-18563aa2418e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:17:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:36.984 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8a124965-0f7c-4325-9f5f-b1282176a16e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:36.986 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e namespace which is not needed anymore#033[00m
Oct  2 08:17:37 np0005466031 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct  2 08:17:37 np0005466031 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001f.scope: Consumed 11.918s CPU time.
Oct  2 08:17:37 np0005466031 systemd-machined[192227]: Machine qemu-12-instance-0000001f terminated.
Oct  2 08:17:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:37.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:37 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[248597]: [NOTICE]   (248601) : haproxy version is 2.8.14-c23fe91
Oct  2 08:17:37 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[248597]: [NOTICE]   (248601) : path to executable is /usr/sbin/haproxy
Oct  2 08:17:37 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[248597]: [WARNING]  (248601) : Exiting Master process...
Oct  2 08:17:37 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[248597]: [ALERT]    (248601) : Current worker (248603) exited with code 143 (Terminated)
Oct  2 08:17:37 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[248597]: [WARNING]  (248601) : All workers exited. Exiting... (0)
Oct  2 08:17:37 np0005466031 systemd[1]: libpod-ff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a.scope: Deactivated successfully.
Oct  2 08:17:37 np0005466031 podman[248743]: 2025-10-02 12:17:37.12674431 +0000 UTC m=+0.049901674 container died ff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.128 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating instance_info_cache with network_info: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.154 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.155 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.155 2 DEBUG oslo_concurrency.lockutils [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.156 2 DEBUG nova.network.neutron [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Refreshing network info cache for port 7539c03e-c932-4473-8d75-729cbed6008a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.158 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:37 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a-userdata-shm.mount: Deactivated successfully.
Oct  2 08:17:37 np0005466031 systemd[1]: var-lib-containers-storage-overlay-ad2f406c6b07cc49e8ce248f81ca1e4a16c359b1aae21cf316a9c6c29edf227c-merged.mount: Deactivated successfully.
Oct  2 08:17:37 np0005466031 podman[248743]: 2025-10-02 12:17:37.176167089 +0000 UTC m=+0.099324473 container cleanup ff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:17:37 np0005466031 systemd[1]: libpod-conmon-ff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a.scope: Deactivated successfully.
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.197 2 DEBUG nova.compute.manager [req-e1130ceb-d962-402a-a93a-763109174cc7 req-a4078b29-8846-49e2-96cf-aa8ff60da3f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.198 2 DEBUG oslo_concurrency.lockutils [req-e1130ceb-d962-402a-a93a-763109174cc7 req-a4078b29-8846-49e2-96cf-aa8ff60da3f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.199 2 DEBUG oslo_concurrency.lockutils [req-e1130ceb-d962-402a-a93a-763109174cc7 req-a4078b29-8846-49e2-96cf-aa8ff60da3f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.199 2 DEBUG oslo_concurrency.lockutils [req-e1130ceb-d962-402a-a93a-763109174cc7 req-a4078b29-8846-49e2-96cf-aa8ff60da3f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.199 2 DEBUG nova.compute.manager [req-e1130ceb-d962-402a-a93a-763109174cc7 req-a4078b29-8846-49e2-96cf-aa8ff60da3f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.200 2 DEBUG nova.compute.manager [req-e1130ceb-d962-402a-a93a-763109174cc7 req-a4078b29-8846-49e2-96cf-aa8ff60da3f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:17:37 np0005466031 virtqemud[235323]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volume-20a19061-0239-43b4-b9d7-980e7acde072: No such file or directory
Oct  2 08:17:37 np0005466031 virtqemud[235323]: Unable to get XATTR trusted.libvirt.security.ref_dac on volume-20a19061-0239-43b4-b9d7-980e7acde072: No such file or directory
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.212 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.213 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.213 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.213 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.213 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:37 np0005466031 virtqemud[235323]: Cannot recv data: Input/output error
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.236 2 DEBUG nova.virt.libvirt.guest [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.237 2 INFO nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration operation has completed#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.238 2 INFO nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] _post_live_migration() is started..#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.239 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.240 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.240 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Oct  2 08:17:37 np0005466031 podman[248774]: 2025-10-02 12:17:37.248232212 +0000 UTC m=+0.051074448 container remove ff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:17:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:37.255 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[999459ab-7cfd-4ce5-8f60-ec9ce033906e]: (4, ('Thu Oct  2 12:17:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e (ff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a)\nff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a\nThu Oct  2 12:17:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e (ff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a)\nff1063b157e985a19db14ccb90d1fb47ed989469359979f6e8694e2ee360c33a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:37.257 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d5e259-7466-45f1-9a52-d294f6711e13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:37.259 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:37 np0005466031 kernel: tap5989958f-c0: left promiscuous mode
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:37.281 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0938a517-3751-477c-8d00-fdf304a5b4bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:37.322 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b6762796-be70-4b98-9772-aee6dc68e394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:37.324 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea47bcf-4703-4878-8063-991f9df1b949]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:37.343 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f29e4601-ae5e-4a78-b6b4-7daa2f774cde]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529862, 'reachable_time': 18381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248804, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:37.347 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:17:37 np0005466031 systemd[1]: run-netns-ovnmeta\x2d5989958f\x2dccbb\x2d4db4\x2d8dcb\x2d18563aa2418e.mount: Deactivated successfully.
Oct  2 08:17:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:37.347 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ef091a-43c6-45b7-823b-6f5c4124893b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:37.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:37 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3581511965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.658 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.829 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.830 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4831MB free_disk=20.893749237060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.830 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.830 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.928 2 INFO nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating resource usage from migration c51f2e80-99c7-4848-8e64-214b4d6d2c8b#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.954 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Migration c51f2e80-99c7-4848-8e64-214b4d6d2c8b is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.955 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:17:37 np0005466031 nova_compute[235803]: 2025-10-02 12:17:37.955 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.017 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.344605) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458344667, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 300, "num_deletes": 251, "total_data_size": 125012, "memory_usage": 131176, "flush_reason": "Manual Compaction"}
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458348262, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 81910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27992, "largest_seqno": 28287, "table_properties": {"data_size": 79983, "index_size": 155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5074, "raw_average_key_size": 18, "raw_value_size": 76145, "raw_average_value_size": 276, "num_data_blocks": 7, "num_entries": 275, "num_filter_entries": 275, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407454, "oldest_key_time": 1759407454, "file_creation_time": 1759407458, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 3733 microseconds, and 1495 cpu microseconds.
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.348334) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 81910 bytes OK
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.348362) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.350010) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.350046) EVENT_LOG_v1 {"time_micros": 1759407458350033, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.350078) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 122817, prev total WAL file size 122817, number of live WAL files 2.
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.350675) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(79KB)], [54(12MB)]
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458350731, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 12712585, "oldest_snapshot_seqno": -1}
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 5164 keys, 10806109 bytes, temperature: kUnknown
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458413675, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 10806109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10768419, "index_size": 23692, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 131493, "raw_average_key_size": 25, "raw_value_size": 10672116, "raw_average_value_size": 2066, "num_data_blocks": 968, "num_entries": 5164, "num_filter_entries": 5164, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759407458, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.414021) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10806109 bytes
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.415215) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.5 rd, 171.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.0 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(287.1) write-amplify(131.9) OK, records in: 5674, records dropped: 510 output_compression: NoCompression
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.415235) EVENT_LOG_v1 {"time_micros": 1759407458415225, "job": 32, "event": "compaction_finished", "compaction_time_micros": 63087, "compaction_time_cpu_micros": 22542, "output_level": 6, "num_output_files": 1, "total_output_size": 10806109, "num_input_records": 5674, "num_output_records": 5164, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458415393, "job": 32, "event": "table_file_deletion", "file_number": 56}
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458417326, "job": 32, "event": "table_file_deletion", "file_number": 54}
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.350537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.417383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.417390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.417391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.417393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:17:38.417395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:38 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1347014307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.450 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.464 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.488 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.507 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.508 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.774 2 DEBUG nova.network.neutron [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updated VIF entry in instance network info cache for port 7539c03e-c932-4473-8d75-729cbed6008a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.775 2 DEBUG nova.network.neutron [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating instance_info_cache with network_info: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.807 2 DEBUG oslo_concurrency.lockutils [req-e1c18435-62e6-4e50-a3f2-72bce7ca8f24 req-e63729d5-3cd2-4e11-bae3-86ec6c7795ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.937 2 DEBUG nova.network.neutron [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Activated binding for port 7539c03e-c932-4473-8d75-729cbed6008a and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.937 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.938 2 DEBUG nova.virt.libvirt.vif [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-343208228',display_name='tempest-LiveMigrationTest-server-343208228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-343208228',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-urrasvl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_use
r_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:27Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.938 2 DEBUG nova.network.os_vif_util [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converting VIF {"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.939 2 DEBUG nova.network.os_vif_util [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.939 2 DEBUG os_vif [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.941 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7539c03e-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.949 2 INFO os_vif [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9')#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.949 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.949 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.949 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.950 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.950 2 INFO nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Deleting instance files /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae_del#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.950 2 INFO nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Deletion of /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae_del complete#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.985 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.986 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:38 np0005466031 nova_compute[235803]: 2025-10-02 12:17:38.986 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:17:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:39.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.437 2 DEBUG nova.compute.manager [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.437 2 DEBUG oslo_concurrency.lockutils [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.438 2 DEBUG oslo_concurrency.lockutils [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.438 2 DEBUG oslo_concurrency.lockutils [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.438 2 DEBUG nova.compute.manager [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.438 2 WARNING nova.compute.manager [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.439 2 DEBUG nova.compute.manager [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.439 2 DEBUG oslo_concurrency.lockutils [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.439 2 DEBUG oslo_concurrency.lockutils [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.439 2 DEBUG oslo_concurrency.lockutils [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.440 2 DEBUG nova.compute.manager [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.440 2 WARNING nova.compute.manager [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.440 2 DEBUG nova.compute.manager [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.440 2 DEBUG oslo_concurrency.lockutils [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.441 2 DEBUG oslo_concurrency.lockutils [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.441 2 DEBUG oslo_concurrency.lockutils [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.441 2 DEBUG nova.compute.manager [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.441 2 DEBUG nova.compute.manager [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.442 2 DEBUG nova.compute.manager [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.442 2 DEBUG oslo_concurrency.lockutils [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.443 2 DEBUG oslo_concurrency.lockutils [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.443 2 DEBUG oslo_concurrency.lockutils [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.444 2 DEBUG nova.compute.manager [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.444 2 WARNING nova.compute.manager [req-5c9301b2-b9cf-476c-99cd-b325ad9ba3e1 req-c63fadb4-895f-4b88-b7c4-ba35790ee2cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:17:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:39.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:39 np0005466031 nova_compute[235803]: 2025-10-02 12:17:39.662 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:41.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:41.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:41 np0005466031 nova_compute[235803]: 2025-10-02 12:17:41.631 2 DEBUG nova.compute.manager [req-590c0807-5938-4d85-b137-5e5edd1e8cfe req-1233771b-a788-429a-9e0a-9a9c57cf689e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:41 np0005466031 nova_compute[235803]: 2025-10-02 12:17:41.632 2 DEBUG oslo_concurrency.lockutils [req-590c0807-5938-4d85-b137-5e5edd1e8cfe req-1233771b-a788-429a-9e0a-9a9c57cf689e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:41 np0005466031 nova_compute[235803]: 2025-10-02 12:17:41.632 2 DEBUG oslo_concurrency.lockutils [req-590c0807-5938-4d85-b137-5e5edd1e8cfe req-1233771b-a788-429a-9e0a-9a9c57cf689e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:41 np0005466031 nova_compute[235803]: 2025-10-02 12:17:41.633 2 DEBUG oslo_concurrency.lockutils [req-590c0807-5938-4d85-b137-5e5edd1e8cfe req-1233771b-a788-429a-9e0a-9a9c57cf689e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:41 np0005466031 nova_compute[235803]: 2025-10-02 12:17:41.633 2 DEBUG nova.compute.manager [req-590c0807-5938-4d85-b137-5e5edd1e8cfe req-1233771b-a788-429a-9e0a-9a9c57cf689e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:41 np0005466031 nova_compute[235803]: 2025-10-02 12:17:41.633 2 WARNING nova.compute.manager [req-590c0807-5938-4d85-b137-5e5edd1e8cfe req-1233771b-a788-429a-9e0a-9a9c57cf689e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:17:42 np0005466031 nova_compute[235803]: 2025-10-02 12:17:42.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:43.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:17:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:17:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:43.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:43 np0005466031 nova_compute[235803]: 2025-10-02 12:17:43.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:43 np0005466031 nova_compute[235803]: 2025-10-02 12:17:43.955 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:43 np0005466031 nova_compute[235803]: 2025-10-02 12:17:43.955 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:43 np0005466031 nova_compute[235803]: 2025-10-02 12:17:43.955 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.002 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.003 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.003 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.004 2 DEBUG nova.compute.resource_tracker [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.004 2 DEBUG oslo_concurrency.processutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:44 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2688491066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.476 2 DEBUG oslo_concurrency.processutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.660 2 WARNING nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.661 2 DEBUG nova.compute.resource_tracker [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4774MB free_disk=20.921714782714844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.662 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.662 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.701 2 DEBUG nova.compute.resource_tracker [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Migration for instance ecee1ec0-1a8d-4d67-b996-205a942120ae refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.723 2 DEBUG nova.compute.resource_tracker [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.766 2 DEBUG nova.compute.resource_tracker [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Migration c51f2e80-99c7-4848-8e64-214b4d6d2c8b is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.766 2 DEBUG nova.compute.resource_tracker [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.767 2 DEBUG nova.compute.resource_tracker [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:17:44 np0005466031 podman[249267]: 2025-10-02 12:17:44.792691443 +0000 UTC m=+0.045590227 container create f33be1cd493a9bdae079360554a16808fe1f09ae7d40bb90167136da0369cce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gates, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 08:17:44 np0005466031 systemd[1]: Started libpod-conmon-f33be1cd493a9bdae079360554a16808fe1f09ae7d40bb90167136da0369cce1.scope.
Oct  2 08:17:44 np0005466031 nova_compute[235803]: 2025-10-02 12:17:44.847 2 DEBUG oslo_concurrency.processutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:44 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:17:44 np0005466031 podman[249267]: 2025-10-02 12:17:44.773945023 +0000 UTC m=+0.026843817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 08:17:44 np0005466031 podman[249267]: 2025-10-02 12:17:44.88112007 +0000 UTC m=+0.134018894 container init f33be1cd493a9bdae079360554a16808fe1f09ae7d40bb90167136da0369cce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:17:44 np0005466031 podman[249267]: 2025-10-02 12:17:44.888250966 +0000 UTC m=+0.141149760 container start f33be1cd493a9bdae079360554a16808fe1f09ae7d40bb90167136da0369cce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gates, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 08:17:44 np0005466031 podman[249267]: 2025-10-02 12:17:44.891404247 +0000 UTC m=+0.144303071 container attach f33be1cd493a9bdae079360554a16808fe1f09ae7d40bb90167136da0369cce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 08:17:44 np0005466031 nifty_gates[249284]: 167 167
Oct  2 08:17:44 np0005466031 systemd[1]: libpod-f33be1cd493a9bdae079360554a16808fe1f09ae7d40bb90167136da0369cce1.scope: Deactivated successfully.
Oct  2 08:17:44 np0005466031 podman[249267]: 2025-10-02 12:17:44.898832162 +0000 UTC m=+0.151730936 container died f33be1cd493a9bdae079360554a16808fe1f09ae7d40bb90167136da0369cce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 08:17:44 np0005466031 systemd[1]: var-lib-containers-storage-overlay-204eaa8553f4e6747c6c084a27d7a42654573f029165fcdc69c23c69a725248b-merged.mount: Deactivated successfully.
Oct  2 08:17:44 np0005466031 podman[249267]: 2025-10-02 12:17:44.937581362 +0000 UTC m=+0.190480126 container remove f33be1cd493a9bdae079360554a16808fe1f09ae7d40bb90167136da0369cce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gates, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 08:17:44 np0005466031 systemd[1]: libpod-conmon-f33be1cd493a9bdae079360554a16808fe1f09ae7d40bb90167136da0369cce1.scope: Deactivated successfully.
Oct  2 08:17:45 np0005466031 podman[249327]: 2025-10-02 12:17:45.091368378 +0000 UTC m=+0.044460726 container create 563c02600dfda3550481b4f06530a8ca11c730a3a7c9590ab01d99be1983bae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jones, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 08:17:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:45.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:45 np0005466031 systemd[1]: Started libpod-conmon-563c02600dfda3550481b4f06530a8ca11c730a3a7c9590ab01d99be1983bae7.scope.
Oct  2 08:17:45 np0005466031 podman[249327]: 2025-10-02 12:17:45.073471701 +0000 UTC m=+0.026564059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 08:17:45 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:17:45 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40c523c4f11e559574ad7b0e6081efcf4af69575a857ea1639b90b936a4c73b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 08:17:45 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40c523c4f11e559574ad7b0e6081efcf4af69575a857ea1639b90b936a4c73b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 08:17:45 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40c523c4f11e559574ad7b0e6081efcf4af69575a857ea1639b90b936a4c73b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 08:17:45 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40c523c4f11e559574ad7b0e6081efcf4af69575a857ea1639b90b936a4c73b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 08:17:45 np0005466031 podman[249327]: 2025-10-02 12:17:45.193372387 +0000 UTC m=+0.146464755 container init 563c02600dfda3550481b4f06530a8ca11c730a3a7c9590ab01d99be1983bae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jones, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 08:17:45 np0005466031 podman[249327]: 2025-10-02 12:17:45.202895852 +0000 UTC m=+0.155988200 container start 563c02600dfda3550481b4f06530a8ca11c730a3a7c9590ab01d99be1983bae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 08:17:45 np0005466031 podman[249327]: 2025-10-02 12:17:45.205880709 +0000 UTC m=+0.158973087 container attach 563c02600dfda3550481b4f06530a8ca11c730a3a7c9590ab01d99be1983bae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jones, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 08:17:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:45 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1617248047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:45 np0005466031 nova_compute[235803]: 2025-10-02 12:17:45.306 2 DEBUG oslo_concurrency.processutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:45 np0005466031 nova_compute[235803]: 2025-10-02 12:17:45.314 2 DEBUG nova.compute.provider_tree [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:17:45 np0005466031 nova_compute[235803]: 2025-10-02 12:17:45.332 2 DEBUG nova.scheduler.client.report [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:17:45 np0005466031 nova_compute[235803]: 2025-10-02 12:17:45.356 2 DEBUG nova.compute.resource_tracker [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:17:45 np0005466031 nova_compute[235803]: 2025-10-02 12:17:45.356 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:45 np0005466031 nova_compute[235803]: 2025-10-02 12:17:45.362 2 INFO nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migrating instance to compute-1.ctlplane.example.com finished successfully.#033[00m
Oct  2 08:17:45 np0005466031 nova_compute[235803]: 2025-10-02 12:17:45.449 2 INFO nova.scheduler.client.report [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Deleted allocation for migration c51f2e80-99c7-4848-8e64-214b4d6d2c8b#033[00m
Oct  2 08:17:45 np0005466031 nova_compute[235803]: 2025-10-02 12:17:45.450 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Oct  2 08:17:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:45.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:46 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct  2 08:17:46 np0005466031 adoring_jones[249343]: [
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:    {
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:        "available": false,
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:        "ceph_device": false,
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:        "lsm_data": {},
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:        "lvs": [],
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:        "path": "/dev/sr0",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:        "rejected_reasons": [
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "Has a FileSystem",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "Insufficient space (<5GB)"
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:        ],
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:        "sys_api": {
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "actuators": null,
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "device_nodes": "sr0",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "devname": "sr0",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "human_readable_size": "482.00 KB",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "id_bus": "ata",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "model": "QEMU DVD-ROM",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "nr_requests": "2",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "parent": "/dev/sr0",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "partitions": {},
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "path": "/dev/sr0",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "removable": "1",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "rev": "2.5+",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "ro": "0",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "rotational": "0",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "sas_address": "",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "sas_device_handle": "",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "scheduler_mode": "mq-deadline",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "sectors": 0,
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "sectorsize": "2048",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "size": 493568.0,
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "support_discard": "2048",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "type": "disk",
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:            "vendor": "QEMU"
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:        }
Oct  2 08:17:46 np0005466031 adoring_jones[249343]:    }
Oct  2 08:17:46 np0005466031 adoring_jones[249343]: ]
Oct  2 08:17:46 np0005466031 systemd[1]: libpod-563c02600dfda3550481b4f06530a8ca11c730a3a7c9590ab01d99be1983bae7.scope: Deactivated successfully.
Oct  2 08:17:46 np0005466031 systemd[1]: libpod-563c02600dfda3550481b4f06530a8ca11c730a3a7c9590ab01d99be1983bae7.scope: Consumed 1.293s CPU time.
Oct  2 08:17:46 np0005466031 podman[249327]: 2025-10-02 12:17:46.494730468 +0000 UTC m=+1.447822836 container died 563c02600dfda3550481b4f06530a8ca11c730a3a7c9590ab01d99be1983bae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jones, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 08:17:46 np0005466031 systemd[1]: var-lib-containers-storage-overlay-b40c523c4f11e559574ad7b0e6081efcf4af69575a857ea1639b90b936a4c73b-merged.mount: Deactivated successfully.
Oct  2 08:17:46 np0005466031 podman[249327]: 2025-10-02 12:17:46.562815686 +0000 UTC m=+1.515908074 container remove 563c02600dfda3550481b4f06530a8ca11c730a3a7c9590ab01d99be1983bae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jones, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 08:17:46 np0005466031 systemd[1]: libpod-conmon-563c02600dfda3550481b4f06530a8ca11c730a3a7c9590ab01d99be1983bae7.scope: Deactivated successfully.
Oct  2 08:17:46 np0005466031 nova_compute[235803]: 2025-10-02 12:17:46.574 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Creating tmpfile /var/lib/nova/instances/tmp10f21u5k to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct  2 08:17:46 np0005466031 nova_compute[235803]: 2025-10-02 12:17:46.576 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp10f21u5k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct  2 08:17:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:46 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:47.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:47 np0005466031 nova_compute[235803]: 2025-10-02 12:17:47.624 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp10f21u5k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct  2 08:17:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:47.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:47 np0005466031 nova_compute[235803]: 2025-10-02 12:17:47.661 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:47 np0005466031 nova_compute[235803]: 2025-10-02 12:17:47.661 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquired lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:47 np0005466031 nova_compute[235803]: 2025-10-02 12:17:47.662 2 DEBUG nova.network.neutron [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:17:47 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:17:47 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:17:47 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:47 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:17:47 np0005466031 nova_compute[235803]: 2025-10-02 12:17:47.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:48 np0005466031 nova_compute[235803]: 2025-10-02 12:17:48.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:49.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:49.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.745 2 DEBUG nova.network.neutron [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating instance_info_cache with network_info: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.770 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Releasing lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.772 2 DEBUG os_brick.utils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.774 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.787 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.787 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[7d85ede8-d114-44fd-9181-07a624ff9aa8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.789 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.798 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.799 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[88b3ca30-5183-4033-bb4d-4a4b12e61f28]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.801 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.811 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.812 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[f982c5e1-feb6-4d86-85d8-9acc52ac6f10]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.813 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[cf28d908-25b9-4afb-a0e6-713faddc18c2]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.814 2 DEBUG oslo_concurrency.processutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.841 2 DEBUG oslo_concurrency.processutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.844 2 DEBUG os_brick.initiator.connectors.lightos [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.844 2 DEBUG os_brick.initiator.connectors.lightos [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.844 2 DEBUG os_brick.initiator.connectors.lightos [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:17:49 np0005466031 nova_compute[235803]: 2025-10-02 12:17:49.845 2 DEBUG os_brick.utils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.001 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.002 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.021 2 DEBUG nova.compute.manager [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.086 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.087 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.096 2 DEBUG nova.virt.hardware [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.096 2 INFO nova.compute.claims [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.228 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3030137041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.735 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.746 2 DEBUG nova.compute.provider_tree [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.775 2 DEBUG nova.scheduler.client.report [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.816 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.818 2 DEBUG nova.compute.manager [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.868 2 DEBUG nova.compute.manager [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.869 2 DEBUG nova.network.neutron [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.901 2 INFO nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:17:50 np0005466031 nova_compute[235803]: 2025-10-02 12:17:50.926 2 DEBUG nova.compute.manager [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.041 2 DEBUG nova.compute.manager [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.044 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.044 2 INFO nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Creating image(s)#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.086 2 DEBUG nova.storage.rbd_utils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.120 2 DEBUG nova.storage.rbd_utils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:17:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:51.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.151 2 DEBUG nova.storage.rbd_utils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.156 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.177 2 DEBUG nova.policy [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ed8b6a2129742dfb3b8a0d9f044ac24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.218 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.218 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.219 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.219 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.246 2 DEBUG nova.storage.rbd_utils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.250 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.379 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp10f21u5k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={20a19061-0239-43b4-b9d7-980e7acde072='ff6e5128-91a9-4273-b9c7-d3a4c775f1fa'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.380 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Creating instance directory: /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.381 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Ensure instance console log exists: /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.381 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.385 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.386 2 DEBUG nova.virt.libvirt.vif [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-343208228',display_name='tempest-LiveMigrationTest-server-343208228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-343208228',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-urrasvl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:43Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.386 2 DEBUG nova.network.os_vif_util [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converting VIF {"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.387 2 DEBUG nova.network.os_vif_util [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.387 2 DEBUG os_vif [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.389 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.389 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7539c03e-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7539c03e-c9, col_values=(('external_ids', {'iface-id': '7539c03e-c932-4473-8d75-729cbed6008a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:5e:ba', 'vm-uuid': 'ecee1ec0-1a8d-4d67-b996-205a942120ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:51 np0005466031 NetworkManager[44907]: <info>  [1759407471.3973] manager: (tap7539c03e-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.405 2 INFO os_vif [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9')#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.411 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.412 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp10f21u5k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={20a19061-0239-43b4-b9d7-980e7acde072='ff6e5128-91a9-4273-b9c7-d3a4c775f1fa'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.559 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.625 2 DEBUG nova.storage.rbd_utils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] resizing rbd image 93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:17:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:51.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.745 2 DEBUG nova.objects.instance [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'migration_context' on Instance uuid 93e1fa10-4ba0-4715-be09-0d7dae7a5484 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.761 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.762 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Ensure instance console log exists: /var/lib/nova/instances/93e1fa10-4ba0-4715-be09-0d7dae7a5484/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.762 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.762 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.763 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:51 np0005466031 nova_compute[235803]: 2025-10-02 12:17:51.833 2 DEBUG nova.network.neutron [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Successfully created port: f73e5355-2f7c-48f8-bc9f-fd14478616c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:17:52 np0005466031 nova_compute[235803]: 2025-10-02 12:17:52.236 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407457.23104, ecee1ec0-1a8d-4d67-b996-205a942120ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:52 np0005466031 nova_compute[235803]: 2025-10-02 12:17:52.236 2 INFO nova.compute.manager [-] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:17:52 np0005466031 nova_compute[235803]: 2025-10-02 12:17:52.270 2 DEBUG nova.compute.manager [None req-b16142c1-5c31-48e4-a386-fa6305933457 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:52 np0005466031 nova_compute[235803]: 2025-10-02 12:17:52.735 2 DEBUG nova.network.neutron [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Successfully updated port: f73e5355-2f7c-48f8-bc9f-fd14478616c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:17:52 np0005466031 nova_compute[235803]: 2025-10-02 12:17:52.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:52 np0005466031 nova_compute[235803]: 2025-10-02 12:17:52.758 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:52 np0005466031 nova_compute[235803]: 2025-10-02 12:17:52.759 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquired lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:52 np0005466031 nova_compute[235803]: 2025-10-02 12:17:52.759 2 DEBUG nova.network.neutron [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:17:52 np0005466031 nova_compute[235803]: 2025-10-02 12:17:52.770 2 DEBUG nova.network.neutron [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Port 7539c03e-c932-4473-8d75-729cbed6008a updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct  2 08:17:52 np0005466031 nova_compute[235803]: 2025-10-02 12:17:52.921 2 DEBUG nova.compute.manager [req-66871924-8106-4fb1-80da-3be1f37b9bb8 req-040b627d-addc-4c9f-89ed-5748b27b7329 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Received event network-changed-f73e5355-2f7c-48f8-bc9f-fd14478616c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:52 np0005466031 nova_compute[235803]: 2025-10-02 12:17:52.922 2 DEBUG nova.compute.manager [req-66871924-8106-4fb1-80da-3be1f37b9bb8 req-040b627d-addc-4c9f-89ed-5748b27b7329 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Refreshing instance network info cache due to event network-changed-f73e5355-2f7c-48f8-bc9f-fd14478616c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:17:52 np0005466031 nova_compute[235803]: 2025-10-02 12:17:52.922 2 DEBUG oslo_concurrency.lockutils [req-66871924-8106-4fb1-80da-3be1f37b9bb8 req-040b627d-addc-4c9f-89ed-5748b27b7329 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.077 2 DEBUG nova.network.neutron [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:17:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:53.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.182 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp10f21u5k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={20a19061-0239-43b4-b9d7-980e7acde072='ff6e5128-91a9-4273-b9c7-d3a4c775f1fa'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct  2 08:17:53 np0005466031 systemd[1]: Starting libvirt proxy daemon...
Oct  2 08:17:53 np0005466031 systemd[1]: Started libvirt proxy daemon.
Oct  2 08:17:53 np0005466031 podman[250940]: 2025-10-02 12:17:53.389663854 +0000 UTC m=+0.083058812 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  2 08:17:53 np0005466031 podman[250941]: 2025-10-02 12:17:53.449454833 +0000 UTC m=+0.148142194 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:17:53 np0005466031 NetworkManager[44907]: <info>  [1759407473.4535] manager: (tap7539c03e-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Oct  2 08:17:53 np0005466031 kernel: tap7539c03e-c9: entered promiscuous mode
Oct  2 08:17:53 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:53Z|00081|binding|INFO|Claiming lport 7539c03e-c932-4473-8d75-729cbed6008a for this additional chassis.
Oct  2 08:17:53 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:53Z|00082|binding|INFO|7539c03e-c932-4473-8d75-729cbed6008a: Claiming fa:16:3e:0e:5e:ba 10.100.0.9
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:53 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:53Z|00083|binding|INFO|Setting lport 7539c03e-c932-4473-8d75-729cbed6008a ovn-installed in OVS
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:53 np0005466031 systemd-udevd[251014]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:17:53 np0005466031 systemd-machined[192227]: New machine qemu-13-instance-0000001f.
Oct  2 08:17:53 np0005466031 systemd[1]: Started Virtual Machine qemu-13-instance-0000001f.
Oct  2 08:17:53 np0005466031 NetworkManager[44907]: <info>  [1759407473.5078] device (tap7539c03e-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:17:53 np0005466031 NetworkManager[44907]: <info>  [1759407473.5099] device (tap7539c03e-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:17:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:53.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.887 2 DEBUG nova.network.neutron [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Updating instance_info_cache with network_info: [{"id": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "address": "fa:16:3e:1b:5c:a4", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf73e5355-2f", "ovs_interfaceid": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.913 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Releasing lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.914 2 DEBUG nova.compute.manager [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Instance network_info: |[{"id": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "address": "fa:16:3e:1b:5c:a4", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf73e5355-2f", "ovs_interfaceid": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.914 2 DEBUG oslo_concurrency.lockutils [req-66871924-8106-4fb1-80da-3be1f37b9bb8 req-040b627d-addc-4c9f-89ed-5748b27b7329 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.914 2 DEBUG nova.network.neutron [req-66871924-8106-4fb1-80da-3be1f37b9bb8 req-040b627d-addc-4c9f-89ed-5748b27b7329 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Refreshing network info cache for port f73e5355-2f7c-48f8-bc9f-fd14478616c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.917 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Start _get_guest_xml network_info=[{"id": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "address": "fa:16:3e:1b:5c:a4", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf73e5355-2f", "ovs_interfaceid": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.925 2 WARNING nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.937 2 DEBUG nova.virt.libvirt.host [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.938 2 DEBUG nova.virt.libvirt.host [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.943 2 DEBUG nova.virt.libvirt.host [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.944 2 DEBUG nova.virt.libvirt.host [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.945 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.946 2 DEBUG nova.virt.hardware [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.946 2 DEBUG nova.virt.hardware [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.947 2 DEBUG nova.virt.hardware [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.947 2 DEBUG nova.virt.hardware [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.947 2 DEBUG nova.virt.hardware [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.948 2 DEBUG nova.virt.hardware [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.948 2 DEBUG nova.virt.hardware [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.948 2 DEBUG nova.virt.hardware [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.949 2 DEBUG nova.virt.hardware [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.949 2 DEBUG nova.virt.hardware [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.949 2 DEBUG nova.virt.hardware [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:17:53 np0005466031 nova_compute[235803]: 2025-10-02 12:17:53.953 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:17:54 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1585577951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.427 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.464 2 DEBUG nova.storage.rbd_utils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.469 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:17:54 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/796225185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.941 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.945 2 DEBUG nova.virt.libvirt.vif [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:17:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-309652729',display_name='tempest-SecurityGroupsTestJSON-server-309652729',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-309652729',id=33,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-2vmynmxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTestJSON-1241678427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:17:50Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=93e1fa10-4ba0-4715-be09-0d7dae7a5484,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "address": "fa:16:3e:1b:5c:a4", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf73e5355-2f", "ovs_interfaceid": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.947 2 DEBUG nova.network.os_vif_util [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "address": "fa:16:3e:1b:5c:a4", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf73e5355-2f", "ovs_interfaceid": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.948 2 DEBUG nova.network.os_vif_util [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:5c:a4,bridge_name='br-int',has_traffic_filtering=True,id=f73e5355-2f7c-48f8-bc9f-fd14478616c3,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf73e5355-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.952 2 DEBUG nova.objects.instance [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'pci_devices' on Instance uuid 93e1fa10-4ba0-4715-be09-0d7dae7a5484 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.975 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  <uuid>93e1fa10-4ba0-4715-be09-0d7dae7a5484</uuid>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  <name>instance-00000021</name>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <nova:name>tempest-SecurityGroupsTestJSON-server-309652729</nova:name>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:17:53</nova:creationTime>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <nova:user uuid="2ed8b6a2129742dfb3b8a0d9f044ac24">tempest-SecurityGroupsTestJSON-1241678427-project-member</nova:user>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <nova:project uuid="f0bd0c6232b84d03a010ba8cf85bda46">tempest-SecurityGroupsTestJSON-1241678427</nova:project>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <nova:port uuid="f73e5355-2f7c-48f8-bc9f-fd14478616c3">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <entry name="serial">93e1fa10-4ba0-4715-be09-0d7dae7a5484</entry>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <entry name="uuid">93e1fa10-4ba0-4715-be09-0d7dae7a5484</entry>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk.config">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:1b:5c:a4"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <target dev="tapf73e5355-2f"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/93e1fa10-4ba0-4715-be09-0d7dae7a5484/console.log" append="off"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:17:54 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:17:54 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:17:54 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:17:54 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.978 2 DEBUG nova.compute.manager [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Preparing to wait for external event network-vif-plugged-f73e5355-2f7c-48f8-bc9f-fd14478616c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.979 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.980 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.980 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.981 2 DEBUG nova.virt.libvirt.vif [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:17:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-309652729',display_name='tempest-SecurityGroupsTestJSON-server-309652729',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-309652729',id=33,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-2vmynmxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTestJSON-1241678427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:17:50Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=93e1fa10-4ba0-4715-be09-0d7dae7a5484,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "address": "fa:16:3e:1b:5c:a4", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf73e5355-2f", "ovs_interfaceid": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.982 2 DEBUG nova.network.os_vif_util [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "address": "fa:16:3e:1b:5c:a4", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf73e5355-2f", "ovs_interfaceid": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.983 2 DEBUG nova.network.os_vif_util [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:5c:a4,bridge_name='br-int',has_traffic_filtering=True,id=f73e5355-2f7c-48f8-bc9f-fd14478616c3,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf73e5355-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.984 2 DEBUG os_vif [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:5c:a4,bridge_name='br-int',has_traffic_filtering=True,id=f73e5355-2f7c-48f8-bc9f-fd14478616c3,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf73e5355-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.986 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.987 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.996 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf73e5355-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:54 np0005466031 nova_compute[235803]: 2025-10-02 12:17:54.997 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf73e5355-2f, col_values=(('external_ids', {'iface-id': 'f73e5355-2f7c-48f8-bc9f-fd14478616c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:5c:a4', 'vm-uuid': '93e1fa10-4ba0-4715-be09-0d7dae7a5484'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:55 np0005466031 NetworkManager[44907]: <info>  [1759407475.0099] manager: (tapf73e5355-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.019 2 INFO os_vif [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:5c:a4,bridge_name='br-int',has_traffic_filtering=True,id=f73e5355-2f7c-48f8-bc9f-fd14478616c3,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf73e5355-2f')
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.075 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.076 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.077 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] No VIF found with MAC fa:16:3e:1b:5c:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.078 2 INFO nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Using config drive
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.112 2 DEBUG nova.storage.rbd_utils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:17:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:55.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:55.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.776 2 INFO nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Creating config drive at /var/lib/nova/instances/93e1fa10-4ba0-4715-be09-0d7dae7a5484/disk.config
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.785 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/93e1fa10-4ba0-4715-be09-0d7dae7a5484/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz90rz5g9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.934 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/93e1fa10-4ba0-4715-be09-0d7dae7a5484/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz90rz5g9" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.968 2 DEBUG nova.storage.rbd_utils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:17:55 np0005466031 nova_compute[235803]: 2025-10-02 12:17:55.972 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/93e1fa10-4ba0-4715-be09-0d7dae7a5484/disk.config 93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.017 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407476.0166066, ecee1ec0-1a8d-4d67-b996-205a942120ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.018 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Started (Lifecycle Event)
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.044 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.189 2 DEBUG nova.network.neutron [req-66871924-8106-4fb1-80da-3be1f37b9bb8 req-040b627d-addc-4c9f-89ed-5748b27b7329 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Updated VIF entry in instance network info cache for port f73e5355-2f7c-48f8-bc9f-fd14478616c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.189 2 DEBUG nova.network.neutron [req-66871924-8106-4fb1-80da-3be1f37b9bb8 req-040b627d-addc-4c9f-89ed-5748b27b7329 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Updating instance_info_cache with network_info: [{"id": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "address": "fa:16:3e:1b:5c:a4", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf73e5355-2f", "ovs_interfaceid": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.202 2 DEBUG oslo_concurrency.lockutils [req-66871924-8106-4fb1-80da-3be1f37b9bb8 req-040b627d-addc-4c9f-89ed-5748b27b7329 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.229 2 DEBUG oslo_concurrency.processutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/93e1fa10-4ba0-4715-be09-0d7dae7a5484/disk.config 93e1fa10-4ba0-4715-be09-0d7dae7a5484_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.230 2 INFO nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Deleting local config drive /var/lib/nova/instances/93e1fa10-4ba0-4715-be09-0d7dae7a5484/disk.config because it was imported into RBD.
Oct  2 08:17:56 np0005466031 kernel: tapf73e5355-2f: entered promiscuous mode
Oct  2 08:17:56 np0005466031 NetworkManager[44907]: <info>  [1759407476.2895] manager: (tapf73e5355-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct  2 08:17:56 np0005466031 NetworkManager[44907]: <info>  [1759407476.3037] device (tapf73e5355-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:17:56 np0005466031 NetworkManager[44907]: <info>  [1759407476.3047] device (tapf73e5355-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:17:56 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:56Z|00084|binding|INFO|Claiming lport f73e5355-2f7c-48f8-bc9f-fd14478616c3 for this chassis.
Oct  2 08:17:56 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:56Z|00085|binding|INFO|f73e5355-2f7c-48f8-bc9f-fd14478616c3: Claiming fa:16:3e:1b:5c:a4 10.100.0.10
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.396 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:5c:a4 10.100.0.10'], port_security=['fa:16:3e:1b:5c:a4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '93e1fa10-4ba0-4715-be09-0d7dae7a5484', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=f73e5355-2f7c-48f8-bc9f-fd14478616c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.397 141898 INFO neutron.agent.ovn.metadata.agent [-] Port f73e5355-2f7c-48f8-bc9f-fd14478616c3 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f bound to our chassis
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.400 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cec9cbfc-5dec-4f85-90c5-6104a054547f
Oct  2 08:17:56 np0005466031 systemd-machined[192227]: New machine qemu-14-instance-00000021.
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.413 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c93914-23e8-4e38-b6bf-f1a03420408f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.414 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcec9cbfc-51 in ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.416 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcec9cbfc-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.416 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e4487760-ed7b-4aea-961b-d7c66257ef6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.417 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1f75a1f3-bbcc-40ee-b394-9817b4263040]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 systemd[1]: Started Virtual Machine qemu-14-instance-00000021.
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.432 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[1f7fc2d3-0ea0-4cd5-801a-37766c659b59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.448 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ca2500-ef68-4edb-b091-4fee7f5e51b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:17:56 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:56Z|00086|binding|INFO|Setting lport f73e5355-2f7c-48f8-bc9f-fd14478616c3 ovn-installed in OVS
Oct  2 08:17:56 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:56Z|00087|binding|INFO|Setting lport f73e5355-2f7c-48f8-bc9f-fd14478616c3 up in Southbound
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.478 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[09a7b2e1-26cb-4d5c-b4c7-63071c4d634d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 NetworkManager[44907]: <info>  [1759407476.4880] manager: (tapcec9cbfc-50): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.486 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[838c9ead-0001-42fd-b578-ca677857d959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 systemd-udevd[251266]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.524 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[10c2a927-86d0-44e4-b3c2-de64ce894de8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.532 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c1227780-f584-4276-8446-648c6b031b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 NetworkManager[44907]: <info>  [1759407476.5528] device (tapcec9cbfc-50): carrier: link connected
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.559 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f51ba5f9-6811-4628-8487-f452e50d55e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.573 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[71158a7b-341c-4fa2-8269-167031bdb5cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcec9cbfc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:09:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533214, 'reachable_time': 30982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251286, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.588 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1346a68a-6bd8-4dd0-bd8a-15e995757f54]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:917'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533214, 'tstamp': 533214}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251287, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.606 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[85d86546-4304-43e0-bf9a-0fc2efa696be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcec9cbfc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:09:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533214, 'reachable_time': 30982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251288, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.609 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407476.6090167, ecee1ec0-1a8d-4d67-b996-205a942120ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.610 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Resumed (Lifecycle Event)
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.640 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0981b8-25e9-45a5-9b92-5b60c356c0bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.675 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.680 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.708 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-2.ctlplane.example.com
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.719 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[14cf652a-2944-41d7-b804-9bac64e0f0a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.720 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcec9cbfc-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.720 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.720 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcec9cbfc-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:17:56 np0005466031 NetworkManager[44907]: <info>  [1759407476.7230] manager: (tapcec9cbfc-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:17:56 np0005466031 kernel: tapcec9cbfc-50: entered promiscuous mode
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.730 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcec9cbfc-50, col_values=(('external_ids', {'iface-id': '7fdb9d3a-47f2-4c84-9ee0-5806a194a1f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:17:56 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:56Z|00088|binding|INFO|Releasing lport 7fdb9d3a-47f2-4c84-9ee0-5806a194a1f4 from this chassis (sb_readonly=0)
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.734 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.735 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9db6c3-5028-4be5-a7bb-7c8ba8fb0de0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.736 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-cec9cbfc-5dec-4f85-90c5-6104a054547f
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID cec9cbfc-5dec-4f85-90c5-6104a054547f
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:17:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:56.736 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'env', 'PROCESS_TAG=haproxy-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cec9cbfc-5dec-4f85-90c5-6104a054547f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:17:56 np0005466031 nova_compute[235803]: 2025-10-02 12:17:56.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:57.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:57 np0005466031 podman[251362]: 2025-10-02 12:17:57.146599954 +0000 UTC m=+0.054393114 container create c40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.159 2 DEBUG nova.compute.manager [req-0ea44e96-caa6-4a0c-b989-70af18efe842 req-b06e9a66-b1f4-44ba-ad09-674b3cbddc3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Received event network-vif-plugged-f73e5355-2f7c-48f8-bc9f-fd14478616c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.160 2 DEBUG oslo_concurrency.lockutils [req-0ea44e96-caa6-4a0c-b989-70af18efe842 req-b06e9a66-b1f4-44ba-ad09-674b3cbddc3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.160 2 DEBUG oslo_concurrency.lockutils [req-0ea44e96-caa6-4a0c-b989-70af18efe842 req-b06e9a66-b1f4-44ba-ad09-674b3cbddc3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.161 2 DEBUG oslo_concurrency.lockutils [req-0ea44e96-caa6-4a0c-b989-70af18efe842 req-b06e9a66-b1f4-44ba-ad09-674b3cbddc3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.161 2 DEBUG nova.compute.manager [req-0ea44e96-caa6-4a0c-b989-70af18efe842 req-b06e9a66-b1f4-44ba-ad09-674b3cbddc3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Processing event network-vif-plugged-f73e5355-2f7c-48f8-bc9f-fd14478616c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:17:57 np0005466031 systemd[1]: Started libpod-conmon-c40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93.scope.
Oct  2 08:17:57 np0005466031 podman[251362]: 2025-10-02 12:17:57.121079736 +0000 UTC m=+0.028872896 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:17:57 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:17:57 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca4360ebdef5f16cc1f44eb20609dddcad5988be530b66ffe6703037187dff8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:17:57 np0005466031 podman[251362]: 2025-10-02 12:17:57.249763796 +0000 UTC m=+0.157556976 container init c40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:17:57 np0005466031 podman[251362]: 2025-10-02 12:17:57.255223524 +0000 UTC m=+0.163016684 container start c40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.256 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407477.2556155, 93e1fa10-4ba0-4715-be09-0d7dae7a5484 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.256 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] VM Started (Lifecycle Event)#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.258 2 DEBUG nova.compute.manager [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.262 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.266 2 INFO nova.virt.libvirt.driver [-] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Instance spawned successfully.#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.266 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.274 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:57 np0005466031 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[251377]: [NOTICE]   (251381) : New worker (251383) forked
Oct  2 08:17:57 np0005466031 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[251377]: [NOTICE]   (251381) : Loading success.
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.282 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.287 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.288 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.288 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.289 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.290 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.290 2 DEBUG nova.virt.libvirt.driver [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.318 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.319 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407477.256899, 93e1fa10-4ba0-4715-be09-0d7dae7a5484 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.319 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.361 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.364 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407477.260853, 93e1fa10-4ba0-4715-be09-0d7dae7a5484 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.364 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.369 2 INFO nova.compute.manager [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Took 6.33 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.369 2 DEBUG nova.compute.manager [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.396 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.404 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.434 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.448 2 INFO nova.compute.manager [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Took 7.38 seconds to build instance.#033[00m
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.486 2 DEBUG oslo_concurrency.lockutils [None req-d1801ca9-4861-43e0-8409-30e6a47dee99 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:17:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:57.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:17:57 np0005466031 nova_compute[235803]: 2025-10-02 12:17:57.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:58 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:58Z|00089|binding|INFO|Claiming lport 7539c03e-c932-4473-8d75-729cbed6008a for this chassis.
Oct  2 08:17:58 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:58Z|00090|binding|INFO|7539c03e-c932-4473-8d75-729cbed6008a: Claiming fa:16:3e:0e:5e:ba 10.100.0.9
Oct  2 08:17:58 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:58Z|00091|binding|INFO|Setting lport 7539c03e-c932-4473-8d75-729cbed6008a up in Southbound
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.611 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:5e:ba 10.100.0.9'], port_security=['fa:16:3e:0e:5e:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ecee1ec0-1a8d-4d67-b996-205a942120ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '21', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7539c03e-c932-4473-8d75-729cbed6008a) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.612 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7539c03e-c932-4473-8d75-729cbed6008a in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e bound to our chassis#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.614 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5989958f-ccbb-4db4-8dcb-18563aa2418e#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.624 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d5d0c1-0444-4741-9003-fd2aeb01c961]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.625 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5989958f-c1 in ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.627 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5989958f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.627 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebd7246-3bec-43af-867f-83c1cf4fb4c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.628 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c6937b41-5230-4114-82c7-aab066254c7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.639 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[8605471c-e167-4bf5-9a7e-7b602e41d829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.662 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[17c96a9e-b011-4929-95b1-b692e8d9d6c0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:17:58 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3679030386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.690 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[bad8b6a3-c350-4261-ab6a-2fbcfffd92fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 NetworkManager[44907]: <info>  [1759407478.6972] manager: (tap5989958f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Oct  2 08:17:58 np0005466031 systemd-udevd[251277]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.701 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[eda9f60d-ef57-483b-8d06-d3b6b4c14e81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.736 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7cb7cd-3e48-431c-8dc3-668088b72acd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.739 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f20c79-415a-4038-a9f2-62bad94c9941]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 NetworkManager[44907]: <info>  [1759407478.7609] device (tap5989958f-c0): carrier: link connected
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.768 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8557b4-460e-43f5-bcdf-e7f27787c939]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.789 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[014ec5fa-03cd-4ad0-8226-42b112440cb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533435, 'reachable_time': 23368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251403, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 nova_compute[235803]: 2025-10-02 12:17:58.799 2 INFO nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Post operation of migration started#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.805 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1bdf5c01-eb4e-468e-8dbc-ebcaef1edea3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:d212'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533435, 'tstamp': 533435}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251404, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.821 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[44178cbf-61d8-49ee-b218-38b47dc682be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533435, 'reachable_time': 23368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251405, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.851 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6be173-62f2-46b4-a1e2-932661ec646a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.910 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b18e1e00-8ceb-4fc2-8699-3379b2ca5025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.911 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.911 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.912 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5989958f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:58 np0005466031 nova_compute[235803]: 2025-10-02 12:17:58.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:58 np0005466031 NetworkManager[44907]: <info>  [1759407478.9145] manager: (tap5989958f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct  2 08:17:58 np0005466031 kernel: tap5989958f-c0: entered promiscuous mode
Oct  2 08:17:58 np0005466031 nova_compute[235803]: 2025-10-02 12:17:58.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.917 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5989958f-c0, col_values=(('external_ids', {'iface-id': 'c7d8e124-cc34-42e6-82ac-6fdf057166bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:58 np0005466031 nova_compute[235803]: 2025-10-02 12:17:58.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:58 np0005466031 ovn_controller[132413]: 2025-10-02T12:17:58Z|00092|binding|INFO|Releasing lport c7d8e124-cc34-42e6-82ac-6fdf057166bf from this chassis (sb_readonly=0)
Oct  2 08:17:58 np0005466031 nova_compute[235803]: 2025-10-02 12:17:58.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.932 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.933 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3129f183-4b5c-42ae-9b72-677c1b9f3c31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.934 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:17:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:17:58.935 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'env', 'PROCESS_TAG=haproxy-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5989958f-ccbb-4db4-8dcb-18563aa2418e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:17:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:59.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:59 np0005466031 podman[251438]: 2025-10-02 12:17:59.292413537 +0000 UTC m=+0.051236013 container create 0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:17:59 np0005466031 nova_compute[235803]: 2025-10-02 12:17:59.308 2 DEBUG nova.compute.manager [req-1fdfaa2b-5473-4b97-bde6-02ab2f2e626a req-b77101aa-2718-4d48-a0b5-74bf572a370e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Received event network-vif-plugged-f73e5355-2f7c-48f8-bc9f-fd14478616c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:59 np0005466031 nova_compute[235803]: 2025-10-02 12:17:59.309 2 DEBUG oslo_concurrency.lockutils [req-1fdfaa2b-5473-4b97-bde6-02ab2f2e626a req-b77101aa-2718-4d48-a0b5-74bf572a370e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:59 np0005466031 nova_compute[235803]: 2025-10-02 12:17:59.310 2 DEBUG oslo_concurrency.lockutils [req-1fdfaa2b-5473-4b97-bde6-02ab2f2e626a req-b77101aa-2718-4d48-a0b5-74bf572a370e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:59 np0005466031 nova_compute[235803]: 2025-10-02 12:17:59.310 2 DEBUG oslo_concurrency.lockutils [req-1fdfaa2b-5473-4b97-bde6-02ab2f2e626a req-b77101aa-2718-4d48-a0b5-74bf572a370e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:59 np0005466031 nova_compute[235803]: 2025-10-02 12:17:59.310 2 DEBUG nova.compute.manager [req-1fdfaa2b-5473-4b97-bde6-02ab2f2e626a req-b77101aa-2718-4d48-a0b5-74bf572a370e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] No waiting events found dispatching network-vif-plugged-f73e5355-2f7c-48f8-bc9f-fd14478616c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:59 np0005466031 nova_compute[235803]: 2025-10-02 12:17:59.310 2 WARNING nova.compute.manager [req-1fdfaa2b-5473-4b97-bde6-02ab2f2e626a req-b77101aa-2718-4d48-a0b5-74bf572a370e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Received unexpected event network-vif-plugged-f73e5355-2f7c-48f8-bc9f-fd14478616c3 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:17:59 np0005466031 nova_compute[235803]: 2025-10-02 12:17:59.322 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:59 np0005466031 nova_compute[235803]: 2025-10-02 12:17:59.322 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquired lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:59 np0005466031 nova_compute[235803]: 2025-10-02 12:17:59.323 2 DEBUG nova.network.neutron [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:17:59 np0005466031 systemd[1]: Started libpod-conmon-0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8.scope.
Oct  2 08:17:59 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:17:59 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/168d6f9c58a8248a2c9adf11128dfa54b237f0eddba31c9c4aad5eb86d9a997b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:17:59 np0005466031 podman[251438]: 2025-10-02 12:17:59.269648338 +0000 UTC m=+0.028470854 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:17:59 np0005466031 podman[251438]: 2025-10-02 12:17:59.369784653 +0000 UTC m=+0.128607139 container init 0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:17:59 np0005466031 podman[251438]: 2025-10-02 12:17:59.376177148 +0000 UTC m=+0.134999634 container start 0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:17:59 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[251454]: [NOTICE]   (251458) : New worker (251460) forked
Oct  2 08:17:59 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[251454]: [NOTICE]   (251458) : Loading success.
Oct  2 08:17:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:17:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:59.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:00 np0005466031 nova_compute[235803]: 2025-10-02 12:18:00.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:00 np0005466031 nova_compute[235803]: 2025-10-02 12:18:00.382 2 DEBUG nova.compute.manager [req-10201f94-3ff0-453d-8da2-99c09585cb5c req-4ede0543-4631-4543-b1ee-2c01b621e8af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Received event network-changed-f73e5355-2f7c-48f8-bc9f-fd14478616c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:00 np0005466031 nova_compute[235803]: 2025-10-02 12:18:00.382 2 DEBUG nova.compute.manager [req-10201f94-3ff0-453d-8da2-99c09585cb5c req-4ede0543-4631-4543-b1ee-2c01b621e8af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Refreshing instance network info cache due to event network-changed-f73e5355-2f7c-48f8-bc9f-fd14478616c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:18:00 np0005466031 nova_compute[235803]: 2025-10-02 12:18:00.383 2 DEBUG oslo_concurrency.lockutils [req-10201f94-3ff0-453d-8da2-99c09585cb5c req-4ede0543-4631-4543-b1ee-2c01b621e8af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:18:00 np0005466031 nova_compute[235803]: 2025-10-02 12:18:00.383 2 DEBUG oslo_concurrency.lockutils [req-10201f94-3ff0-453d-8da2-99c09585cb5c req-4ede0543-4631-4543-b1ee-2c01b621e8af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:18:00 np0005466031 nova_compute[235803]: 2025-10-02 12:18:00.383 2 DEBUG nova.network.neutron [req-10201f94-3ff0-453d-8da2-99c09585cb5c req-4ede0543-4631-4543-b1ee-2c01b621e8af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Refreshing network info cache for port f73e5355-2f7c-48f8-bc9f-fd14478616c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:18:00 np0005466031 podman[251470]: 2025-10-02 12:18:00.62221383 +0000 UTC m=+0.053789036 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 08:18:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:01.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:01 np0005466031 nova_compute[235803]: 2025-10-02 12:18:01.479 2 DEBUG nova.network.neutron [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating instance_info_cache with network_info: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:01 np0005466031 nova_compute[235803]: 2025-10-02 12:18:01.520 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Releasing lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:18:01 np0005466031 nova_compute[235803]: 2025-10-02 12:18:01.535 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:01 np0005466031 nova_compute[235803]: 2025-10-02 12:18:01.536 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:01 np0005466031 nova_compute[235803]: 2025-10-02 12:18:01.536 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:01 np0005466031 nova_compute[235803]: 2025-10-02 12:18:01.540 2 INFO nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct  2 08:18:01 np0005466031 virtqemud[235323]: Domain id=13 name='instance-0000001f' uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae is tainted: custom-monitor
Oct  2 08:18:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:01.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:02 np0005466031 nova_compute[235803]: 2025-10-02 12:18:02.549 2 INFO nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct  2 08:18:02 np0005466031 nova_compute[235803]: 2025-10-02 12:18:02.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:02 np0005466031 nova_compute[235803]: 2025-10-02 12:18:02.838 2 DEBUG nova.compute.manager [req-2f43f3e6-08eb-4a7b-b9fa-ce60ff335ff4 req-93f2404f-4f76-464a-8f7a-bf84f82fe07d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Received event network-changed-f73e5355-2f7c-48f8-bc9f-fd14478616c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:02 np0005466031 nova_compute[235803]: 2025-10-02 12:18:02.839 2 DEBUG nova.compute.manager [req-2f43f3e6-08eb-4a7b-b9fa-ce60ff335ff4 req-93f2404f-4f76-464a-8f7a-bf84f82fe07d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Refreshing instance network info cache due to event network-changed-f73e5355-2f7c-48f8-bc9f-fd14478616c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:18:02 np0005466031 nova_compute[235803]: 2025-10-02 12:18:02.839 2 DEBUG oslo_concurrency.lockutils [req-2f43f3e6-08eb-4a7b-b9fa-ce60ff335ff4 req-93f2404f-4f76-464a-8f7a-bf84f82fe07d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:18:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:03.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:03 np0005466031 nova_compute[235803]: 2025-10-02 12:18:03.310 2 DEBUG nova.network.neutron [req-10201f94-3ff0-453d-8da2-99c09585cb5c req-4ede0543-4631-4543-b1ee-2c01b621e8af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Updated VIF entry in instance network info cache for port f73e5355-2f7c-48f8-bc9f-fd14478616c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:18:03 np0005466031 nova_compute[235803]: 2025-10-02 12:18:03.312 2 DEBUG nova.network.neutron [req-10201f94-3ff0-453d-8da2-99c09585cb5c req-4ede0543-4631-4543-b1ee-2c01b621e8af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Updating instance_info_cache with network_info: [{"id": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "address": "fa:16:3e:1b:5c:a4", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf73e5355-2f", "ovs_interfaceid": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:03 np0005466031 nova_compute[235803]: 2025-10-02 12:18:03.339 2 DEBUG oslo_concurrency.lockutils [req-10201f94-3ff0-453d-8da2-99c09585cb5c req-4ede0543-4631-4543-b1ee-2c01b621e8af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:18:03 np0005466031 nova_compute[235803]: 2025-10-02 12:18:03.341 2 DEBUG oslo_concurrency.lockutils [req-2f43f3e6-08eb-4a7b-b9fa-ce60ff335ff4 req-93f2404f-4f76-464a-8f7a-bf84f82fe07d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:18:03 np0005466031 nova_compute[235803]: 2025-10-02 12:18:03.341 2 DEBUG nova.network.neutron [req-2f43f3e6-08eb-4a7b-b9fa-ce60ff335ff4 req-93f2404f-4f76-464a-8f7a-bf84f82fe07d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Refreshing network info cache for port f73e5355-2f7c-48f8-bc9f-fd14478616c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:18:03 np0005466031 nova_compute[235803]: 2025-10-02 12:18:03.558 2 INFO nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct  2 08:18:03 np0005466031 nova_compute[235803]: 2025-10-02 12:18:03.566 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:03 np0005466031 nova_compute[235803]: 2025-10-02 12:18:03.595 2 DEBUG nova.objects.instance [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:18:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:03.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:05 np0005466031 nova_compute[235803]: 2025-10-02 12:18:05.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:05.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:05 np0005466031 nova_compute[235803]: 2025-10-02 12:18:05.284 2 DEBUG nova.network.neutron [req-2f43f3e6-08eb-4a7b-b9fa-ce60ff335ff4 req-93f2404f-4f76-464a-8f7a-bf84f82fe07d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Updated VIF entry in instance network info cache for port f73e5355-2f7c-48f8-bc9f-fd14478616c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:18:05 np0005466031 nova_compute[235803]: 2025-10-02 12:18:05.284 2 DEBUG nova.network.neutron [req-2f43f3e6-08eb-4a7b-b9fa-ce60ff335ff4 req-93f2404f-4f76-464a-8f7a-bf84f82fe07d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Updating instance_info_cache with network_info: [{"id": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "address": "fa:16:3e:1b:5c:a4", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf73e5355-2f", "ovs_interfaceid": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:05 np0005466031 nova_compute[235803]: 2025-10-02 12:18:05.310 2 DEBUG oslo_concurrency.lockutils [req-2f43f3e6-08eb-4a7b-b9fa-ce60ff335ff4 req-93f2404f-4f76-464a-8f7a-bf84f82fe07d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:18:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:05.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:06 np0005466031 podman[251494]: 2025-10-02 12:18:06.630132893 +0000 UTC m=+0.059049328 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:18:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:07.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.329 2 DEBUG oslo_concurrency.lockutils [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.330 2 DEBUG oslo_concurrency.lockutils [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.330 2 DEBUG oslo_concurrency.lockutils [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.331 2 DEBUG oslo_concurrency.lockutils [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.332 2 DEBUG oslo_concurrency.lockutils [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.334 2 INFO nova.compute.manager [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Terminating instance#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.336 2 DEBUG nova.compute.manager [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:18:07 np0005466031 kernel: tap7539c03e-c9 (unregistering): left promiscuous mode
Oct  2 08:18:07 np0005466031 NetworkManager[44907]: <info>  [1759407487.4518] device (tap7539c03e-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:18:07Z|00093|binding|INFO|Releasing lport 7539c03e-c932-4473-8d75-729cbed6008a from this chassis (sb_readonly=0)
Oct  2 08:18:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:18:07Z|00094|binding|INFO|Setting lport 7539c03e-c932-4473-8d75-729cbed6008a down in Southbound
Oct  2 08:18:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:18:07Z|00095|binding|INFO|Removing iface tap7539c03e-c9 ovn-installed in OVS
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.471 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:5e:ba 10.100.0.9'], port_security=['fa:16:3e:0e:5e:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ecee1ec0-1a8d-4d67-b996-205a942120ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '23', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7539c03e-c932-4473-8d75-729cbed6008a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.473 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7539c03e-c932-4473-8d75-729cbed6008a in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e unbound from our chassis#033[00m
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.488 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5989958f-ccbb-4db4-8dcb-18563aa2418e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.489 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee0ae26-9fff-4617-b0a8-0261cb0b577b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.489 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e namespace which is not needed anymore#033[00m
Oct  2 08:18:07 np0005466031 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct  2 08:18:07 np0005466031 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001f.scope: Consumed 3.272s CPU time.
Oct  2 08:18:07 np0005466031 systemd-machined[192227]: Machine qemu-13-instance-0000001f terminated.
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.568 2 INFO nova.virt.libvirt.driver [-] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Instance destroyed successfully.#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.569 2 DEBUG nova.objects.instance [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lazy-loading 'resources' on Instance uuid ecee1ec0-1a8d-4d67-b996-205a942120ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.589 2 DEBUG nova.virt.libvirt.vif [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-343208228',display_name='tempest-LiveMigrationTest-server-343208228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-343208228',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-urrasvl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:18:03Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.590 2 DEBUG nova.network.os_vif_util [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converting VIF {"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.595 2 DEBUG nova.network.os_vif_util [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.595 2 DEBUG os_vif [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7539c03e-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.641 2 INFO os_vif [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9')#033[00m
Oct  2 08:18:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:07.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:07 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[251454]: [NOTICE]   (251458) : haproxy version is 2.8.14-c23fe91
Oct  2 08:18:07 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[251454]: [NOTICE]   (251458) : path to executable is /usr/sbin/haproxy
Oct  2 08:18:07 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[251454]: [WARNING]  (251458) : Exiting Master process...
Oct  2 08:18:07 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[251454]: [WARNING]  (251458) : Exiting Master process...
Oct  2 08:18:07 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[251454]: [ALERT]    (251458) : Current worker (251460) exited with code 143 (Terminated)
Oct  2 08:18:07 np0005466031 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[251454]: [WARNING]  (251458) : All workers exited. Exiting... (0)
Oct  2 08:18:07 np0005466031 systemd[1]: libpod-0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8.scope: Deactivated successfully.
Oct  2 08:18:07 np0005466031 podman[251549]: 2025-10-02 12:18:07.716672344 +0000 UTC m=+0.062668693 container died 0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:07 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8-userdata-shm.mount: Deactivated successfully.
Oct  2 08:18:07 np0005466031 systemd[1]: var-lib-containers-storage-overlay-168d6f9c58a8248a2c9adf11128dfa54b237f0eddba31c9c4aad5eb86d9a997b-merged.mount: Deactivated successfully.
Oct  2 08:18:07 np0005466031 podman[251549]: 2025-10-02 12:18:07.772529789 +0000 UTC m=+0.118526138 container cleanup 0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:18:07 np0005466031 systemd[1]: libpod-conmon-0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8.scope: Deactivated successfully.
Oct  2 08:18:07 np0005466031 podman[251598]: 2025-10-02 12:18:07.834525851 +0000 UTC m=+0.041903182 container remove 0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.840 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d757eecd-7dc4-46e1-980d-d195528b96ae]: (4, ('Thu Oct  2 12:18:07 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e (0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8)\n0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8\nThu Oct  2 12:18:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e (0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8)\n0b31ab86ce0e76c7369dfe53abc34a746bc561f408fd99d3820cb06cc43e72d8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.842 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[35f0a734-8ca1-4359-89c4-84e1e64ab45f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.843 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:07 np0005466031 kernel: tap5989958f-c0: left promiscuous mode
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.877 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b4cc3787-930b-47f1-84bd-79aba12d084e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.904 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[32d68fe5-2414-46dc-a406-40a541fcf38a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.905 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[44baa96e-259f-4872-a3c7-c5d0f19be723]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.920 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4660b6f5-a06e-4e0f-82e9-c6cd27091a4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533427, 'reachable_time': 32295, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251613, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:07 np0005466031 systemd[1]: run-netns-ovnmeta\x2d5989958f\x2dccbb\x2d4db4\x2d8dcb\x2d18563aa2418e.mount: Deactivated successfully.
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.924 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:18:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:07.924 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[0822e46e-e3a6-41b2-ad4a-4c936c60d45a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.979 2 DEBUG nova.compute.manager [req-f3297aa3-c956-4ab5-aebf-db1076e9dc40 req-ccb9d163-4dea-428f-a109-85b571bb32a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.980 2 DEBUG oslo_concurrency.lockutils [req-f3297aa3-c956-4ab5-aebf-db1076e9dc40 req-ccb9d163-4dea-428f-a109-85b571bb32a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.980 2 DEBUG oslo_concurrency.lockutils [req-f3297aa3-c956-4ab5-aebf-db1076e9dc40 req-ccb9d163-4dea-428f-a109-85b571bb32a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.980 2 DEBUG oslo_concurrency.lockutils [req-f3297aa3-c956-4ab5-aebf-db1076e9dc40 req-ccb9d163-4dea-428f-a109-85b571bb32a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.981 2 DEBUG nova.compute.manager [req-f3297aa3-c956-4ab5-aebf-db1076e9dc40 req-ccb9d163-4dea-428f-a109-85b571bb32a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:07 np0005466031 nova_compute[235803]: 2025-10-02 12:18:07.981 2 DEBUG nova.compute.manager [req-f3297aa3-c956-4ab5-aebf-db1076e9dc40 req-ccb9d163-4dea-428f-a109-85b571bb32a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:18:08 np0005466031 nova_compute[235803]: 2025-10-02 12:18:08.167 2 INFO nova.virt.libvirt.driver [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Deleting instance files /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae_del#033[00m
Oct  2 08:18:08 np0005466031 nova_compute[235803]: 2025-10-02 12:18:08.168 2 INFO nova.virt.libvirt.driver [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Deletion of /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae_del complete#033[00m
Oct  2 08:18:08 np0005466031 nova_compute[235803]: 2025-10-02 12:18:08.215 2 INFO nova.compute.manager [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:18:08 np0005466031 nova_compute[235803]: 2025-10-02 12:18:08.215 2 DEBUG oslo.service.loopingcall [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:18:08 np0005466031 nova_compute[235803]: 2025-10-02 12:18:08.215 2 DEBUG nova.compute.manager [-] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:18:08 np0005466031 nova_compute[235803]: 2025-10-02 12:18:08.216 2 DEBUG nova.network.neutron [-] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:18:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:09.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:09 np0005466031 ovn_controller[132413]: 2025-10-02T12:18:09Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:5c:a4 10.100.0.10
Oct  2 08:18:09 np0005466031 ovn_controller[132413]: 2025-10-02T12:18:09Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:5c:a4 10.100.0.10
Oct  2 08:18:09 np0005466031 nova_compute[235803]: 2025-10-02 12:18:09.505 2 DEBUG nova.network.neutron [-] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:09 np0005466031 nova_compute[235803]: 2025-10-02 12:18:09.541 2 INFO nova.compute.manager [-] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Took 1.32 seconds to deallocate network for instance.#033[00m
Oct  2 08:18:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:09.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:09 np0005466031 nova_compute[235803]: 2025-10-02 12:18:09.819 2 INFO nova.compute.manager [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Took 0.28 seconds to detach 1 volumes for instance.#033[00m
Oct  2 08:18:09 np0005466031 nova_compute[235803]: 2025-10-02 12:18:09.821 2 DEBUG nova.compute.manager [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Deleting volume: 20a19061-0239-43b4-b9d7-980e7acde072 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct  2 08:18:10 np0005466031 nova_compute[235803]: 2025-10-02 12:18:10.091 2 DEBUG nova.compute.manager [req-93350715-69b2-4133-adf0-5fbfea9662d8 req-3f80d4ed-5b49-40a0-b273-f7eab6a0efe6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:10 np0005466031 nova_compute[235803]: 2025-10-02 12:18:10.092 2 DEBUG oslo_concurrency.lockutils [req-93350715-69b2-4133-adf0-5fbfea9662d8 req-3f80d4ed-5b49-40a0-b273-f7eab6a0efe6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:10 np0005466031 nova_compute[235803]: 2025-10-02 12:18:10.092 2 DEBUG oslo_concurrency.lockutils [req-93350715-69b2-4133-adf0-5fbfea9662d8 req-3f80d4ed-5b49-40a0-b273-f7eab6a0efe6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:10 np0005466031 nova_compute[235803]: 2025-10-02 12:18:10.093 2 DEBUG oslo_concurrency.lockutils [req-93350715-69b2-4133-adf0-5fbfea9662d8 req-3f80d4ed-5b49-40a0-b273-f7eab6a0efe6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:10 np0005466031 nova_compute[235803]: 2025-10-02 12:18:10.093 2 DEBUG nova.compute.manager [req-93350715-69b2-4133-adf0-5fbfea9662d8 req-3f80d4ed-5b49-40a0-b273-f7eab6a0efe6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:10 np0005466031 nova_compute[235803]: 2025-10-02 12:18:10.094 2 WARNING nova.compute.manager [req-93350715-69b2-4133-adf0-5fbfea9662d8 req-3f80d4ed-5b49-40a0-b273-f7eab6a0efe6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:18:10 np0005466031 nova_compute[235803]: 2025-10-02 12:18:10.094 2 DEBUG nova.compute.manager [req-93350715-69b2-4133-adf0-5fbfea9662d8 req-3f80d4ed-5b49-40a0-b273-f7eab6a0efe6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-deleted-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:10 np0005466031 nova_compute[235803]: 2025-10-02 12:18:10.119 2 DEBUG oslo_concurrency.lockutils [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:10 np0005466031 nova_compute[235803]: 2025-10-02 12:18:10.119 2 DEBUG oslo_concurrency.lockutils [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:10 np0005466031 nova_compute[235803]: 2025-10-02 12:18:10.125 2 DEBUG oslo_concurrency.lockutils [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:10 np0005466031 nova_compute[235803]: 2025-10-02 12:18:10.180 2 INFO nova.scheduler.client.report [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Deleted allocations for instance ecee1ec0-1a8d-4d67-b996-205a942120ae#033[00m
Oct  2 08:18:10 np0005466031 nova_compute[235803]: 2025-10-02 12:18:10.266 2 DEBUG oslo_concurrency.lockutils [None req-4265688e-f91d-4d76-9884-ebf0519db985 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:18:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2594510805' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:18:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:18:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2594510805' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:18:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:11.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:11.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:12 np0005466031 nova_compute[235803]: 2025-10-02 12:18:12.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:12 np0005466031 nova_compute[235803]: 2025-10-02 12:18:12.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:13.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:13.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:15.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:15.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:16.741 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:16 np0005466031 nova_compute[235803]: 2025-10-02 12:18:16.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:16.743 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:18:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:17.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:17 np0005466031 nova_compute[235803]: 2025-10-02 12:18:17.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:17.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:17 np0005466031 nova_compute[235803]: 2025-10-02 12:18:17.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:19.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:19.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:19.745 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:21.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:21.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:22 np0005466031 nova_compute[235803]: 2025-10-02 12:18:22.568 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407487.5653446, ecee1ec0-1a8d-4d67-b996-205a942120ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:18:22 np0005466031 nova_compute[235803]: 2025-10-02 12:18:22.568 2 INFO nova.compute.manager [-] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:18:22 np0005466031 nova_compute[235803]: 2025-10-02 12:18:22.611 2 DEBUG nova.compute.manager [None req-6ac6b4c1-92e1-4d59-bf66-8f1d78860922 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:22 np0005466031 nova_compute[235803]: 2025-10-02 12:18:22.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:22 np0005466031 nova_compute[235803]: 2025-10-02 12:18:22.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:23.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:23 np0005466031 podman[251672]: 2025-10-02 12:18:23.645458402 +0000 UTC m=+0.078941413 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:18:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:23.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:23 np0005466031 podman[251673]: 2025-10-02 12:18:23.706077855 +0000 UTC m=+0.139271577 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:18:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:25.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:25.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:25.824 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:25.825 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:18:25.826 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:27.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:27.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:27 np0005466031 nova_compute[235803]: 2025-10-02 12:18:27.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:27 np0005466031 nova_compute[235803]: 2025-10-02 12:18:27.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:29.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:29.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:29 np0005466031 ovn_controller[132413]: 2025-10-02T12:18:29Z|00096|binding|INFO|Releasing lport 7fdb9d3a-47f2-4c84-9ee0-5806a194a1f4 from this chassis (sb_readonly=0)
Oct  2 08:18:29 np0005466031 nova_compute[235803]: 2025-10-02 12:18:29.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:31.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:31 np0005466031 podman[251770]: 2025-10-02 12:18:31.631526029 +0000 UTC m=+0.067361398 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:18:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:31.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:32 np0005466031 nova_compute[235803]: 2025-10-02 12:18:32.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:32 np0005466031 nova_compute[235803]: 2025-10-02 12:18:32.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:33.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:33 np0005466031 nova_compute[235803]: 2025-10-02 12:18:33.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:33 np0005466031 nova_compute[235803]: 2025-10-02 12:18:33.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:18:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:33.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:18:34 np0005466031 nova_compute[235803]: 2025-10-02 12:18:34.630 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:34 np0005466031 nova_compute[235803]: 2025-10-02 12:18:34.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:35.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:35.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:36 np0005466031 nova_compute[235803]: 2025-10-02 12:18:36.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:36 np0005466031 nova_compute[235803]: 2025-10-02 12:18:36.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:18:36 np0005466031 nova_compute[235803]: 2025-10-02 12:18:36.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:18:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:37.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:37 np0005466031 podman[251796]: 2025-10-02 12:18:37.622532826 +0000 UTC m=+0.052294463 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid)
Oct  2 08:18:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:37.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:37 np0005466031 nova_compute[235803]: 2025-10-02 12:18:37.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:37 np0005466031 nova_compute[235803]: 2025-10-02 12:18:37.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:37 np0005466031 nova_compute[235803]: 2025-10-02 12:18:37.756 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:18:37 np0005466031 nova_compute[235803]: 2025-10-02 12:18:37.757 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:18:37 np0005466031 nova_compute[235803]: 2025-10-02 12:18:37.757 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:18:37 np0005466031 nova_compute[235803]: 2025-10-02 12:18:37.757 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 93e1fa10-4ba0-4715-be09-0d7dae7a5484 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:39.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:39.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:40 np0005466031 nova_compute[235803]: 2025-10-02 12:18:40.155 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Updating instance_info_cache with network_info: [{"id": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "address": "fa:16:3e:1b:5c:a4", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf73e5355-2f", "ovs_interfaceid": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:40 np0005466031 nova_compute[235803]: 2025-10-02 12:18:40.225 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-93e1fa10-4ba0-4715-be09-0d7dae7a5484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:18:40 np0005466031 nova_compute[235803]: 2025-10-02 12:18:40.226 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:18:40 np0005466031 nova_compute[235803]: 2025-10-02 12:18:40.226 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:40 np0005466031 nova_compute[235803]: 2025-10-02 12:18:40.227 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:40 np0005466031 nova_compute[235803]: 2025-10-02 12:18:40.362 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:40 np0005466031 nova_compute[235803]: 2025-10-02 12:18:40.363 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:40 np0005466031 nova_compute[235803]: 2025-10-02 12:18:40.363 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:40 np0005466031 nova_compute[235803]: 2025-10-02 12:18:40.364 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:18:40 np0005466031 nova_compute[235803]: 2025-10-02 12:18:40.364 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:18:40 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1177830465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:18:40 np0005466031 nova_compute[235803]: 2025-10-02 12:18:40.817 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:41 np0005466031 nova_compute[235803]: 2025-10-02 12:18:41.056 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:18:41 np0005466031 nova_compute[235803]: 2025-10-02 12:18:41.057 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:18:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:41.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:41 np0005466031 nova_compute[235803]: 2025-10-02 12:18:41.222 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:18:41 np0005466031 nova_compute[235803]: 2025-10-02 12:18:41.224 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4625MB free_disk=20.907501220703125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:18:41 np0005466031 nova_compute[235803]: 2025-10-02 12:18:41.224 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:41 np0005466031 nova_compute[235803]: 2025-10-02 12:18:41.224 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:41 np0005466031 nova_compute[235803]: 2025-10-02 12:18:41.362 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 93e1fa10-4ba0-4715-be09-0d7dae7a5484 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:18:41 np0005466031 nova_compute[235803]: 2025-10-02 12:18:41.363 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:18:41 np0005466031 nova_compute[235803]: 2025-10-02 12:18:41.363 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:18:41 np0005466031 nova_compute[235803]: 2025-10-02 12:18:41.394 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:41.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:18:41 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3657979213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:18:41 np0005466031 nova_compute[235803]: 2025-10-02 12:18:41.827 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:41 np0005466031 nova_compute[235803]: 2025-10-02 12:18:41.834 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:18:41 np0005466031 nova_compute[235803]: 2025-10-02 12:18:41.931 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:18:42 np0005466031 nova_compute[235803]: 2025-10-02 12:18:42.113 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:18:42 np0005466031 nova_compute[235803]: 2025-10-02 12:18:42.113 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:42 np0005466031 nova_compute[235803]: 2025-10-02 12:18:42.523 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:42 np0005466031 nova_compute[235803]: 2025-10-02 12:18:42.523 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:42 np0005466031 nova_compute[235803]: 2025-10-02 12:18:42.524 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:18:42 np0005466031 nova_compute[235803]: 2025-10-02 12:18:42.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:42 np0005466031 nova_compute[235803]: 2025-10-02 12:18:42.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:43.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:43.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:45.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:45.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:47.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:47.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:47 np0005466031 nova_compute[235803]: 2025-10-02 12:18:47.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:47 np0005466031 nova_compute[235803]: 2025-10-02 12:18:47.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:49.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:49.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:51.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:51.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:52 np0005466031 nova_compute[235803]: 2025-10-02 12:18:52.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:18:52 np0005466031 nova_compute[235803]: 2025-10-02 12:18:52.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:18:52 np0005466031 nova_compute[235803]: 2025-10-02 12:18:52.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 08:18:52 np0005466031 nova_compute[235803]: 2025-10-02 12:18:52.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:18:52 np0005466031 nova_compute[235803]: 2025-10-02 12:18:52.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:52 np0005466031 nova_compute[235803]: 2025-10-02 12:18:52.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:18:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:18:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:53.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:18:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:53.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:54 np0005466031 podman[251920]: 2025-10-02 12:18:54.633427848 +0000 UTC m=+0.065368879 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 08:18:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:54 np0005466031 podman[251921]: 2025-10-02 12:18:54.686380117 +0000 UTC m=+0.113508899 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 08:18:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:55.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:55.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:57.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:57.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:57 np0005466031 nova_compute[235803]: 2025-10-02 12:18:57.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:18:58 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:18:58 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:18:58 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:18:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:59.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:18:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:59.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:01.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:01.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:02 np0005466031 podman[252101]: 2025-10-02 12:19:02.637341426 +0000 UTC m=+0.066415029 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:19:02 np0005466031 nova_compute[235803]: 2025-10-02 12:19:02.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:19:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:03.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:03.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:05.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:05.716 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:19:05 np0005466031 nova_compute[235803]: 2025-10-02 12:19:05.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:05.718 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:19:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:05.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:07.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:07.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:07 np0005466031 nova_compute[235803]: 2025-10-02 12:19:07.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:08 np0005466031 podman[252125]: 2025-10-02 12:19:08.632732867 +0000 UTC m=+0.055832033 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:19:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:09.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:09.720 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:09.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:11.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:11.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:12 np0005466031 nova_compute[235803]: 2025-10-02 12:19:12.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:13.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:19:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:19:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:13.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:14 np0005466031 nova_compute[235803]: 2025-10-02 12:19:14.493 2 DEBUG oslo_concurrency.lockutils [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:14 np0005466031 nova_compute[235803]: 2025-10-02 12:19:14.494 2 DEBUG oslo_concurrency.lockutils [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:14 np0005466031 nova_compute[235803]: 2025-10-02 12:19:14.494 2 DEBUG oslo_concurrency.lockutils [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:14 np0005466031 nova_compute[235803]: 2025-10-02 12:19:14.494 2 DEBUG oslo_concurrency.lockutils [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:14 np0005466031 nova_compute[235803]: 2025-10-02 12:19:14.495 2 DEBUG oslo_concurrency.lockutils [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:14 np0005466031 nova_compute[235803]: 2025-10-02 12:19:14.496 2 INFO nova.compute.manager [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Terminating instance#033[00m
Oct  2 08:19:14 np0005466031 nova_compute[235803]: 2025-10-02 12:19:14.497 2 DEBUG nova.compute.manager [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:19:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:14 np0005466031 kernel: tapf73e5355-2f (unregistering): left promiscuous mode
Oct  2 08:19:14 np0005466031 NetworkManager[44907]: <info>  [1759407554.7368] device (tapf73e5355-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:19:14 np0005466031 nova_compute[235803]: 2025-10-02 12:19:14.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:19:14Z|00097|binding|INFO|Releasing lport f73e5355-2f7c-48f8-bc9f-fd14478616c3 from this chassis (sb_readonly=0)
Oct  2 08:19:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:19:14Z|00098|binding|INFO|Setting lport f73e5355-2f7c-48f8-bc9f-fd14478616c3 down in Southbound
Oct  2 08:19:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:19:14Z|00099|binding|INFO|Removing iface tapf73e5355-2f ovn-installed in OVS
Oct  2 08:19:14 np0005466031 nova_compute[235803]: 2025-10-02 12:19:14.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:14 np0005466031 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000021.scope: Deactivated successfully.
Oct  2 08:19:14 np0005466031 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000021.scope: Consumed 15.740s CPU time.
Oct  2 08:19:14 np0005466031 systemd-machined[192227]: Machine qemu-14-instance-00000021 terminated.
Oct  2 08:19:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:14.866 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:5c:a4 10.100.0.10'], port_security=['fa:16:3e:1b:5c:a4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '93e1fa10-4ba0-4715-be09-0d7dae7a5484', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d 4faf2e2c-2b67-431c-8294-7b01e684b9fb a17915dc-9c95-483e-affe-c7c02d284e11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=f73e5355-2f7c-48f8-bc9f-fd14478616c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:19:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:14.868 141898 INFO neutron.agent.ovn.metadata.agent [-] Port f73e5355-2f7c-48f8-bc9f-fd14478616c3 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f unbound from our chassis#033[00m
Oct  2 08:19:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:14.869 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cec9cbfc-5dec-4f85-90c5-6104a054547f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:19:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:14.872 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[274d321c-d47a-4b1a-a4e7-ba3c77ad7485]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:14.872 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f namespace which is not needed anymore#033[00m
Oct  2 08:19:14 np0005466031 kernel: tapf73e5355-2f: entered promiscuous mode
Oct  2 08:19:14 np0005466031 NetworkManager[44907]: <info>  [1759407554.9165] manager: (tapf73e5355-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Oct  2 08:19:14 np0005466031 systemd-udevd[252249]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:19:14 np0005466031 kernel: tapf73e5355-2f (unregistering): left promiscuous mode
Oct  2 08:19:14 np0005466031 nova_compute[235803]: 2025-10-02 12:19:14.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:14 np0005466031 nova_compute[235803]: 2025-10-02 12:19:14.937 2 INFO nova.virt.libvirt.driver [-] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Instance destroyed successfully.#033[00m
Oct  2 08:19:14 np0005466031 nova_compute[235803]: 2025-10-02 12:19:14.938 2 DEBUG nova.objects.instance [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'resources' on Instance uuid 93e1fa10-4ba0-4715-be09-0d7dae7a5484 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:15 np0005466031 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[251377]: [NOTICE]   (251381) : haproxy version is 2.8.14-c23fe91
Oct  2 08:19:15 np0005466031 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[251377]: [NOTICE]   (251381) : path to executable is /usr/sbin/haproxy
Oct  2 08:19:15 np0005466031 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[251377]: [ALERT]    (251381) : Current worker (251383) exited with code 143 (Terminated)
Oct  2 08:19:15 np0005466031 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[251377]: [WARNING]  (251381) : All workers exited. Exiting... (0)
Oct  2 08:19:15 np0005466031 systemd[1]: libpod-c40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93.scope: Deactivated successfully.
Oct  2 08:19:15 np0005466031 podman[252274]: 2025-10-02 12:19:15.048215813 +0000 UTC m=+0.075594964 container died c40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:19:15 np0005466031 systemd[1]: var-lib-containers-storage-overlay-aca4360ebdef5f16cc1f44eb20609dddcad5988be530b66ffe6703037187dff8-merged.mount: Deactivated successfully.
Oct  2 08:19:15 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93-userdata-shm.mount: Deactivated successfully.
Oct  2 08:19:15 np0005466031 podman[252274]: 2025-10-02 12:19:15.089754373 +0000 UTC m=+0.117133524 container cleanup c40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:19:15 np0005466031 systemd[1]: libpod-conmon-c40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93.scope: Deactivated successfully.
Oct  2 08:19:15 np0005466031 podman[252305]: 2025-10-02 12:19:15.150758455 +0000 UTC m=+0.040690296 container remove c40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:19:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:15.158 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e39347fd-6ddd-41da-b80b-f3f2cfc9ecb7]: (4, ('Thu Oct  2 12:19:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f (c40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93)\nc40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93\nThu Oct  2 12:19:15 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f (c40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93)\nc40968f816aa496e3746d3c616ebefa91d885d36e1bf036318022ca0c5929e93\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:15.160 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ed899a6f-176f-4c02-a4e5-d4c8997a4a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:15.161 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcec9cbfc-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:15 np0005466031 kernel: tapcec9cbfc-50: left promiscuous mode
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:15.188 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8923e088-35e7-4872-ba6d-a3da4cb4d45b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:15.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:15.216 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2ad0c7-a8bc-48c5-a37a-60e41f1ffd36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:15.218 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fa80579c-89c7-494e-987c-5ce746171dca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:15.234 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fb50123f-9d98-4eda-9f40-0113e5c3ac66]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533206, 'reachable_time': 28767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252324, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:15.236 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:19:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:15.237 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[63753b13-6e1a-4e35-b9bb-67f676eedf2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:15 np0005466031 systemd[1]: run-netns-ovnmeta\x2dcec9cbfc\x2d5dec\x2d4f85\x2d90c5\x2d6104a054547f.mount: Deactivated successfully.
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.238 2 DEBUG nova.virt.libvirt.vif [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:17:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-309652729',display_name='tempest-SecurityGroupsTestJSON-server-309652729',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-309652729',id=33,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-2vmynmxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTestJSON-1241678427-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:57Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=93e1fa10-4ba0-4715-be09-0d7dae7a5484,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "address": "fa:16:3e:1b:5c:a4", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf73e5355-2f", "ovs_interfaceid": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.239 2 DEBUG nova.network.os_vif_util [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "address": "fa:16:3e:1b:5c:a4", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf73e5355-2f", "ovs_interfaceid": "f73e5355-2f7c-48f8-bc9f-fd14478616c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.240 2 DEBUG nova.network.os_vif_util [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1b:5c:a4,bridge_name='br-int',has_traffic_filtering=True,id=f73e5355-2f7c-48f8-bc9f-fd14478616c3,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf73e5355-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.240 2 DEBUG os_vif [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:5c:a4,bridge_name='br-int',has_traffic_filtering=True,id=f73e5355-2f7c-48f8-bc9f-fd14478616c3,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf73e5355-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.242 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf73e5355-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.249 2 INFO os_vif [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:5c:a4,bridge_name='br-int',has_traffic_filtering=True,id=f73e5355-2f7c-48f8-bc9f-fd14478616c3,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf73e5355-2f')#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.745 2 DEBUG nova.compute.manager [req-b2ea13a1-4488-4812-a526-3b6d2a1771c0 req-b4e08c32-205c-4996-84bb-22beb9d2ca22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Received event network-vif-unplugged-f73e5355-2f7c-48f8-bc9f-fd14478616c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.746 2 DEBUG oslo_concurrency.lockutils [req-b2ea13a1-4488-4812-a526-3b6d2a1771c0 req-b4e08c32-205c-4996-84bb-22beb9d2ca22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.747 2 DEBUG oslo_concurrency.lockutils [req-b2ea13a1-4488-4812-a526-3b6d2a1771c0 req-b4e08c32-205c-4996-84bb-22beb9d2ca22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.747 2 DEBUG oslo_concurrency.lockutils [req-b2ea13a1-4488-4812-a526-3b6d2a1771c0 req-b4e08c32-205c-4996-84bb-22beb9d2ca22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.747 2 DEBUG nova.compute.manager [req-b2ea13a1-4488-4812-a526-3b6d2a1771c0 req-b4e08c32-205c-4996-84bb-22beb9d2ca22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] No waiting events found dispatching network-vif-unplugged-f73e5355-2f7c-48f8-bc9f-fd14478616c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:19:15 np0005466031 nova_compute[235803]: 2025-10-02 12:19:15.747 2 DEBUG nova.compute.manager [req-b2ea13a1-4488-4812-a526-3b6d2a1771c0 req-b4e08c32-205c-4996-84bb-22beb9d2ca22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Received event network-vif-unplugged-f73e5355-2f7c-48f8-bc9f-fd14478616c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:19:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:15.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:16 np0005466031 nova_compute[235803]: 2025-10-02 12:19:16.762 2 INFO nova.virt.libvirt.driver [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Deleting instance files /var/lib/nova/instances/93e1fa10-4ba0-4715-be09-0d7dae7a5484_del#033[00m
Oct  2 08:19:16 np0005466031 nova_compute[235803]: 2025-10-02 12:19:16.762 2 INFO nova.virt.libvirt.driver [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Deletion of /var/lib/nova/instances/93e1fa10-4ba0-4715-be09-0d7dae7a5484_del complete#033[00m
Oct  2 08:19:17 np0005466031 nova_compute[235803]: 2025-10-02 12:19:17.201 2 INFO nova.compute.manager [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Took 2.70 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:19:17 np0005466031 nova_compute[235803]: 2025-10-02 12:19:17.202 2 DEBUG oslo.service.loopingcall [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:19:17 np0005466031 nova_compute[235803]: 2025-10-02 12:19:17.202 2 DEBUG nova.compute.manager [-] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:19:17 np0005466031 nova_compute[235803]: 2025-10-02 12:19:17.202 2 DEBUG nova.network.neutron [-] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:19:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:17.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:17.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:17 np0005466031 nova_compute[235803]: 2025-10-02 12:19:17.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:18 np0005466031 nova_compute[235803]: 2025-10-02 12:19:18.086 2 DEBUG nova.compute.manager [req-e12e9088-36fd-4f85-baa4-44e5a8077497 req-53948d35-f7af-4464-804d-051ca93cdfe6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Received event network-vif-plugged-f73e5355-2f7c-48f8-bc9f-fd14478616c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:18 np0005466031 nova_compute[235803]: 2025-10-02 12:19:18.086 2 DEBUG oslo_concurrency.lockutils [req-e12e9088-36fd-4f85-baa4-44e5a8077497 req-53948d35-f7af-4464-804d-051ca93cdfe6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:18 np0005466031 nova_compute[235803]: 2025-10-02 12:19:18.086 2 DEBUG oslo_concurrency.lockutils [req-e12e9088-36fd-4f85-baa4-44e5a8077497 req-53948d35-f7af-4464-804d-051ca93cdfe6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:18 np0005466031 nova_compute[235803]: 2025-10-02 12:19:18.086 2 DEBUG oslo_concurrency.lockutils [req-e12e9088-36fd-4f85-baa4-44e5a8077497 req-53948d35-f7af-4464-804d-051ca93cdfe6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:18 np0005466031 nova_compute[235803]: 2025-10-02 12:19:18.086 2 DEBUG nova.compute.manager [req-e12e9088-36fd-4f85-baa4-44e5a8077497 req-53948d35-f7af-4464-804d-051ca93cdfe6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] No waiting events found dispatching network-vif-plugged-f73e5355-2f7c-48f8-bc9f-fd14478616c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:19:18 np0005466031 nova_compute[235803]: 2025-10-02 12:19:18.087 2 WARNING nova.compute.manager [req-e12e9088-36fd-4f85-baa4-44e5a8077497 req-53948d35-f7af-4464-804d-051ca93cdfe6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Received unexpected event network-vif-plugged-f73e5355-2f7c-48f8-bc9f-fd14478616c3 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:19:18 np0005466031 nova_compute[235803]: 2025-10-02 12:19:18.970 2 DEBUG nova.network.neutron [-] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:19:19 np0005466031 nova_compute[235803]: 2025-10-02 12:19:19.014 2 INFO nova.compute.manager [-] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Took 1.81 seconds to deallocate network for instance.#033[00m
Oct  2 08:19:19 np0005466031 nova_compute[235803]: 2025-10-02 12:19:19.135 2 DEBUG oslo_concurrency.lockutils [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:19 np0005466031 nova_compute[235803]: 2025-10-02 12:19:19.136 2 DEBUG oslo_concurrency.lockutils [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:19 np0005466031 nova_compute[235803]: 2025-10-02 12:19:19.192 2 DEBUG oslo_concurrency.processutils [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:19.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2501839545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:19 np0005466031 nova_compute[235803]: 2025-10-02 12:19:19.650 2 DEBUG oslo_concurrency.processutils [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:19 np0005466031 nova_compute[235803]: 2025-10-02 12:19:19.658 2 DEBUG nova.compute.provider_tree [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:19:19 np0005466031 nova_compute[235803]: 2025-10-02 12:19:19.703 2 DEBUG nova.scheduler.client.report [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:19:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:19.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:19 np0005466031 nova_compute[235803]: 2025-10-02 12:19:19.970 2 DEBUG oslo_concurrency.lockutils [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:20 np0005466031 nova_compute[235803]: 2025-10-02 12:19:20.092 2 INFO nova.scheduler.client.report [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Deleted allocations for instance 93e1fa10-4ba0-4715-be09-0d7dae7a5484#033[00m
Oct  2 08:19:20 np0005466031 nova_compute[235803]: 2025-10-02 12:19:20.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:21.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:21 np0005466031 nova_compute[235803]: 2025-10-02 12:19:21.545 2 DEBUG oslo_concurrency.lockutils [None req-6f515230-0fda-41c2-9f84-8802daf4fc2d 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "93e1fa10-4ba0-4715-be09-0d7dae7a5484" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:21.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:22 np0005466031 nova_compute[235803]: 2025-10-02 12:19:22.235 2 DEBUG nova.compute.manager [req-01a509fa-2724-4783-9365-aa1773f74145 req-191c1c76-420e-45fe-b617-082ad3a05091 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Received event network-vif-deleted-f73e5355-2f7c-48f8-bc9f-fd14478616c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:22 np0005466031 nova_compute[235803]: 2025-10-02 12:19:22.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:23.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:23.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:25.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:25 np0005466031 nova_compute[235803]: 2025-10-02 12:19:25.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:25 np0005466031 podman[252371]: 2025-10-02 12:19:25.619439144 +0000 UTC m=+0.049422359 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:19:25 np0005466031 podman[252372]: 2025-10-02 12:19:25.671815297 +0000 UTC m=+0.091575797 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  2 08:19:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:25.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:25.826 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:25.827 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:25.827 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:27.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:27.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:27 np0005466031 nova_compute[235803]: 2025-10-02 12:19:27.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:29.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:29.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:29 np0005466031 nova_compute[235803]: 2025-10-02 12:19:29.936 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407554.9342763, 93e1fa10-4ba0-4715-be09-0d7dae7a5484 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:29 np0005466031 nova_compute[235803]: 2025-10-02 12:19:29.937 2 INFO nova.compute.manager [-] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:19:30 np0005466031 nova_compute[235803]: 2025-10-02 12:19:30.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:30 np0005466031 nova_compute[235803]: 2025-10-02 12:19:30.315 2 DEBUG nova.compute.manager [None req-a2b803db-32f1-4649-82c8-9bf722974556 - - - - - -] [instance: 93e1fa10-4ba0-4715-be09-0d7dae7a5484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:31.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:31.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:32 np0005466031 nova_compute[235803]: 2025-10-02 12:19:32.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:33.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:33 np0005466031 podman[252472]: 2025-10-02 12:19:33.633496053 +0000 UTC m=+0.060636323 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  2 08:19:33 np0005466031 nova_compute[235803]: 2025-10-02 12:19:33.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:33.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:34 np0005466031 nova_compute[235803]: 2025-10-02 12:19:34.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:34 np0005466031 nova_compute[235803]: 2025-10-02 12:19:34.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:34 np0005466031 nova_compute[235803]: 2025-10-02 12:19:34.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:34 np0005466031 nova_compute[235803]: 2025-10-02 12:19:34.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:35.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:35 np0005466031 nova_compute[235803]: 2025-10-02 12:19:35.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:35.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:37.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:37 np0005466031 nova_compute[235803]: 2025-10-02 12:19:37.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:37 np0005466031 nova_compute[235803]: 2025-10-02 12:19:37.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:19:37 np0005466031 nova_compute[235803]: 2025-10-02 12:19:37.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:19:37 np0005466031 nova_compute[235803]: 2025-10-02 12:19:37.774 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:19:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:37.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:37 np0005466031 nova_compute[235803]: 2025-10-02 12:19:37.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:38 np0005466031 nova_compute[235803]: 2025-10-02 12:19:38.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:38 np0005466031 nova_compute[235803]: 2025-10-02 12:19:38.676 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:38 np0005466031 nova_compute[235803]: 2025-10-02 12:19:38.676 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:38 np0005466031 nova_compute[235803]: 2025-10-02 12:19:38.677 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:38 np0005466031 nova_compute[235803]: 2025-10-02 12:19:38.677 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:19:38 np0005466031 nova_compute[235803]: 2025-10-02 12:19:38.677 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1347628948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:39 np0005466031 nova_compute[235803]: 2025-10-02 12:19:39.095 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:39 np0005466031 podman[252519]: 2025-10-02 12:19:39.206846355 +0000 UTC m=+0.073555666 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:19:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:39.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:39 np0005466031 nova_compute[235803]: 2025-10-02 12:19:39.284 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:19:39 np0005466031 nova_compute[235803]: 2025-10-02 12:19:39.286 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4817MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:19:39 np0005466031 nova_compute[235803]: 2025-10-02 12:19:39.286 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:39 np0005466031 nova_compute[235803]: 2025-10-02 12:19:39.287 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:39 np0005466031 nova_compute[235803]: 2025-10-02 12:19:39.466 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:19:39 np0005466031 nova_compute[235803]: 2025-10-02 12:19:39.467 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:19:39 np0005466031 nova_compute[235803]: 2025-10-02 12:19:39.487 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:19:39 np0005466031 nova_compute[235803]: 2025-10-02 12:19:39.506 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:19:39 np0005466031 nova_compute[235803]: 2025-10-02 12:19:39.506 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:19:39 np0005466031 nova_compute[235803]: 2025-10-02 12:19:39.527 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:19:39 np0005466031 nova_compute[235803]: 2025-10-02 12:19:39.570 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:19:39 np0005466031 nova_compute[235803]: 2025-10-02 12:19:39.616 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:39.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:40 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/528161419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:40 np0005466031 nova_compute[235803]: 2025-10-02 12:19:40.019 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:40 np0005466031 nova_compute[235803]: 2025-10-02 12:19:40.027 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:19:40 np0005466031 nova_compute[235803]: 2025-10-02 12:19:40.078 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:19:40 np0005466031 nova_compute[235803]: 2025-10-02 12:19:40.132 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:19:40 np0005466031 nova_compute[235803]: 2025-10-02 12:19:40.133 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:40 np0005466031 nova_compute[235803]: 2025-10-02 12:19:40.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:41 np0005466031 nova_compute[235803]: 2025-10-02 12:19:41.133 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:41.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:41 np0005466031 nova_compute[235803]: 2025-10-02 12:19:41.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:41 np0005466031 nova_compute[235803]: 2025-10-02 12:19:41.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:19:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:41.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:42 np0005466031 nova_compute[235803]: 2025-10-02 12:19:42.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:42 np0005466031 nova_compute[235803]: 2025-10-02 12:19:42.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:43.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:43 np0005466031 nova_compute[235803]: 2025-10-02 12:19:43.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:43.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:45.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:45 np0005466031 nova_compute[235803]: 2025-10-02 12:19:45.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:45.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:47.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:47.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:47 np0005466031 nova_compute[235803]: 2025-10-02 12:19:47.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:49.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:49.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:50 np0005466031 nova_compute[235803]: 2025-10-02 12:19:50.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:51.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:51 np0005466031 nova_compute[235803]: 2025-10-02 12:19:51.640 2 DEBUG oslo_concurrency.lockutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Acquiring lock "e7c655a4-0c56-4d90-a0c4-8730a0baf3cb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:51 np0005466031 nova_compute[235803]: 2025-10-02 12:19:51.640 2 DEBUG oslo_concurrency.lockutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "e7c655a4-0c56-4d90-a0c4-8730a0baf3cb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:51 np0005466031 nova_compute[235803]: 2025-10-02 12:19:51.657 2 DEBUG nova.compute.manager [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:19:51 np0005466031 nova_compute[235803]: 2025-10-02 12:19:51.744 2 DEBUG oslo_concurrency.lockutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:51 np0005466031 nova_compute[235803]: 2025-10-02 12:19:51.744 2 DEBUG oslo_concurrency.lockutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:51 np0005466031 nova_compute[235803]: 2025-10-02 12:19:51.759 2 DEBUG nova.virt.hardware [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:19:51 np0005466031 nova_compute[235803]: 2025-10-02 12:19:51.760 2 INFO nova.compute.claims [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:19:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:51.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:51 np0005466031 nova_compute[235803]: 2025-10-02 12:19:51.847 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2702662155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.297 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.305 2 DEBUG nova.compute.provider_tree [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.319 2 DEBUG nova.scheduler.client.report [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.346 2 DEBUG oslo_concurrency.lockutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.347 2 DEBUG nova.compute.manager [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.400 2 DEBUG nova.compute.manager [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.401 2 DEBUG nova.network.neutron [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.424 2 INFO nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.442 2 DEBUG nova.compute.manager [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.584 2 DEBUG nova.compute.manager [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.586 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.587 2 INFO nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Creating image(s)#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.617 2 DEBUG nova.storage.rbd_utils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] rbd image e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.652 2 DEBUG nova.storage.rbd_utils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] rbd image e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.686 2 DEBUG nova.storage.rbd_utils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] rbd image e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.690 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.766 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.767 2 DEBUG oslo_concurrency.lockutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.768 2 DEBUG oslo_concurrency.lockutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.768 2 DEBUG oslo_concurrency.lockutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.793 2 DEBUG nova.storage.rbd_utils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] rbd image e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.796 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:52 np0005466031 nova_compute[235803]: 2025-10-02 12:19:52.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.024 2 DEBUG nova.network.neutron [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.025 2 DEBUG nova.compute.manager [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:19:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:53.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.498 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.703s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.569 2 DEBUG nova.storage.rbd_utils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] resizing rbd image e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.672 2 DEBUG nova.objects.instance [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lazy-loading 'migration_context' on Instance uuid e7c655a4-0c56-4d90-a0c4-8730a0baf3cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.696 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.696 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Ensure instance console log exists: /var/lib/nova/instances/e7c655a4-0c56-4d90-a0c4-8730a0baf3cb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.697 2 DEBUG oslo_concurrency.lockutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.697 2 DEBUG oslo_concurrency.lockutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.697 2 DEBUG oslo_concurrency.lockutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.698 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.702 2 WARNING nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.709 2 DEBUG nova.virt.libvirt.host [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.709 2 DEBUG nova.virt.libvirt.host [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.713 2 DEBUG nova.virt.libvirt.host [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.713 2 DEBUG nova.virt.libvirt.host [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.714 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.714 2 DEBUG nova.virt.hardware [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.715 2 DEBUG nova.virt.hardware [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.715 2 DEBUG nova.virt.hardware [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.715 2 DEBUG nova.virt.hardware [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.715 2 DEBUG nova.virt.hardware [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.715 2 DEBUG nova.virt.hardware [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.715 2 DEBUG nova.virt.hardware [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.716 2 DEBUG nova.virt.hardware [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.716 2 DEBUG nova.virt.hardware [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.716 2 DEBUG nova.virt.hardware [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.716 2 DEBUG nova.virt.hardware [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:19:53 np0005466031 nova_compute[235803]: 2025-10-02 12:19:53.718 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:53.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:19:54 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/818748125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:19:54 np0005466031 nova_compute[235803]: 2025-10-02 12:19:54.176 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:54 np0005466031 nova_compute[235803]: 2025-10-02 12:19:54.208 2 DEBUG nova.storage.rbd_utils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] rbd image e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:54 np0005466031 nova_compute[235803]: 2025-10-02 12:19:54.212 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:54.346 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:19:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:19:54.347 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:19:54 np0005466031 nova_compute[235803]: 2025-10-02 12:19:54.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:19:54 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/442017125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:19:54 np0005466031 nova_compute[235803]: 2025-10-02 12:19:54.618 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:54 np0005466031 nova_compute[235803]: 2025-10-02 12:19:54.621 2 DEBUG nova.objects.instance [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lazy-loading 'pci_devices' on Instance uuid e7c655a4-0c56-4d90-a0c4-8730a0baf3cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:54 np0005466031 nova_compute[235803]: 2025-10-02 12:19:54.646 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  <uuid>e7c655a4-0c56-4d90-a0c4-8730a0baf3cb</uuid>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  <name>instance-00000028</name>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerExternalEventsTest-server-1846840168</nova:name>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:19:53</nova:creationTime>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <nova:user uuid="f6bb8780a2674a139c7dce031c7afa1b">tempest-ServerExternalEventsTest-1230692377-project-member</nova:user>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <nova:project uuid="c6a80a62ef3945adb3a158602488057f">tempest-ServerExternalEventsTest-1230692377</nova:project>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <nova:ports/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <entry name="serial">e7c655a4-0c56-4d90-a0c4-8730a0baf3cb</entry>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <entry name="uuid">e7c655a4-0c56-4d90-a0c4-8730a0baf3cb</entry>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk.config">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/e7c655a4-0c56-4d90-a0c4-8730a0baf3cb/console.log" append="off"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:19:54 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:19:54 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:19:54 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:19:54 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:19:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:54 np0005466031 nova_compute[235803]: 2025-10-02 12:19:54.713 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:19:54 np0005466031 nova_compute[235803]: 2025-10-02 12:19:54.713 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:19:54 np0005466031 nova_compute[235803]: 2025-10-02 12:19:54.714 2 INFO nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Using config drive#033[00m
Oct  2 08:19:54 np0005466031 nova_compute[235803]: 2025-10-02 12:19:54.737 2 DEBUG nova.storage.rbd_utils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] rbd image e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:54 np0005466031 nova_compute[235803]: 2025-10-02 12:19:54.884 2 INFO nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Creating config drive at /var/lib/nova/instances/e7c655a4-0c56-4d90-a0c4-8730a0baf3cb/disk.config#033[00m
Oct  2 08:19:54 np0005466031 nova_compute[235803]: 2025-10-02 12:19:54.891 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7c655a4-0c56-4d90-a0c4-8730a0baf3cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gn3a2zy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:55 np0005466031 nova_compute[235803]: 2025-10-02 12:19:55.017 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7c655a4-0c56-4d90-a0c4-8730a0baf3cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gn3a2zy" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:55 np0005466031 nova_compute[235803]: 2025-10-02 12:19:55.044 2 DEBUG nova.storage.rbd_utils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] rbd image e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:55 np0005466031 nova_compute[235803]: 2025-10-02 12:19:55.047 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7c655a4-0c56-4d90-a0c4-8730a0baf3cb/disk.config e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:55 np0005466031 nova_compute[235803]: 2025-10-02 12:19:55.188 2 DEBUG oslo_concurrency.processutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7c655a4-0c56-4d90-a0c4-8730a0baf3cb/disk.config e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:55 np0005466031 nova_compute[235803]: 2025-10-02 12:19:55.189 2 INFO nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Deleting local config drive /var/lib/nova/instances/e7c655a4-0c56-4d90-a0c4-8730a0baf3cb/disk.config because it was imported into RBD.#033[00m
Oct  2 08:19:55 np0005466031 systemd-machined[192227]: New machine qemu-15-instance-00000028.
Oct  2 08:19:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:55.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:55 np0005466031 systemd[1]: Started Virtual Machine qemu-15-instance-00000028.
Oct  2 08:19:55 np0005466031 nova_compute[235803]: 2025-10-02 12:19:55.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:55.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.026 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407596.0260878, e7c655a4-0c56-4d90-a0c4-8730a0baf3cb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.027 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.033 2 DEBUG nova.compute.manager [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.034 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.038 2 INFO nova.virt.libvirt.driver [-] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Instance spawned successfully.#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.039 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.055 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.060 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.067 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.067 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.068 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.068 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.068 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.069 2 DEBUG nova.virt.libvirt.driver [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.090 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.090 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407596.0315738, e7c655a4-0c56-4d90-a0c4-8730a0baf3cb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.090 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] VM Started (Lifecycle Event)#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.116 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.119 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.129 2 INFO nova.compute.manager [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Took 3.54 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.129 2 DEBUG nova.compute.manager [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.138 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.180 2 INFO nova.compute.manager [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Took 4.47 seconds to build instance.#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.197 2 DEBUG oslo_concurrency.lockutils [None req-54c5a142-d2e3-4590-95d9-5f0fe2f1a58e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "e7c655a4-0c56-4d90-a0c4-8730a0baf3cb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:56 np0005466031 podman[252986]: 2025-10-02 12:19:56.637866811 +0000 UTC m=+0.058983374 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:19:56 np0005466031 podman[252987]: 2025-10-02 12:19:56.67277301 +0000 UTC m=+0.093973166 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.822 2 DEBUG nova.compute.manager [None req-f216eb26-b08f-47bd-933f-c61cf97bc647 82092ade274e49db9a10c3b7cadc6c3f 3ff61c12dace43fa9eaf2013407b73d9 - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.823 2 DEBUG nova.compute.manager [None req-f216eb26-b08f-47bd-933f-c61cf97bc647 82092ade274e49db9a10c3b7cadc6c3f 3ff61c12dace43fa9eaf2013407b73d9 - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.823 2 DEBUG oslo_concurrency.lockutils [None req-f216eb26-b08f-47bd-933f-c61cf97bc647 82092ade274e49db9a10c3b7cadc6c3f 3ff61c12dace43fa9eaf2013407b73d9 - - default default] Acquiring lock "refresh_cache-e7c655a4-0c56-4d90-a0c4-8730a0baf3cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.823 2 DEBUG oslo_concurrency.lockutils [None req-f216eb26-b08f-47bd-933f-c61cf97bc647 82092ade274e49db9a10c3b7cadc6c3f 3ff61c12dace43fa9eaf2013407b73d9 - - default default] Acquired lock "refresh_cache-e7c655a4-0c56-4d90-a0c4-8730a0baf3cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:19:56 np0005466031 nova_compute[235803]: 2025-10-02 12:19:56.823 2 DEBUG nova.network.neutron [None req-f216eb26-b08f-47bd-933f-c61cf97bc647 82092ade274e49db9a10c3b7cadc6c3f 3ff61c12dace43fa9eaf2013407b73d9 - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.053 2 DEBUG nova.network.neutron [None req-f216eb26-b08f-47bd-933f-c61cf97bc647 82092ade274e49db9a10c3b7cadc6c3f 3ff61c12dace43fa9eaf2013407b73d9 - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:19:57 np0005466031 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.078 2 DEBUG oslo_concurrency.lockutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Acquiring lock "e7c655a4-0c56-4d90-a0c4-8730a0baf3cb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.079 2 DEBUG oslo_concurrency.lockutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "e7c655a4-0c56-4d90-a0c4-8730a0baf3cb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.079 2 DEBUG oslo_concurrency.lockutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Acquiring lock "e7c655a4-0c56-4d90-a0c4-8730a0baf3cb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.080 2 DEBUG oslo_concurrency.lockutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "e7c655a4-0c56-4d90-a0c4-8730a0baf3cb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.080 2 DEBUG oslo_concurrency.lockutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "e7c655a4-0c56-4d90-a0c4-8730a0baf3cb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.081 2 INFO nova.compute.manager [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Terminating instance#033[00m
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.082 2 DEBUG oslo_concurrency.lockutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Acquiring lock "refresh_cache-e7c655a4-0c56-4d90-a0c4-8730a0baf3cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:19:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:19:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:57.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.478 2 DEBUG nova.network.neutron [None req-f216eb26-b08f-47bd-933f-c61cf97bc647 82092ade274e49db9a10c3b7cadc6c3f 3ff61c12dace43fa9eaf2013407b73d9 - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.500 2 DEBUG oslo_concurrency.lockutils [None req-f216eb26-b08f-47bd-933f-c61cf97bc647 82092ade274e49db9a10c3b7cadc6c3f 3ff61c12dace43fa9eaf2013407b73d9 - - default default] Releasing lock "refresh_cache-e7c655a4-0c56-4d90-a0c4-8730a0baf3cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.501 2 DEBUG oslo_concurrency.lockutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Acquired lock "refresh_cache-e7c655a4-0c56-4d90-a0c4-8730a0baf3cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.501 2 DEBUG nova.network.neutron [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.794 2 DEBUG nova.network.neutron [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:19:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:57.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:57 np0005466031 nova_compute[235803]: 2025-10-02 12:19:57.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:19:58 np0005466031 nova_compute[235803]: 2025-10-02 12:19:58.200 2 DEBUG nova.network.neutron [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:19:58 np0005466031 nova_compute[235803]: 2025-10-02 12:19:58.213 2 DEBUG oslo_concurrency.lockutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Releasing lock "refresh_cache-e7c655a4-0c56-4d90-a0c4-8730a0baf3cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:19:58 np0005466031 nova_compute[235803]: 2025-10-02 12:19:58.214 2 DEBUG nova.compute.manager [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:19:58 np0005466031 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000028.scope: Deactivated successfully.
Oct  2 08:19:58 np0005466031 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000028.scope: Consumed 3.001s CPU time.
Oct  2 08:19:58 np0005466031 systemd-machined[192227]: Machine qemu-15-instance-00000028 terminated.
Oct  2 08:19:58 np0005466031 nova_compute[235803]: 2025-10-02 12:19:58.433 2 INFO nova.virt.libvirt.driver [-] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Instance destroyed successfully.
Oct  2 08:19:58 np0005466031 nova_compute[235803]: 2025-10-02 12:19:58.435 2 DEBUG nova.objects.instance [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lazy-loading 'resources' on Instance uuid e7c655a4-0c56-4d90-a0c4-8730a0baf3cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.235 2 INFO nova.virt.libvirt.driver [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Deleting instance files /var/lib/nova/instances/e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_del
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.235 2 INFO nova.virt.libvirt.driver [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Deletion of /var/lib/nova/instances/e7c655a4-0c56-4d90-a0c4-8730a0baf3cb_del complete
Oct  2 08:19:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:19:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:59.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.297 2 INFO nova.compute.manager [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Took 1.08 seconds to destroy the instance on the hypervisor.
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.298 2 DEBUG oslo.service.loopingcall [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.298 2 DEBUG nova.compute.manager [-] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.299 2 DEBUG nova.network.neutron [-] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.423 2 DEBUG nova.network.neutron [-] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.438 2 DEBUG nova.network.neutron [-] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.451 2 INFO nova.compute.manager [-] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Took 0.15 seconds to deallocate network for instance.
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.507 2 DEBUG oslo_concurrency.lockutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.509 2 DEBUG oslo_concurrency.lockutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.560 2 DEBUG oslo_concurrency.processutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:19:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:19:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:59.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:59 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3795866088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.966 2 DEBUG oslo_concurrency.processutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:19:59 np0005466031 nova_compute[235803]: 2025-10-02 12:19:59.972 2 DEBUG nova.compute.provider_tree [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:20:00 np0005466031 nova_compute[235803]: 2025-10-02 12:20:00.166 2 DEBUG nova.scheduler.client.report [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:20:00 np0005466031 nova_compute[235803]: 2025-10-02 12:20:00.191 2 DEBUG oslo_concurrency.lockutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:20:00 np0005466031 nova_compute[235803]: 2025-10-02 12:20:00.221 2 INFO nova.scheduler.client.report [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Deleted allocations for instance e7c655a4-0c56-4d90-a0c4-8730a0baf3cb
Oct  2 08:20:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:20:00 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1508859862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:20:00 np0005466031 nova_compute[235803]: 2025-10-02 12:20:00.280 2 DEBUG oslo_concurrency.lockutils [None req-cebd0b6c-17a8-441a-90e5-c94d9dd6e04e f6bb8780a2674a139c7dce031c7afa1b c6a80a62ef3945adb3a158602488057f - - default default] Lock "e7c655a4-0c56-4d90-a0c4-8730a0baf3cb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:20:00 np0005466031 nova_compute[235803]: 2025-10-02 12:20:00.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:00 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 08:20:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:01.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:01.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:02 np0005466031 nova_compute[235803]: 2025-10-02 12:20:02.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:03.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:20:03.349 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:20:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:03.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:04 np0005466031 podman[253078]: 2025-10-02 12:20:04.681516813 +0000 UTC m=+0.103157521 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:20:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:20:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2792077436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:20:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:20:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2792077436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:20:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:05.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:05 np0005466031 nova_compute[235803]: 2025-10-02 12:20:05.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:05.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:07.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:07.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:07 np0005466031 nova_compute[235803]: 2025-10-02 12:20:07.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:09.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:09 np0005466031 podman[253100]: 2025-10-02 12:20:09.62561132 +0000 UTC m=+0.061585020 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid)
Oct  2 08:20:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:09.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:10 np0005466031 nova_compute[235803]: 2025-10-02 12:20:10.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:11.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:11.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:12 np0005466031 nova_compute[235803]: 2025-10-02 12:20:12.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:13.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:13 np0005466031 nova_compute[235803]: 2025-10-02 12:20:13.370 2 DEBUG oslo_concurrency.lockutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:20:13 np0005466031 nova_compute[235803]: 2025-10-02 12:20:13.371 2 DEBUG oslo_concurrency.lockutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:20:13 np0005466031 nova_compute[235803]: 2025-10-02 12:20:13.395 2 DEBUG nova.compute.manager [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:20:13 np0005466031 nova_compute[235803]: 2025-10-02 12:20:13.432 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407598.4306698, e7c655a4-0c56-4d90-a0c4-8730a0baf3cb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:20:13 np0005466031 nova_compute[235803]: 2025-10-02 12:20:13.433 2 INFO nova.compute.manager [-] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] VM Stopped (Lifecycle Event)
Oct  2 08:20:13 np0005466031 nova_compute[235803]: 2025-10-02 12:20:13.480 2 DEBUG nova.compute.manager [None req-bb743359-a7fe-497d-b708-ae986d94efb8 - - - - - -] [instance: e7c655a4-0c56-4d90-a0c4-8730a0baf3cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:20:13 np0005466031 nova_compute[235803]: 2025-10-02 12:20:13.485 2 DEBUG oslo_concurrency.lockutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:20:13 np0005466031 nova_compute[235803]: 2025-10-02 12:20:13.485 2 DEBUG oslo_concurrency.lockutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:20:13 np0005466031 nova_compute[235803]: 2025-10-02 12:20:13.492 2 DEBUG nova.virt.hardware [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:20:13 np0005466031 nova_compute[235803]: 2025-10-02 12:20:13.493 2 INFO nova.compute.claims [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Claim successful on node compute-2.ctlplane.example.com
Oct  2 08:20:13 np0005466031 nova_compute[235803]: 2025-10-02 12:20:13.591 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:13.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:20:14 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2996451535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:20:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:20:14Z|00100|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.019 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.025 2 DEBUG nova.compute.provider_tree [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.043 2 DEBUG nova.scheduler.client.report [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.066 2 DEBUG oslo_concurrency.lockutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.067 2 DEBUG nova.compute.manager [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.116 2 DEBUG nova.compute.manager [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.132 2 INFO nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.153 2 DEBUG nova.compute.manager [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.237 2 DEBUG nova.compute.manager [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.238 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.238 2 INFO nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Creating image(s)
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.264 2 DEBUG nova.storage.rbd_utils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.297 2 DEBUG nova.storage.rbd_utils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.329 2 DEBUG nova.storage.rbd_utils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.335 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.392 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.393 2 DEBUG oslo_concurrency.lockutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.394 2 DEBUG oslo_concurrency.lockutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.394 2 DEBUG oslo_concurrency.lockutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.420 2 DEBUG nova.storage.rbd_utils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.424 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e170 e170: 3 total, 3 up, 3 in
Oct  2 08:20:14 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:14 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:14 np0005466031 nova_compute[235803]: 2025-10-02 12:20:14.918 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.012 2 DEBUG nova.storage.rbd_utils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] resizing rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.168 2 DEBUG nova.objects.instance [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lazy-loading 'migration_context' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.181 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.181 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Ensure instance console log exists: /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.181 2 DEBUG oslo_concurrency.lockutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.182 2 DEBUG oslo_concurrency.lockutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.182 2 DEBUG oslo_concurrency.lockutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.183 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.188 2 WARNING nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.192 2 DEBUG nova.virt.libvirt.host [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.192 2 DEBUG nova.virt.libvirt.host [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.195 2 DEBUG nova.virt.libvirt.host [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.196 2 DEBUG nova.virt.libvirt.host [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.197 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.197 2 DEBUG nova.virt.hardware [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.197 2 DEBUG nova.virt.hardware [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.198 2 DEBUG nova.virt.hardware [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.198 2 DEBUG nova.virt.hardware [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.198 2 DEBUG nova.virt.hardware [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.198 2 DEBUG nova.virt.hardware [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.198 2 DEBUG nova.virt.hardware [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.199 2 DEBUG nova.virt.hardware [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.199 2 DEBUG nova.virt.hardware [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.199 2 DEBUG nova.virt.hardware [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.199 2 DEBUG nova.virt.hardware [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.201 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:15.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:20:15 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/956283661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.622 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.647 2 DEBUG nova.storage.rbd_utils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:20:15 np0005466031 nova_compute[235803]: 2025-10-02 12:20:15.649 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:15.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:15 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:15 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:20:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2270279448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:20:16 np0005466031 nova_compute[235803]: 2025-10-02 12:20:16.066 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:16 np0005466031 nova_compute[235803]: 2025-10-02 12:20:16.069 2 DEBUG nova.objects.instance [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:16 np0005466031 nova_compute[235803]: 2025-10-02 12:20:16.087 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  <uuid>7beacac0-65ce-4e15-a73c-9b50a50f968e</uuid>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  <name>instance-0000002b</name>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1498353913</nova:name>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:20:15</nova:creationTime>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <nova:user uuid="93167a5206ba42b28aa96a676d3edb6d">tempest-UnshelveToHostMultiNodesTest-2076784560-project-member</nova:user>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <nova:project uuid="aaf2805394aa4c4cb7977f6433aabf56">tempest-UnshelveToHostMultiNodesTest-2076784560</nova:project>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <nova:ports/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <entry name="serial">7beacac0-65ce-4e15-a73c-9b50a50f968e</entry>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <entry name="uuid">7beacac0-65ce-4e15-a73c-9b50a50f968e</entry>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/console.log" append="off"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:20:16 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:20:16 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:20:16 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:20:16 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:20:16 np0005466031 nova_compute[235803]: 2025-10-02 12:20:16.180 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:20:16 np0005466031 nova_compute[235803]: 2025-10-02 12:20:16.180 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:20:16 np0005466031 nova_compute[235803]: 2025-10-02 12:20:16.181 2 INFO nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Using config drive#033[00m
Oct  2 08:20:16 np0005466031 nova_compute[235803]: 2025-10-02 12:20:16.214 2 DEBUG nova.storage.rbd_utils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:20:16 np0005466031 nova_compute[235803]: 2025-10-02 12:20:16.448 2 INFO nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Creating config drive at /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config#033[00m
Oct  2 08:20:16 np0005466031 nova_compute[235803]: 2025-10-02 12:20:16.453 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0nk9nx4i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:16 np0005466031 nova_compute[235803]: 2025-10-02 12:20:16.594 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0nk9nx4i" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:16 np0005466031 nova_compute[235803]: 2025-10-02 12:20:16.644 2 DEBUG nova.storage.rbd_utils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:20:16 np0005466031 nova_compute[235803]: 2025-10-02 12:20:16.650 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e171 e171: 3 total, 3 up, 3 in
Oct  2 08:20:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:20:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:20:17 np0005466031 nova_compute[235803]: 2025-10-02 12:20:17.100 2 DEBUG oslo_concurrency.processutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:17 np0005466031 nova_compute[235803]: 2025-10-02 12:20:17.102 2 INFO nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deleting local config drive /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config because it was imported into RBD.#033[00m
Oct  2 08:20:17 np0005466031 systemd-machined[192227]: New machine qemu-16-instance-0000002b.
Oct  2 08:20:17 np0005466031 systemd[1]: Started Virtual Machine qemu-16-instance-0000002b.
Oct  2 08:20:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:17.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e172 e172: 3 total, 3 up, 3 in
Oct  2 08:20:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:17.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:17 np0005466031 nova_compute[235803]: 2025-10-02 12:20:17.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.058 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407618.0578294, 7beacac0-65ce-4e15-a73c-9b50a50f968e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.058 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.061 2 DEBUG nova.compute.manager [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.061 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.065 2 INFO nova.virt.libvirt.driver [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance spawned successfully.#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.065 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.086 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.092 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.094 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.095 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.095 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.096 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.096 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.096 2 DEBUG nova.virt.libvirt.driver [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.141 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.142 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407618.0580695, 7beacac0-65ce-4e15-a73c-9b50a50f968e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.142 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] VM Started (Lifecycle Event)#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.182 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.185 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.195 2 INFO nova.compute.manager [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Took 3.96 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.195 2 DEBUG nova.compute.manager [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.221 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.252 2 INFO nova.compute.manager [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Took 4.80 seconds to build instance.#033[00m
Oct  2 08:20:18 np0005466031 nova_compute[235803]: 2025-10-02 12:20:18.275 2 DEBUG oslo_concurrency.lockutils [None req-82de6e35-b643-4ba9-84f3-77fa8cba5e55 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:19.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:19.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:20 np0005466031 nova_compute[235803]: 2025-10-02 12:20:20.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:20 np0005466031 nova_compute[235803]: 2025-10-02 12:20:20.461 2 DEBUG oslo_concurrency.lockutils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:20 np0005466031 nova_compute[235803]: 2025-10-02 12:20:20.462 2 DEBUG oslo_concurrency.lockutils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:20 np0005466031 nova_compute[235803]: 2025-10-02 12:20:20.463 2 INFO nova.compute.manager [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Shelving#033[00m
Oct  2 08:20:20 np0005466031 nova_compute[235803]: 2025-10-02 12:20:20.491 2 DEBUG nova.virt.libvirt.driver [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:20:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e173 e173: 3 total, 3 up, 3 in
Oct  2 08:20:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:21.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:21.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:22 np0005466031 nova_compute[235803]: 2025-10-02 12:20:22.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:23.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:23.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:24 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:24 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e174 e174: 3 total, 3 up, 3 in
Oct  2 08:20:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:25.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:25 np0005466031 nova_compute[235803]: 2025-10-02 12:20:25.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:20:25.826 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:20:25.827 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:20:25.827 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:25.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:27.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:27 np0005466031 podman[253728]: 2025-10-02 12:20:27.64037305 +0000 UTC m=+0.065521593 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Oct  2 08:20:27 np0005466031 podman[253729]: 2025-10-02 12:20:27.66734404 +0000 UTC m=+0.091709480 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:20:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:27.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:27 np0005466031 nova_compute[235803]: 2025-10-02 12:20:27.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:29.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:29.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:30 np0005466031 nova_compute[235803]: 2025-10-02 12:20:30.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e175 e175: 3 total, 3 up, 3 in
Oct  2 08:20:30 np0005466031 nova_compute[235803]: 2025-10-02 12:20:30.532 2 DEBUG nova.virt.libvirt.driver [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:20:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:31.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:20:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:31.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:20:32 np0005466031 nova_compute[235803]: 2025-10-02 12:20:32.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:33.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:33.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:34 np0005466031 nova_compute[235803]: 2025-10-02 12:20:34.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:34 np0005466031 nova_compute[235803]: 2025-10-02 12:20:34.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:35.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:35 np0005466031 nova_compute[235803]: 2025-10-02 12:20:35.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:35 np0005466031 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct  2 08:20:35 np0005466031 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000002b.scope: Consumed 13.031s CPU time.
Oct  2 08:20:35 np0005466031 systemd-machined[192227]: Machine qemu-16-instance-0000002b terminated.
Oct  2 08:20:35 np0005466031 podman[253826]: 2025-10-02 12:20:35.55753824 +0000 UTC m=+0.053824116 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:20:35 np0005466031 nova_compute[235803]: 2025-10-02 12:20:35.670 2 INFO nova.virt.libvirt.driver [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance shutdown successfully after 15 seconds.#033[00m
Oct  2 08:20:35 np0005466031 nova_compute[235803]: 2025-10-02 12:20:35.675 2 INFO nova.virt.libvirt.driver [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance destroyed successfully.#033[00m
Oct  2 08:20:35 np0005466031 nova_compute[235803]: 2025-10-02 12:20:35.675 2 DEBUG nova.objects.instance [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:35.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:36 np0005466031 nova_compute[235803]: 2025-10-02 12:20:36.122 2 INFO nova.virt.libvirt.driver [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Beginning cold snapshot process#033[00m
Oct  2 08:20:36 np0005466031 nova_compute[235803]: 2025-10-02 12:20:36.492 2 DEBUG nova.virt.libvirt.imagebackend [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:20:36 np0005466031 nova_compute[235803]: 2025-10-02 12:20:36.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:36 np0005466031 nova_compute[235803]: 2025-10-02 12:20:36.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:36 np0005466031 nova_compute[235803]: 2025-10-02 12:20:36.812 2 DEBUG nova.storage.rbd_utils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] creating snapshot(179a971dff76479dab2d19fc2d79dfe1) on rbd image(7beacac0-65ce-4e15-a73c-9b50a50f968e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:20:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:37.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:37 np0005466031 nova_compute[235803]: 2025-10-02 12:20:37.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:37 np0005466031 nova_compute[235803]: 2025-10-02 12:20:37.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:20:37 np0005466031 nova_compute[235803]: 2025-10-02 12:20:37.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:20:37 np0005466031 nova_compute[235803]: 2025-10-02 12:20:37.661 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:20:37 np0005466031 nova_compute[235803]: 2025-10-02 12:20:37.662 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:20:37 np0005466031 nova_compute[235803]: 2025-10-02 12:20:37.662 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:20:37 np0005466031 nova_compute[235803]: 2025-10-02 12:20:37.662 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:37.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:37 np0005466031 nova_compute[235803]: 2025-10-02 12:20:37.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e176 e176: 3 total, 3 up, 3 in
Oct  2 08:20:38 np0005466031 nova_compute[235803]: 2025-10-02 12:20:38.054 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:20:38 np0005466031 nova_compute[235803]: 2025-10-02 12:20:38.746 2 DEBUG nova.storage.rbd_utils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] cloning vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk@179a971dff76479dab2d19fc2d79dfe1 to images/f573b04f-b850-41b8-8b32-8ba00d6690cd clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:20:38 np0005466031 nova_compute[235803]: 2025-10-02 12:20:38.818 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:20:39 np0005466031 nova_compute[235803]: 2025-10-02 12:20:39.227 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:20:39 np0005466031 nova_compute[235803]: 2025-10-02 12:20:39.227 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:20:39 np0005466031 nova_compute[235803]: 2025-10-02 12:20:39.228 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:39 np0005466031 nova_compute[235803]: 2025-10-02 12:20:39.229 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:20:39 np0005466031 nova_compute[235803]: 2025-10-02 12:20:39.282 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:20:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:39.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:39 np0005466031 nova_compute[235803]: 2025-10-02 12:20:39.690 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:39 np0005466031 nova_compute[235803]: 2025-10-02 12:20:39.691 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:39 np0005466031 nova_compute[235803]: 2025-10-02 12:20:39.846 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:39 np0005466031 nova_compute[235803]: 2025-10-02 12:20:39.847 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:39 np0005466031 nova_compute[235803]: 2025-10-02 12:20:39.847 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:39 np0005466031 nova_compute[235803]: 2025-10-02 12:20:39.847 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:20:39 np0005466031 nova_compute[235803]: 2025-10-02 12:20:39.848 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:39.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:20:40 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1887875243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:20:40 np0005466031 nova_compute[235803]: 2025-10-02 12:20:40.267 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:40 np0005466031 nova_compute[235803]: 2025-10-02 12:20:40.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:40 np0005466031 nova_compute[235803]: 2025-10-02 12:20:40.346 2 DEBUG nova.storage.rbd_utils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] flattening images/f573b04f-b850-41b8-8b32-8ba00d6690cd flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:20:40 np0005466031 podman[253962]: 2025-10-02 12:20:40.372789168 +0000 UTC m=+0.053132386 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct  2 08:20:40 np0005466031 nova_compute[235803]: 2025-10-02 12:20:40.625 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:20:40 np0005466031 nova_compute[235803]: 2025-10-02 12:20:40.626 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:20:40 np0005466031 nova_compute[235803]: 2025-10-02 12:20:40.813 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:20:40 np0005466031 nova_compute[235803]: 2025-10-02 12:20:40.814 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4827MB free_disk=20.876529693603516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:20:40 np0005466031 nova_compute[235803]: 2025-10-02 12:20:40.814 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:40 np0005466031 nova_compute[235803]: 2025-10-02 12:20:40.815 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:41 np0005466031 nova_compute[235803]: 2025-10-02 12:20:41.069 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 7beacac0-65ce-4e15-a73c-9b50a50f968e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:20:41 np0005466031 nova_compute[235803]: 2025-10-02 12:20:41.070 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:20:41 np0005466031 nova_compute[235803]: 2025-10-02 12:20:41.070 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:20:41 np0005466031 nova_compute[235803]: 2025-10-02 12:20:41.207 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:41.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:20:41 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3085986660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:20:41 np0005466031 nova_compute[235803]: 2025-10-02 12:20:41.692 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:41 np0005466031 nova_compute[235803]: 2025-10-02 12:20:41.697 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:20:41 np0005466031 nova_compute[235803]: 2025-10-02 12:20:41.742 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:20:41 np0005466031 nova_compute[235803]: 2025-10-02 12:20:41.792 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:20:41 np0005466031 nova_compute[235803]: 2025-10-02 12:20:41.793 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:41.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:42 np0005466031 nova_compute[235803]: 2025-10-02 12:20:42.007 2 DEBUG nova.storage.rbd_utils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] removing snapshot(179a971dff76479dab2d19fc2d79dfe1) on rbd image(7beacac0-65ce-4e15-a73c-9b50a50f968e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:20:42 np0005466031 nova_compute[235803]: 2025-10-02 12:20:42.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:42 np0005466031 nova_compute[235803]: 2025-10-02 12:20:42.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:20:42 np0005466031 nova_compute[235803]: 2025-10-02 12:20:42.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:42 np0005466031 nova_compute[235803]: 2025-10-02 12:20:42.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e177 e177: 3 total, 3 up, 3 in
Oct  2 08:20:43 np0005466031 nova_compute[235803]: 2025-10-02 12:20:43.063 2 DEBUG nova.storage.rbd_utils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] creating snapshot(snap) on rbd image(f573b04f-b850-41b8-8b32-8ba00d6690cd) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:20:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:43.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:43 np0005466031 nova_compute[235803]: 2025-10-02 12:20:43.671 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:20:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:43.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e178 e178: 3 total, 3 up, 3 in
Oct  2 08:20:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:45.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:45 np0005466031 nova_compute[235803]: 2025-10-02 12:20:45.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:45.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:46 np0005466031 nova_compute[235803]: 2025-10-02 12:20:46.006 2 INFO nova.virt.libvirt.driver [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Snapshot image upload complete
Oct  2 08:20:46 np0005466031 nova_compute[235803]: 2025-10-02 12:20:46.006 2 DEBUG nova.compute.manager [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.156660) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646156730, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2678, "num_deletes": 504, "total_data_size": 5543065, "memory_usage": 5620768, "flush_reason": "Manual Compaction"}
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Oct  2 08:20:46 np0005466031 nova_compute[235803]: 2025-10-02 12:20:46.168 2 INFO nova.compute.manager [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Shelve offloading
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646171070, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3266205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28292, "largest_seqno": 30965, "table_properties": {"data_size": 3256394, "index_size": 5601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 25651, "raw_average_key_size": 20, "raw_value_size": 3233993, "raw_average_value_size": 2583, "num_data_blocks": 243, "num_entries": 1252, "num_filter_entries": 1252, "num_deletions": 504, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407459, "oldest_key_time": 1759407459, "file_creation_time": 1759407646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 14475 microseconds, and 6940 cpu microseconds.
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.171140) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3266205 bytes OK
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.171171) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.173255) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.173291) EVENT_LOG_v1 {"time_micros": 1759407646173282, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.173314) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5530283, prev total WAL file size 5530283, number of live WAL files 2.
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:20:46 np0005466031 nova_compute[235803]: 2025-10-02 12:20:46.174 2 INFO nova.virt.libvirt.driver [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance destroyed successfully.
Oct  2 08:20:46 np0005466031 nova_compute[235803]: 2025-10-02 12:20:46.175 2 DEBUG nova.compute.manager [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.175304) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3189KB)], [57(10MB)]
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646175350, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 14072314, "oldest_snapshot_seqno": -1}
Oct  2 08:20:46 np0005466031 nova_compute[235803]: 2025-10-02 12:20:46.177 2 DEBUG oslo_concurrency.lockutils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:20:46 np0005466031 nova_compute[235803]: 2025-10-02 12:20:46.177 2 DEBUG oslo_concurrency.lockutils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquired lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:20:46 np0005466031 nova_compute[235803]: 2025-10-02 12:20:46.177 2 DEBUG nova.network.neutron [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5408 keys, 8629895 bytes, temperature: kUnknown
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646214698, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 8629895, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8593963, "index_size": 21294, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138016, "raw_average_key_size": 25, "raw_value_size": 8496812, "raw_average_value_size": 1571, "num_data_blocks": 857, "num_entries": 5408, "num_filter_entries": 5408, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759407646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.215050) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8629895 bytes
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.217308) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 356.8 rd, 218.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.3 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(7.0) write-amplify(2.6) OK, records in: 6416, records dropped: 1008 output_compression: NoCompression
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.217339) EVENT_LOG_v1 {"time_micros": 1759407646217323, "job": 34, "event": "compaction_finished", "compaction_time_micros": 39436, "compaction_time_cpu_micros": 21055, "output_level": 6, "num_output_files": 1, "total_output_size": 8629895, "num_input_records": 6416, "num_output_records": 5408, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646218583, "job": 34, "event": "table_file_deletion", "file_number": 59}
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646221201, "job": 34, "event": "table_file_deletion", "file_number": 57}
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.174642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.221308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.221315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.221317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.221318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:20:46 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:20:46.221320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:20:46 np0005466031 nova_compute[235803]: 2025-10-02 12:20:46.371 2 DEBUG nova.network.neutron [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:20:46 np0005466031 nova_compute[235803]: 2025-10-02 12:20:46.682 2 DEBUG nova.network.neutron [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:20:46 np0005466031 nova_compute[235803]: 2025-10-02 12:20:46.715 2 DEBUG oslo_concurrency.lockutils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Releasing lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:20:46 np0005466031 nova_compute[235803]: 2025-10-02 12:20:46.721 2 INFO nova.virt.libvirt.driver [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance destroyed successfully.
Oct  2 08:20:46 np0005466031 nova_compute[235803]: 2025-10-02 12:20:46.721 2 DEBUG nova.objects.instance [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lazy-loading 'resources' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:20:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:47.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:47 np0005466031 nova_compute[235803]: 2025-10-02 12:20:47.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:47.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:49 np0005466031 nova_compute[235803]: 2025-10-02 12:20:49.254 2 INFO nova.virt.libvirt.driver [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deleting instance files /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e_del
Oct  2 08:20:49 np0005466031 nova_compute[235803]: 2025-10-02 12:20:49.255 2 INFO nova.virt.libvirt.driver [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deletion of /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e_del complete
Oct  2 08:20:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:49.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:49 np0005466031 nova_compute[235803]: 2025-10-02 12:20:49.429 2 INFO nova.scheduler.client.report [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Deleted allocations for instance 7beacac0-65ce-4e15-a73c-9b50a50f968e
Oct  2 08:20:49 np0005466031 nova_compute[235803]: 2025-10-02 12:20:49.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:20:49 np0005466031 nova_compute[235803]: 2025-10-02 12:20:49.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  2 08:20:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:49 np0005466031 nova_compute[235803]: 2025-10-02 12:20:49.836 2 DEBUG oslo_concurrency.lockutils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:20:49 np0005466031 nova_compute[235803]: 2025-10-02 12:20:49.837 2 DEBUG oslo_concurrency.lockutils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:20:49 np0005466031 nova_compute[235803]: 2025-10-02 12:20:49.881 2 DEBUG oslo_concurrency.processutils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:49.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e179 e179: 3 total, 3 up, 3 in
Oct  2 08:20:50 np0005466031 nova_compute[235803]: 2025-10-02 12:20:50.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:20:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3095567087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:20:50 np0005466031 nova_compute[235803]: 2025-10-02 12:20:50.375 2 DEBUG oslo_concurrency.processutils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:50 np0005466031 nova_compute[235803]: 2025-10-02 12:20:50.383 2 DEBUG nova.compute.provider_tree [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:20:50 np0005466031 nova_compute[235803]: 2025-10-02 12:20:50.448 2 DEBUG nova.scheduler.client.report [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:20:50 np0005466031 nova_compute[235803]: 2025-10-02 12:20:50.570 2 DEBUG oslo_concurrency.lockutils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:20:50 np0005466031 nova_compute[235803]: 2025-10-02 12:20:50.671 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407635.670038, 7beacac0-65ce-4e15-a73c-9b50a50f968e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:20:50 np0005466031 nova_compute[235803]: 2025-10-02 12:20:50.672 2 INFO nova.compute.manager [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] VM Stopped (Lifecycle Event)
Oct  2 08:20:51 np0005466031 nova_compute[235803]: 2025-10-02 12:20:51.009 2 DEBUG nova.compute.manager [None req-759127f7-3a9b-4c45-b6b9-dc3cb0b0a7a3 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:20:51 np0005466031 nova_compute[235803]: 2025-10-02 12:20:51.206 2 DEBUG oslo_concurrency.lockutils [None req-3dfa30b8-4f1c-413f-9b98-1d3445161571 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 30.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:20:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:51.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:51.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:52 np0005466031 nova_compute[235803]: 2025-10-02 12:20:52.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:53.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:53.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:55.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:55 np0005466031 nova_compute[235803]: 2025-10-02 12:20:55.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:55 np0005466031 nova_compute[235803]: 2025-10-02 12:20:55.908 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:20:55 np0005466031 nova_compute[235803]: 2025-10-02 12:20:55.909 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:20:55 np0005466031 nova_compute[235803]: 2025-10-02 12:20:55.910 2 INFO nova.compute.manager [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Unshelving
Oct  2 08:20:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:55.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:56 np0005466031 nova_compute[235803]: 2025-10-02 12:20:56.257 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:20:56 np0005466031 nova_compute[235803]: 2025-10-02 12:20:56.258 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:20:56 np0005466031 nova_compute[235803]: 2025-10-02 12:20:56.262 2 DEBUG nova.objects.instance [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'pci_requests' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:20:56 np0005466031 nova_compute[235803]: 2025-10-02 12:20:56.306 2 DEBUG nova.objects.instance [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'numa_topology' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:20:56 np0005466031 nova_compute[235803]: 2025-10-02 12:20:56.517 2 DEBUG nova.virt.hardware [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:20:56 np0005466031 nova_compute[235803]: 2025-10-02 12:20:56.518 2 INFO nova.compute.claims [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Claim successful on node compute-2.ctlplane.example.com
Oct  2 08:20:56 np0005466031 nova_compute[235803]: 2025-10-02 12:20:56.826 2 DEBUG oslo_concurrency.processutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:20:57 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2057428937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:20:57 np0005466031 nova_compute[235803]: 2025-10-02 12:20:57.246 2 DEBUG oslo_concurrency.processutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:57 np0005466031 nova_compute[235803]: 2025-10-02 12:20:57.252 2 DEBUG nova.compute.provider_tree [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:20:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:57.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:57 np0005466031 nova_compute[235803]: 2025-10-02 12:20:57.318 2 DEBUG nova.scheduler.client.report [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:20:57 np0005466031 nova_compute[235803]: 2025-10-02 12:20:57.415 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:20:57 np0005466031 nova_compute[235803]: 2025-10-02 12:20:57.844 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:20:57 np0005466031 nova_compute[235803]: 2025-10-02 12:20:57.845 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquired lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:20:57 np0005466031 nova_compute[235803]: 2025-10-02 12:20:57.846 2 DEBUG nova.network.neutron [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:20:57 np0005466031 nova_compute[235803]: 2025-10-02 12:20:57.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:57.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:58 np0005466031 nova_compute[235803]: 2025-10-02 12:20:58.054 2 DEBUG nova.network.neutron [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:20:58 np0005466031 podman[254181]: 2025-10-02 12:20:58.623706677 +0000 UTC m=+0.054812585 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:20:58 np0005466031 podman[254182]: 2025-10-02 12:20:58.664300049 +0000 UTC m=+0.088790186 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:20:58 np0005466031 nova_compute[235803]: 2025-10-02 12:20:58.826 2 DEBUG nova.network.neutron [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:20:58 np0005466031 nova_compute[235803]: 2025-10-02 12:20:58.889 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Releasing lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:20:58 np0005466031 nova_compute[235803]: 2025-10-02 12:20:58.892 2 DEBUG nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:20:58 np0005466031 nova_compute[235803]: 2025-10-02 12:20:58.892 2 INFO nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Creating image(s)
Oct  2 08:20:58 np0005466031 nova_compute[235803]: 2025-10-02 12:20:58.928 2 DEBUG nova.storage.rbd_utils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:58 np0005466031 nova_compute[235803]: 2025-10-02 12:20:58.932 2 DEBUG nova.objects.instance [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:20:59 np0005466031 nova_compute[235803]: 2025-10-02 12:20:59.034 2 DEBUG nova.storage.rbd_utils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:59 np0005466031 nova_compute[235803]: 2025-10-02 12:20:59.065 2 DEBUG nova.storage.rbd_utils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:59 np0005466031 nova_compute[235803]: 2025-10-02 12:20:59.070 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "6369318509ff11c8d8e1aaf0041e96183f2af220" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:20:59 np0005466031 nova_compute[235803]: 2025-10-02 12:20:59.071 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "6369318509ff11c8d8e1aaf0041e96183f2af220" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:20:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:20:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:59.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:20:59 np0005466031 nova_compute[235803]: 2025-10-02 12:20:59.362 2 DEBUG nova.virt.libvirt.imagebackend [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/f573b04f-b850-41b8-8b32-8ba00d6690cd/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/f573b04f-b850-41b8-8b32-8ba00d6690cd/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct  2 08:20:59 np0005466031 nova_compute[235803]: 2025-10-02 12:20:59.417 2 DEBUG nova.virt.libvirt.imagebackend [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/f573b04f-b850-41b8-8b32-8ba00d6690cd/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct  2 08:20:59 np0005466031 nova_compute[235803]: 2025-10-02 12:20:59.417 2 DEBUG nova.storage.rbd_utils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] cloning images/f573b04f-b850-41b8-8b32-8ba00d6690cd@snap to None/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct  2 08:20:59 np0005466031 nova_compute[235803]: 2025-10-02 12:20:59.520 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "6369318509ff11c8d8e1aaf0041e96183f2af220" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:20:59 np0005466031 nova_compute[235803]: 2025-10-02 12:20:59.636 2 DEBUG nova.objects.instance [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'migration_context' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:20:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:59 np0005466031 nova_compute[235803]: 2025-10-02 12:20:59.712 2 DEBUG nova.storage.rbd_utils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] flattening vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct  2 08:20:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:20:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:59.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.238 2 DEBUG nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Image rbd:vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.239 2 DEBUG nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.239 2 DEBUG nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Ensure instance console log exists: /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.239 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.240 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.240 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.241 2 DEBUG nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:20:20Z,direct_url=<?>,disk_format='raw',id=f573b04f-b850-41b8-8b32-8ba00d6690cd,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1498353913-shelved',owner='aaf2805394aa4c4cb7977f6433aabf56',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:20:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.244 2 WARNING nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.249 2 DEBUG nova.virt.libvirt.host [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.250 2 DEBUG nova.virt.libvirt.host [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.253 2 DEBUG nova.virt.libvirt.host [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.254 2 DEBUG nova.virt.libvirt.host [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.255 2 DEBUG nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.255 2 DEBUG nova.virt.hardware [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:20:20Z,direct_url=<?>,disk_format='raw',id=f573b04f-b850-41b8-8b32-8ba00d6690cd,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1498353913-shelved',owner='aaf2805394aa4c4cb7977f6433aabf56',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:20:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.255 2 DEBUG nova.virt.hardware [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.255 2 DEBUG nova.virt.hardware [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.255 2 DEBUG nova.virt.hardware [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.256 2 DEBUG nova.virt.hardware [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.256 2 DEBUG nova.virt.hardware [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.256 2 DEBUG nova.virt.hardware [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.256 2 DEBUG nova.virt.hardware [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.256 2 DEBUG nova.virt.hardware [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.256 2 DEBUG nova.virt.hardware [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.257 2 DEBUG nova.virt.hardware [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.257 2 DEBUG nova.objects.instance [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:21:00 np0005466031 nova_compute[235803]: 2025-10-02 12:21:00.606 2 DEBUG oslo_concurrency.processutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:21:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/287197055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:21:01 np0005466031 nova_compute[235803]: 2025-10-02 12:21:01.080 2 DEBUG oslo_concurrency.processutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:01 np0005466031 nova_compute[235803]: 2025-10-02 12:21:01.109 2 DEBUG nova.storage.rbd_utils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:01 np0005466031 nova_compute[235803]: 2025-10-02 12:21:01.113 2 DEBUG oslo_concurrency.processutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:01.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:21:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1740980608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:21:01 np0005466031 nova_compute[235803]: 2025-10-02 12:21:01.581 2 DEBUG oslo_concurrency.processutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:01 np0005466031 nova_compute[235803]: 2025-10-02 12:21:01.585 2 DEBUG nova.objects.instance [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'pci_devices' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:21:01 np0005466031 nova_compute[235803]: 2025-10-02 12:21:01.636 2 DEBUG nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  <uuid>7beacac0-65ce-4e15-a73c-9b50a50f968e</uuid>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  <name>instance-0000002b</name>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1498353913</nova:name>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:21:00</nova:creationTime>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <nova:user uuid="93167a5206ba42b28aa96a676d3edb6d">tempest-UnshelveToHostMultiNodesTest-2076784560-project-member</nova:user>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <nova:project uuid="aaf2805394aa4c4cb7977f6433aabf56">tempest-UnshelveToHostMultiNodesTest-2076784560</nova:project>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="f573b04f-b850-41b8-8b32-8ba00d6690cd"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <nova:ports/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <entry name="serial">7beacac0-65ce-4e15-a73c-9b50a50f968e</entry>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <entry name="uuid">7beacac0-65ce-4e15-a73c-9b50a50f968e</entry>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/console.log" append="off"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <input type="keyboard" bus="usb"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:21:01 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:21:01 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:21:01 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:21:01 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 08:21:01 np0005466031 nova_compute[235803]: 2025-10-02 12:21:01.732 2 DEBUG nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:21:01 np0005466031 nova_compute[235803]: 2025-10-02 12:21:01.732 2 DEBUG nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:21:01 np0005466031 nova_compute[235803]: 2025-10-02 12:21:01.733 2 INFO nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Using config drive
Oct  2 08:21:01 np0005466031 nova_compute[235803]: 2025-10-02 12:21:01.759 2 DEBUG nova.storage.rbd_utils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:01 np0005466031 nova_compute[235803]: 2025-10-02 12:21:01.889 2 DEBUG nova.objects.instance [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:21:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:01.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:02 np0005466031 nova_compute[235803]: 2025-10-02 12:21:02.262 2 DEBUG nova.objects.instance [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'keypairs' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:21:02 np0005466031 nova_compute[235803]: 2025-10-02 12:21:02.731 2 INFO nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Creating config drive at /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config
Oct  2 08:21:02 np0005466031 nova_compute[235803]: 2025-10-02 12:21:02.741 2 DEBUG oslo_concurrency.processutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzyr5kxjb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:02 np0005466031 nova_compute[235803]: 2025-10-02 12:21:02.875 2 DEBUG oslo_concurrency.processutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzyr5kxjb" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:02 np0005466031 nova_compute[235803]: 2025-10-02 12:21:02.910 2 DEBUG nova.storage.rbd_utils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:02 np0005466031 nova_compute[235803]: 2025-10-02 12:21:02.914 2 DEBUG oslo_concurrency.processutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:02 np0005466031 nova_compute[235803]: 2025-10-02 12:21:02.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:21:03 np0005466031 nova_compute[235803]: 2025-10-02 12:21:03.126 2 DEBUG oslo_concurrency.processutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:03 np0005466031 nova_compute[235803]: 2025-10-02 12:21:03.127 2 INFO nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deleting local config drive /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config because it was imported into RBD.
Oct  2 08:21:03 np0005466031 systemd-machined[192227]: New machine qemu-17-instance-0000002b.
Oct  2 08:21:03 np0005466031 systemd[1]: Started Virtual Machine qemu-17-instance-0000002b.
Oct  2 08:21:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:03.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:03.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:04.325 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:21:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:04.330 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.399 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407664.399292, 7beacac0-65ce-4e15-a73c-9b50a50f968e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.400 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] VM Resumed (Lifecycle Event)
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.403 2 DEBUG nova.compute.manager [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.404 2 DEBUG nova.virt.libvirt.driver [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.409 2 INFO nova.virt.libvirt.driver [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance spawned successfully.
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.434 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.440 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.481 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.482 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407664.4006004, 7beacac0-65ce-4e15-a73c-9b50a50f968e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.482 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] VM Started (Lifecycle Event)
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.521 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.526 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:21:04 np0005466031 nova_compute[235803]: 2025-10-02 12:21:04.586 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:21:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:21:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/251241660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:21:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:21:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/251241660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:21:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e180 e180: 3 total, 3 up, 3 in
Oct  2 08:21:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:05.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:05 np0005466031 nova_compute[235803]: 2025-10-02 12:21:05.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:21:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:05.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:06.333 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:21:06 np0005466031 podman[254617]: 2025-10-02 12:21:06.645256821 +0000 UTC m=+0.064370860 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 08:21:06 np0005466031 nova_compute[235803]: 2025-10-02 12:21:06.964 2 DEBUG nova.compute.manager [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:21:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:07.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:07 np0005466031 nova_compute[235803]: 2025-10-02 12:21:07.390 2 DEBUG oslo_concurrency.lockutils [None req-971a5bf9-08ae-4b79-9909-b8c317bb9645 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:07 np0005466031 nova_compute[235803]: 2025-10-02 12:21:07.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:07.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:09.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:09.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:10 np0005466031 nova_compute[235803]: 2025-10-02 12:21:10.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:10 np0005466031 podman[254663]: 2025-10-02 12:21:10.556211776 +0000 UTC m=+0.084143842 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:21:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e181 e181: 3 total, 3 up, 3 in
Oct  2 08:21:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:11.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:11.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:12 np0005466031 nova_compute[235803]: 2025-10-02 12:21:12.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:13 np0005466031 nova_compute[235803]: 2025-10-02 12:21:13.012 2 DEBUG oslo_concurrency.lockutils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:13 np0005466031 nova_compute[235803]: 2025-10-02 12:21:13.012 2 DEBUG oslo_concurrency.lockutils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:13 np0005466031 nova_compute[235803]: 2025-10-02 12:21:13.013 2 INFO nova.compute.manager [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Shelving#033[00m
Oct  2 08:21:13 np0005466031 nova_compute[235803]: 2025-10-02 12:21:13.206 2 DEBUG nova.virt.libvirt.driver [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:21:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:13.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e182 e182: 3 total, 3 up, 3 in
Oct  2 08:21:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:21:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:13.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:21:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e183 e183: 3 total, 3 up, 3 in
Oct  2 08:21:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:15.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:15 np0005466031 nova_compute[235803]: 2025-10-02 12:21:15.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e184 e184: 3 total, 3 up, 3 in
Oct  2 08:21:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:21:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:15.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:21:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:17.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:17 np0005466031 nova_compute[235803]: 2025-10-02 12:21:17.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:17.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:19.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:19.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:20 np0005466031 nova_compute[235803]: 2025-10-02 12:21:20.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e185 e185: 3 total, 3 up, 3 in
Oct  2 08:21:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:21.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:21.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:22 np0005466031 nova_compute[235803]: 2025-10-02 12:21:22.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:23 np0005466031 nova_compute[235803]: 2025-10-02 12:21:23.254 2 DEBUG nova.virt.libvirt.driver [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:21:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:21:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:23.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:21:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:23.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:25 np0005466031 nova_compute[235803]: 2025-10-02 12:21:25.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:25.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:25 np0005466031 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct  2 08:21:25 np0005466031 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000002b.scope: Consumed 13.904s CPU time.
Oct  2 08:21:25 np0005466031 systemd-machined[192227]: Machine qemu-17-instance-0000002b terminated.
Oct  2 08:21:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:25.828 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:25.828 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:25.828 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:25.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:26 np0005466031 nova_compute[235803]: 2025-10-02 12:21:26.269 2 INFO nova.virt.libvirt.driver [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 08:21:26 np0005466031 nova_compute[235803]: 2025-10-02 12:21:26.273 2 INFO nova.virt.libvirt.driver [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance destroyed successfully.#033[00m
Oct  2 08:21:26 np0005466031 nova_compute[235803]: 2025-10-02 12:21:26.274 2 DEBUG nova.objects.instance [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:21:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:21:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:21:26 np0005466031 nova_compute[235803]: 2025-10-02 12:21:26.554 2 INFO nova.virt.libvirt.driver [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Beginning cold snapshot process#033[00m
Oct  2 08:21:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:27.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e186 e186: 3 total, 3 up, 3 in
Oct  2 08:21:27 np0005466031 nova_compute[235803]: 2025-10-02 12:21:27.680 2 DEBUG nova.virt.libvirt.imagebackend [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:21:27 np0005466031 nova_compute[235803]: 2025-10-02 12:21:27.854 2 DEBUG nova.storage.rbd_utils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] creating snapshot(536849b54358429fb56f6c6331531656) on rbd image(7beacac0-65ce-4e15-a73c-9b50a50f968e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:21:27 np0005466031 nova_compute[235803]: 2025-10-02 12:21:27.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:27.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e187 e187: 3 total, 3 up, 3 in
Oct  2 08:21:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:29.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:29 np0005466031 podman[254905]: 2025-10-02 12:21:29.652588509 +0000 UTC m=+0.075997076 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:21:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:29 np0005466031 podman[254906]: 2025-10-02 12:21:29.753169114 +0000 UTC m=+0.165700117 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible)
Oct  2 08:21:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:29.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:30 np0005466031 nova_compute[235803]: 2025-10-02 12:21:30.314 2 DEBUG nova.storage.rbd_utils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] cloning vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk@536849b54358429fb56f6c6331531656 to images/45a729a3-bfb9-4ba4-a275-e4201ada93ed clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:21:30 np0005466031 nova_compute[235803]: 2025-10-02 12:21:30.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:30 np0005466031 nova_compute[235803]: 2025-10-02 12:21:30.435 2 DEBUG nova.storage.rbd_utils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] flattening images/45a729a3-bfb9-4ba4-a275-e4201ada93ed flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:21:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:31.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:31.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:32 np0005466031 nova_compute[235803]: 2025-10-02 12:21:32.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:33.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:33 np0005466031 nova_compute[235803]: 2025-10-02 12:21:33.643 2 DEBUG nova.storage.rbd_utils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] removing snapshot(536849b54358429fb56f6c6331531656) on rbd image(7beacac0-65ce-4e15-a73c-9b50a50f968e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:21:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:33.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e188 e188: 3 total, 3 up, 3 in
Oct  2 08:21:34 np0005466031 nova_compute[235803]: 2025-10-02 12:21:34.641 2 DEBUG nova.storage.rbd_utils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] creating snapshot(snap) on rbd image(45a729a3-bfb9-4ba4-a275-e4201ada93ed) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:21:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:34 np0005466031 nova_compute[235803]: 2025-10-02 12:21:34.869 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:35 np0005466031 nova_compute[235803]: 2025-10-02 12:21:35.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:35.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:35 np0005466031 nova_compute[235803]: 2025-10-02 12:21:35.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e189 e189: 3 total, 3 up, 3 in
Oct  2 08:21:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:35.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:37.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:37 np0005466031 nova_compute[235803]: 2025-10-02 12:21:37.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:37 np0005466031 nova_compute[235803]: 2025-10-02 12:21:37.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:21:37 np0005466031 nova_compute[235803]: 2025-10-02 12:21:37.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:21:37 np0005466031 podman[255094]: 2025-10-02 12:21:37.641727986 +0000 UTC m=+0.069177949 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:21:37 np0005466031 nova_compute[235803]: 2025-10-02 12:21:37.665 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:21:37 np0005466031 nova_compute[235803]: 2025-10-02 12:21:37.665 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:21:37 np0005466031 nova_compute[235803]: 2025-10-02 12:21:37.665 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:21:37 np0005466031 nova_compute[235803]: 2025-10-02 12:21:37.666 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:37 np0005466031 nova_compute[235803]: 2025-10-02 12:21:37.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:37.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:38 np0005466031 nova_compute[235803]: 2025-10-02 12:21:38.352 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:21:38 np0005466031 nova_compute[235803]: 2025-10-02 12:21:38.994 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:21:39 np0005466031 nova_compute[235803]: 2025-10-02 12:21:39.037 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:21:39 np0005466031 nova_compute[235803]: 2025-10-02 12:21:39.037 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:21:39 np0005466031 nova_compute[235803]: 2025-10-02 12:21:39.037 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:39 np0005466031 nova_compute[235803]: 2025-10-02 12:21:39.038 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:39 np0005466031 nova_compute[235803]: 2025-10-02 12:21:39.227 2 INFO nova.virt.libvirt.driver [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Snapshot image upload complete#033[00m
Oct  2 08:21:39 np0005466031 nova_compute[235803]: 2025-10-02 12:21:39.227 2 DEBUG nova.compute.manager [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:39.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:39 np0005466031 nova_compute[235803]: 2025-10-02 12:21:39.712 2 INFO nova.compute.manager [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Shelve offloading#033[00m
Oct  2 08:21:39 np0005466031 nova_compute[235803]: 2025-10-02 12:21:39.719 2 INFO nova.virt.libvirt.driver [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance destroyed successfully.#033[00m
Oct  2 08:21:39 np0005466031 nova_compute[235803]: 2025-10-02 12:21:39.719 2 DEBUG nova.compute.manager [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:39 np0005466031 nova_compute[235803]: 2025-10-02 12:21:39.721 2 DEBUG oslo_concurrency.lockutils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:21:39 np0005466031 nova_compute[235803]: 2025-10-02 12:21:39.721 2 DEBUG oslo_concurrency.lockutils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquired lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:21:39 np0005466031 nova_compute[235803]: 2025-10-02 12:21:39.721 2 DEBUG nova.network.neutron [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:21:39 np0005466031 nova_compute[235803]: 2025-10-02 12:21:39.931 2 DEBUG nova.network.neutron [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:21:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:39.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.020 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.020 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.160 2 DEBUG nova.compute.manager [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.399 2 DEBUG nova.network.neutron [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.422 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.422 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.430 2 DEBUG nova.virt.hardware [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.430 2 INFO nova.compute.claims [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.433 2 DEBUG oslo_concurrency.lockutils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Releasing lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.439 2 INFO nova.virt.libvirt.driver [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance destroyed successfully.#033[00m
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.440 2 DEBUG nova.objects.instance [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lazy-loading 'resources' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:21:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:40 np0005466031 podman[255160]: 2025-10-02 12:21:40.82596191 +0000 UTC m=+0.089502236 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.841 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407685.8401892, 7beacac0-65ce-4e15-a73c-9b50a50f968e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:21:40 np0005466031 nova_compute[235803]: 2025-10-02 12:21:40.842 2 INFO nova.compute.manager [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:21:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:41.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:41 np0005466031 nova_compute[235803]: 2025-10-02 12:21:41.874 2 INFO nova.virt.libvirt.driver [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deleting instance files /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e_del#033[00m
Oct  2 08:21:41 np0005466031 nova_compute[235803]: 2025-10-02 12:21:41.875 2 INFO nova.virt.libvirt.driver [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deletion of /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e_del complete#033[00m
Oct  2 08:21:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:42.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:42 np0005466031 nova_compute[235803]: 2025-10-02 12:21:42.555 2 DEBUG nova.compute.manager [None req-dfa9d58e-dbac-49d2-85b7-3ec7cb28038a - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:42 np0005466031 nova_compute[235803]: 2025-10-02 12:21:42.559 2 DEBUG nova.compute.manager [None req-dfa9d58e-dbac-49d2-85b7-3ec7cb28038a - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:21:42 np0005466031 nova_compute[235803]: 2025-10-02 12:21:42.564 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:42 np0005466031 nova_compute[235803]: 2025-10-02 12:21:42.641 2 INFO nova.compute.manager [None req-dfa9d58e-dbac-49d2-85b7-3ec7cb28038a - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Oct  2 08:21:42 np0005466031 nova_compute[235803]: 2025-10-02 12:21:42.662 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:42 np0005466031 nova_compute[235803]: 2025-10-02 12:21:42.752 2 INFO nova.scheduler.client.report [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Deleted allocations for instance 7beacac0-65ce-4e15-a73c-9b50a50f968e#033[00m
Oct  2 08:21:42 np0005466031 nova_compute[235803]: 2025-10-02 12:21:42.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:42 np0005466031 nova_compute[235803]: 2025-10-02 12:21:42.945 2 DEBUG oslo_concurrency.lockutils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:21:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/197691806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.153 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.159 2 DEBUG nova.compute.provider_tree [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.225 2 DEBUG nova.scheduler.client.report [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.274 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.275 2 DEBUG nova.compute.manager [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.277 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.277 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.277 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.278 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.309 2 DEBUG oslo_concurrency.lockutils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:43.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.471 2 DEBUG oslo_concurrency.processutils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.677 2 DEBUG nova.compute.manager [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.678 2 DEBUG nova.network.neutron [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:21:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:21:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/106516087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.709 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.741 2 INFO nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.784 2 DEBUG nova.compute.manager [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.883 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.884 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4694MB free_disk=20.921981811523438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.884 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:21:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/830024931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.907 2 DEBUG oslo_concurrency.processutils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.912 2 DEBUG nova.compute.provider_tree [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:21:43 np0005466031 nova_compute[235803]: 2025-10-02 12:21:43.961 2 DEBUG nova.scheduler.client.report [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:21:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:44.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.220 2 DEBUG oslo_concurrency.lockutils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.222 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.250 2 DEBUG nova.compute.manager [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.251 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.252 2 INFO nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Creating image(s)
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.273 2 DEBUG nova.storage.rbd_utils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.300 2 DEBUG nova.storage.rbd_utils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.325 2 DEBUG nova.storage.rbd_utils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.329 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.373 2 DEBUG oslo_concurrency.lockutils [None req-1f62644e-54e8-400c-9b36-290c050ba3a8 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 31.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.395 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.395 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.396 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.396 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.419 2 DEBUG nova.storage.rbd_utils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.422 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.791 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.792 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.792 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.904 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:44 np0005466031 nova_compute[235803]: 2025-10-02 12:21:44.957 2 DEBUG nova.policy [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'afacfeac9efc4e6fbb83ebe4fe9a8f38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:21:45 np0005466031 nova_compute[235803]: 2025-10-02 12:21:45.319 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.897s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:21:45 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/497092318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:21:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:45.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:45 np0005466031 nova_compute[235803]: 2025-10-02 12:21:45.376 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:45 np0005466031 nova_compute[235803]: 2025-10-02 12:21:45.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:21:45 np0005466031 nova_compute[235803]: 2025-10-02 12:21:45.381 2 DEBUG nova.storage.rbd_utils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] resizing rbd image 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:21:45 np0005466031 nova_compute[235803]: 2025-10-02 12:21:45.408 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:21:45 np0005466031 nova_compute[235803]: 2025-10-02 12:21:45.500 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:21:45 np0005466031 nova_compute[235803]: 2025-10-02 12:21:45.765 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:21:45 np0005466031 nova_compute[235803]: 2025-10-02 12:21:45.766 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:21:45 np0005466031 nova_compute[235803]: 2025-10-02 12:21:45.804 2 DEBUG nova.objects.instance [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'migration_context' on Instance uuid 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:21:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e190 e190: 3 total, 3 up, 3 in
Oct  2 08:21:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:46.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:46 np0005466031 nova_compute[235803]: 2025-10-02 12:21:46.032 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:21:46 np0005466031 nova_compute[235803]: 2025-10-02 12:21:46.033 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Ensure instance console log exists: /var/lib/nova/instances/2f6accd4-eaf1-4307-9c43-b732c3dd0b3d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:21:46 np0005466031 nova_compute[235803]: 2025-10-02 12:21:46.033 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:21:46 np0005466031 nova_compute[235803]: 2025-10-02 12:21:46.034 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:21:46 np0005466031 nova_compute[235803]: 2025-10-02 12:21:46.034 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:21:46 np0005466031 nova_compute[235803]: 2025-10-02 12:21:46.390 2 DEBUG nova.network.neutron [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Successfully created port: fea4f9b0-603f-4c16-9ab4-97ccfd3a720b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:21:46 np0005466031 nova_compute[235803]: 2025-10-02 12:21:46.766 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:21:46 np0005466031 nova_compute[235803]: 2025-10-02 12:21:46.874 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:21:46 np0005466031 nova_compute[235803]: 2025-10-02 12:21:46.874 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:21:46 np0005466031 nova_compute[235803]: 2025-10-02 12:21:46.874 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:21:46 np0005466031 nova_compute[235803]: 2025-10-02 12:21:46.874 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:21:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:47.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:47 np0005466031 nova_compute[235803]: 2025-10-02 12:21:47.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:21:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:48.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:49.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:49 np0005466031 nova_compute[235803]: 2025-10-02 12:21:49.454 2 DEBUG nova.network.neutron [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Successfully updated port: fea4f9b0-603f-4c16-9ab4-97ccfd3a720b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:21:49 np0005466031 nova_compute[235803]: 2025-10-02 12:21:49.477 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "refresh_cache-2f6accd4-eaf1-4307-9c43-b732c3dd0b3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:21:49 np0005466031 nova_compute[235803]: 2025-10-02 12:21:49.477 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquired lock "refresh_cache-2f6accd4-eaf1-4307-9c43-b732c3dd0b3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:21:49 np0005466031 nova_compute[235803]: 2025-10-02 12:21:49.478 2 DEBUG nova.network.neutron [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:21:49 np0005466031 nova_compute[235803]: 2025-10-02 12:21:49.572 2 DEBUG nova.compute.manager [req-78e08db3-12d7-4783-836a-1860323bafe9 req-3a299bf1-a2ca-4bbf-b0e2-8200136ca3bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Received event network-changed-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:21:49 np0005466031 nova_compute[235803]: 2025-10-02 12:21:49.572 2 DEBUG nova.compute.manager [req-78e08db3-12d7-4783-836a-1860323bafe9 req-3a299bf1-a2ca-4bbf-b0e2-8200136ca3bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Refreshing instance network info cache due to event network-changed-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:21:49 np0005466031 nova_compute[235803]: 2025-10-02 12:21:49.572 2 DEBUG oslo_concurrency.lockutils [req-78e08db3-12d7-4783-836a-1860323bafe9 req-3a299bf1-a2ca-4bbf-b0e2-8200136ca3bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2f6accd4-eaf1-4307-9c43-b732c3dd0b3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:21:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:49 np0005466031 nova_compute[235803]: 2025-10-02 12:21:49.869 2 DEBUG nova.network.neutron [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:21:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:50.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:50 np0005466031 nova_compute[235803]: 2025-10-02 12:21:50.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:21:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:51.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:52.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:52 np0005466031 nova_compute[235803]: 2025-10-02 12:21:52.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:21:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:53.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:54.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.209 2 DEBUG nova.network.neutron [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Updating instance_info_cache with network_info: [{"id": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "address": "fa:16:3e:4b:67:b9", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea4f9b0-60", "ovs_interfaceid": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.289 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Releasing lock "refresh_cache-2f6accd4-eaf1-4307-9c43-b732c3dd0b3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.290 2 DEBUG nova.compute.manager [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Instance network_info: |[{"id": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "address": "fa:16:3e:4b:67:b9", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea4f9b0-60", "ovs_interfaceid": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.290 2 DEBUG oslo_concurrency.lockutils [req-78e08db3-12d7-4783-836a-1860323bafe9 req-3a299bf1-a2ca-4bbf-b0e2-8200136ca3bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2f6accd4-eaf1-4307-9c43-b732c3dd0b3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.290 2 DEBUG nova.network.neutron [req-78e08db3-12d7-4783-836a-1860323bafe9 req-3a299bf1-a2ca-4bbf-b0e2-8200136ca3bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Refreshing network info cache for port fea4f9b0-603f-4c16-9ab4-97ccfd3a720b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.293 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Start _get_guest_xml network_info=[{"id": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "address": "fa:16:3e:4b:67:b9", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea4f9b0-60", "ovs_interfaceid": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.297 2 WARNING nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.301 2 DEBUG nova.virt.libvirt.host [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.301 2 DEBUG nova.virt.libvirt.host [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.304 2 DEBUG nova.virt.libvirt.host [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.305 2 DEBUG nova.virt.libvirt.host [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.306 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.306 2 DEBUG nova.virt.hardware [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.307 2 DEBUG nova.virt.hardware [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.307 2 DEBUG nova.virt.hardware [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.308 2 DEBUG nova.virt.hardware [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.308 2 DEBUG nova.virt.hardware [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.308 2 DEBUG nova.virt.hardware [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.309 2 DEBUG nova.virt.hardware [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.309 2 DEBUG nova.virt.hardware [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.309 2 DEBUG nova.virt.hardware [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.310 2 DEBUG nova.virt.hardware [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.310 2 DEBUG nova.virt.hardware [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.313 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:21:54 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2041755104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.835 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.862 2 DEBUG nova.storage.rbd_utils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:21:54 np0005466031 nova_compute[235803]: 2025-10-02 12:21:54.866 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:21:55 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3924843442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.307 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.309 2 DEBUG nova.virt.libvirt.vif [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2004388015',display_name='tempest-ImagesTestJSON-server-2004388015',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-2004388015',id=46,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-1t9f0bwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:21:43Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=2f6accd4-eaf1-4307-9c43-b732c3dd0b3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "address": "fa:16:3e:4b:67:b9", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea4f9b0-60", "ovs_interfaceid": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.310 2 DEBUG nova.network.os_vif_util [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "address": "fa:16:3e:4b:67:b9", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea4f9b0-60", "ovs_interfaceid": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.311 2 DEBUG nova.network.os_vif_util [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:67:b9,bridge_name='br-int',has_traffic_filtering=True,id=fea4f9b0-603f-4c16-9ab4-97ccfd3a720b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea4f9b0-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.312 2 DEBUG nova.objects.instance [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:55.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.377 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  <uuid>2f6accd4-eaf1-4307-9c43-b732c3dd0b3d</uuid>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  <name>instance-0000002e</name>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <nova:name>tempest-ImagesTestJSON-server-2004388015</nova:name>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:21:54</nova:creationTime>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <nova:user uuid="afacfeac9efc4e6fbb83ebe4fe9a8f38">tempest-ImagesTestJSON-1681256609-project-member</nova:user>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <nova:project uuid="d0ebb2827cb241e499606ce3a3c67d24">tempest-ImagesTestJSON-1681256609</nova:project>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <nova:port uuid="fea4f9b0-603f-4c16-9ab4-97ccfd3a720b">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <entry name="serial">2f6accd4-eaf1-4307-9c43-b732c3dd0b3d</entry>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <entry name="uuid">2f6accd4-eaf1-4307-9c43-b732c3dd0b3d</entry>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk.config">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:4b:67:b9"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <target dev="tapfea4f9b0-60"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/2f6accd4-eaf1-4307-9c43-b732c3dd0b3d/console.log" append="off"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:21:55 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:21:55 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:21:55 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:21:55 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.379 2 DEBUG nova.compute.manager [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Preparing to wait for external event network-vif-plugged-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.379 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.380 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.380 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.381 2 DEBUG nova.virt.libvirt.vif [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2004388015',display_name='tempest-ImagesTestJSON-server-2004388015',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-2004388015',id=46,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-1t9f0bwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:21:43Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=2f6accd4-eaf1-4307-9c43-b732c3dd0b3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "address": "fa:16:3e:4b:67:b9", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea4f9b0-60", "ovs_interfaceid": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.382 2 DEBUG nova.network.os_vif_util [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "address": "fa:16:3e:4b:67:b9", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea4f9b0-60", "ovs_interfaceid": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.382 2 DEBUG nova.network.os_vif_util [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:67:b9,bridge_name='br-int',has_traffic_filtering=True,id=fea4f9b0-603f-4c16-9ab4-97ccfd3a720b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea4f9b0-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.383 2 DEBUG os_vif [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:67:b9,bridge_name='br-int',has_traffic_filtering=True,id=fea4f9b0-603f-4c16-9ab4-97ccfd3a720b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea4f9b0-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.385 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.386 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfea4f9b0-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfea4f9b0-60, col_values=(('external_ids', {'iface-id': 'fea4f9b0-603f-4c16-9ab4-97ccfd3a720b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:67:b9', 'vm-uuid': '2f6accd4-eaf1-4307-9c43-b732c3dd0b3d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:55 np0005466031 NetworkManager[44907]: <info>  [1759407715.3953] manager: (tapfea4f9b0-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.401 2 INFO os_vif [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:67:b9,bridge_name='br-int',has_traffic_filtering=True,id=fea4f9b0-603f-4c16-9ab4-97ccfd3a720b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea4f9b0-60')#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.586 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.587 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.588 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No VIF found with MAC fa:16:3e:4b:67:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.589 2 INFO nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Using config drive#033[00m
Oct  2 08:21:55 np0005466031 nova_compute[235803]: 2025-10-02 12:21:55.625 2 DEBUG nova.storage.rbd_utils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:21:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:56.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:56 np0005466031 nova_compute[235803]: 2025-10-02 12:21:56.278 2 INFO nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Creating config drive at /var/lib/nova/instances/2f6accd4-eaf1-4307-9c43-b732c3dd0b3d/disk.config#033[00m
Oct  2 08:21:56 np0005466031 nova_compute[235803]: 2025-10-02 12:21:56.284 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f6accd4-eaf1-4307-9c43-b732c3dd0b3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2k5gzrhz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:56 np0005466031 nova_compute[235803]: 2025-10-02 12:21:56.417 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f6accd4-eaf1-4307-9c43-b732c3dd0b3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2k5gzrhz" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:56 np0005466031 nova_compute[235803]: 2025-10-02 12:21:56.446 2 DEBUG nova.storage.rbd_utils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:21:56 np0005466031 nova_compute[235803]: 2025-10-02 12:21:56.450 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f6accd4-eaf1-4307-9c43-b732c3dd0b3d/disk.config 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:56 np0005466031 nova_compute[235803]: 2025-10-02 12:21:56.699 2 DEBUG nova.network.neutron [req-78e08db3-12d7-4783-836a-1860323bafe9 req-3a299bf1-a2ca-4bbf-b0e2-8200136ca3bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Updated VIF entry in instance network info cache for port fea4f9b0-603f-4c16-9ab4-97ccfd3a720b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:21:56 np0005466031 nova_compute[235803]: 2025-10-02 12:21:56.700 2 DEBUG nova.network.neutron [req-78e08db3-12d7-4783-836a-1860323bafe9 req-3a299bf1-a2ca-4bbf-b0e2-8200136ca3bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Updating instance_info_cache with network_info: [{"id": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "address": "fa:16:3e:4b:67:b9", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea4f9b0-60", "ovs_interfaceid": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:21:56 np0005466031 nova_compute[235803]: 2025-10-02 12:21:56.729 2 DEBUG oslo_concurrency.lockutils [req-78e08db3-12d7-4783-836a-1860323bafe9 req-3a299bf1-a2ca-4bbf-b0e2-8200136ca3bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2f6accd4-eaf1-4307-9c43-b732c3dd0b3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:21:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:57.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:57 np0005466031 nova_compute[235803]: 2025-10-02 12:21:57.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:58.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:58 np0005466031 nova_compute[235803]: 2025-10-02 12:21:58.375 2 DEBUG oslo_concurrency.processutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f6accd4-eaf1-4307-9c43-b732c3dd0b3d/disk.config 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.925s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:58 np0005466031 nova_compute[235803]: 2025-10-02 12:21:58.376 2 INFO nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Deleting local config drive /var/lib/nova/instances/2f6accd4-eaf1-4307-9c43-b732c3dd0b3d/disk.config because it was imported into RBD.#033[00m
Oct  2 08:21:58 np0005466031 kernel: tapfea4f9b0-60: entered promiscuous mode
Oct  2 08:21:58 np0005466031 NetworkManager[44907]: <info>  [1759407718.4219] manager: (tapfea4f9b0-60): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct  2 08:21:58 np0005466031 ovn_controller[132413]: 2025-10-02T12:21:58Z|00101|binding|INFO|Claiming lport fea4f9b0-603f-4c16-9ab4-97ccfd3a720b for this chassis.
Oct  2 08:21:58 np0005466031 ovn_controller[132413]: 2025-10-02T12:21:58Z|00102|binding|INFO|fea4f9b0-603f-4c16-9ab4-97ccfd3a720b: Claiming fa:16:3e:4b:67:b9 10.100.0.11
Oct  2 08:21:58 np0005466031 nova_compute[235803]: 2025-10-02 12:21:58.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:58 np0005466031 nova_compute[235803]: 2025-10-02 12:21:58.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:58 np0005466031 systemd-udevd[255657]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:21:58 np0005466031 systemd-machined[192227]: New machine qemu-18-instance-0000002e.
Oct  2 08:21:58 np0005466031 NetworkManager[44907]: <info>  [1759407718.4631] device (tapfea4f9b0-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:21:58 np0005466031 NetworkManager[44907]: <info>  [1759407718.4642] device (tapfea4f9b0-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:21:58 np0005466031 systemd[1]: Started Virtual Machine qemu-18-instance-0000002e.
Oct  2 08:21:58 np0005466031 ovn_controller[132413]: 2025-10-02T12:21:58Z|00103|binding|INFO|Setting lport fea4f9b0-603f-4c16-9ab4-97ccfd3a720b ovn-installed in OVS
Oct  2 08:21:58 np0005466031 nova_compute[235803]: 2025-10-02 12:21:58.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:58 np0005466031 ovn_controller[132413]: 2025-10-02T12:21:58Z|00104|binding|INFO|Setting lport fea4f9b0-603f-4c16-9ab4-97ccfd3a720b up in Southbound
Oct  2 08:21:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:58.953 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:67:b9 10.100.0.11'], port_security=['fa:16:3e:4b:67:b9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2f6accd4-eaf1-4307-9c43-b732c3dd0b3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=fea4f9b0-603f-4c16-9ab4-97ccfd3a720b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:21:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:58.954 141898 INFO neutron.agent.ovn.metadata.agent [-] Port fea4f9b0-603f-4c16-9ab4-97ccfd3a720b in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 bound to our chassis#033[00m
Oct  2 08:21:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:58.955 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d68ff9e0-aff2-4eda-8590-74da7cfc5671#033[00m
Oct  2 08:21:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:58.973 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7123d2ad-369a-42c6-9a93-fa06505afbc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:58.974 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd68ff9e0-a1 in ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:21:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:58.975 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd68ff9e0-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:21:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:58.976 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d720491c-38e2-4dcd-9a71-c6ab6cc42668]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:58.977 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[383d4f6f-6446-4130-bbbe-d3bf57649516]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.001 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[0dbb4b00-e0f6-4713-b0d9-86bc89cdf542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.032 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[500b0990-416f-40e0-a31c-04cdc95bc0b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.075 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d464f7-a642-40da-8f1f-2c57fd98b614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.083 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8e6f46-0139-4328-83a0-ef48167b14fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 NetworkManager[44907]: <info>  [1759407719.0846] manager: (tapd68ff9e0-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.121 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[74e6467e-7086-4902-b5d6-876e9ec2fa48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.125 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[bcee85bd-2bff-4767-b04a-90cf73be74e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 NetworkManager[44907]: <info>  [1759407719.1614] device (tapd68ff9e0-a0): carrier: link connected
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.169 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a364e3cc-ca7e-43cd-adf5-e5d601cffe68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.187 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ee13f9ae-02fd-44dc-848c-d4f3eb9c8485]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557475, 'reachable_time': 40392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255708, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.206 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[993bbafc-dc79-4ace-82c0-43e560f9e33f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:d99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 557475, 'tstamp': 557475}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255716, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.223 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0c069e50-ca1a-41d1-8c02-81dfc060b057]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557475, 'reachable_time': 40392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255726, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.270 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7501511b-1766-4261-a39f-a2782f09b5fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.344 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[da8c078d-5824-4960-a58b-11d55ae6fc59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.346 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.346 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.347 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd68ff9e0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:59 np0005466031 kernel: tapd68ff9e0-a0: entered promiscuous mode
Oct  2 08:21:59 np0005466031 NetworkManager[44907]: <info>  [1759407719.3494] manager: (tapd68ff9e0-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.353 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd68ff9e0-a0, col_values=(('external_ids', {'iface-id': 'c0382cb4-7e26-44bc-8951-80e73f21067a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:59 np0005466031 ovn_controller[132413]: 2025-10-02T12:21:59Z|00105|binding|INFO|Releasing lport c0382cb4-7e26-44bc-8951-80e73f21067a from this chassis (sb_readonly=0)
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.356 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.357 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9b6925-7141-4538-a597-cf9ecedd03f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.359 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:21:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:21:59.360 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'env', 'PROCESS_TAG=haproxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d68ff9e0-aff2-4eda-8590-74da7cfc5671.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.369 2 DEBUG nova.compute.manager [req-aaf85e59-2cf2-4119-8f2e-25fec17c2e82 req-495272d2-89cb-44c8-9ae6-4cd5e49be81f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Received event network-vif-plugged-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.370 2 DEBUG oslo_concurrency.lockutils [req-aaf85e59-2cf2-4119-8f2e-25fec17c2e82 req-495272d2-89cb-44c8-9ae6-4cd5e49be81f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.371 2 DEBUG oslo_concurrency.lockutils [req-aaf85e59-2cf2-4119-8f2e-25fec17c2e82 req-495272d2-89cb-44c8-9ae6-4cd5e49be81f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.371 2 DEBUG oslo_concurrency.lockutils [req-aaf85e59-2cf2-4119-8f2e-25fec17c2e82 req-495272d2-89cb-44c8-9ae6-4cd5e49be81f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.371 2 DEBUG nova.compute.manager [req-aaf85e59-2cf2-4119-8f2e-25fec17c2e82 req-495272d2-89cb-44c8-9ae6-4cd5e49be81f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Processing event network-vif-plugged-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:21:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:21:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:59.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:21:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:59 np0005466031 podman[255764]: 2025-10-02 12:21:59.728640507 +0000 UTC m=+0.045603808 container create f21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.760 2 DEBUG nova.compute.manager [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:21:59 np0005466031 systemd[1]: Started libpod-conmon-f21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c.scope.
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.763 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407719.7622354, 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.763 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] VM Started (Lifecycle Event)#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.765 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.781 2 INFO nova.virt.libvirt.driver [-] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Instance spawned successfully.#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.781 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:21:59 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:21:59 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b62eccbdd26ccb911d43e22c212ba40485695d0b2342ee9bc55bfea8a75c0ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:21:59 np0005466031 podman[255764]: 2025-10-02 12:21:59.705907521 +0000 UTC m=+0.022870852 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.804 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:59 np0005466031 podman[255764]: 2025-10-02 12:21:59.812701965 +0000 UTC m=+0.129665306 container init f21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.818 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:21:59 np0005466031 podman[255764]: 2025-10-02 12:21:59.820575603 +0000 UTC m=+0.137538904 container start f21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.824 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.825 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.825 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.826 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.826 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.826 2 DEBUG nova.virt.libvirt.driver [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:21:59 np0005466031 podman[255777]: 2025-10-02 12:21:59.833221678 +0000 UTC m=+0.072467514 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 08:21:59 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[255785]: [NOTICE]   (255804) : New worker (255812) forked
Oct  2 08:21:59 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[255785]: [NOTICE]   (255804) : Loading success.
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.875 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.875 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407719.7623582, 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.875 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:21:59 np0005466031 podman[255793]: 2025-10-02 12:21:59.899011608 +0000 UTC m=+0.087552009 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.924 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.927 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407719.7654421, 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.927 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.972 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:59 np0005466031 nova_compute[235803]: 2025-10-02 12:21:59.974 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:22:00 np0005466031 nova_compute[235803]: 2025-10-02 12:22:00.000 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:22:00 np0005466031 nova_compute[235803]: 2025-10-02 12:22:00.007 2 INFO nova.compute.manager [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Took 15.76 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:22:00 np0005466031 nova_compute[235803]: 2025-10-02 12:22:00.008 2 DEBUG nova.compute.manager [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:22:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:00.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:22:00 np0005466031 nova_compute[235803]: 2025-10-02 12:22:00.084 2 INFO nova.compute.manager [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Took 19.71 seconds to build instance.#033[00m
Oct  2 08:22:00 np0005466031 nova_compute[235803]: 2025-10-02 12:22:00.103 2 DEBUG oslo_concurrency.lockutils [None req-8026dc03-78bc-4bc4-a983-eb9ac3629e67 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:00 np0005466031 nova_compute[235803]: 2025-10-02 12:22:00.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.122 2 DEBUG nova.objects.instance [None req-ae24c48b-c64b-42db-8c21-3e9fa06dd78d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.143 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407721.1437306, 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.144 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.168 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.173 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.197 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  2 08:22:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:22:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:01.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:22:01 np0005466031 kernel: tapfea4f9b0-60 (unregistering): left promiscuous mode
Oct  2 08:22:01 np0005466031 NetworkManager[44907]: <info>  [1759407721.4050] device (tapfea4f9b0-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:22:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:22:01Z|00106|binding|INFO|Releasing lport fea4f9b0-603f-4c16-9ab4-97ccfd3a720b from this chassis (sb_readonly=0)
Oct  2 08:22:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:22:01Z|00107|binding|INFO|Setting lport fea4f9b0-603f-4c16-9ab4-97ccfd3a720b down in Southbound
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:22:01Z|00108|binding|INFO|Removing iface tapfea4f9b0-60 ovn-installed in OVS
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.424 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:67:b9 10.100.0.11'], port_security=['fa:16:3e:4b:67:b9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2f6accd4-eaf1-4307-9c43-b732c3dd0b3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=fea4f9b0-603f-4c16-9ab4-97ccfd3a720b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.426 141898 INFO neutron.agent.ovn.metadata.agent [-] Port fea4f9b0-603f-4c16-9ab4-97ccfd3a720b in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis#033[00m
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.427 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.428 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7db85544-11d4-4534-a345-0322d23772d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.428 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace which is not needed anymore#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.465 2 DEBUG nova.compute.manager [req-a2355ee1-6c13-46a0-963c-7d71cec5d448 req-e5e29d3e-40a0-4e68-9998-9cfd7a82eb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Received event network-vif-plugged-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.465 2 DEBUG oslo_concurrency.lockutils [req-a2355ee1-6c13-46a0-963c-7d71cec5d448 req-e5e29d3e-40a0-4e68-9998-9cfd7a82eb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.466 2 DEBUG oslo_concurrency.lockutils [req-a2355ee1-6c13-46a0-963c-7d71cec5d448 req-e5e29d3e-40a0-4e68-9998-9cfd7a82eb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.466 2 DEBUG oslo_concurrency.lockutils [req-a2355ee1-6c13-46a0-963c-7d71cec5d448 req-e5e29d3e-40a0-4e68-9998-9cfd7a82eb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.466 2 DEBUG nova.compute.manager [req-a2355ee1-6c13-46a0-963c-7d71cec5d448 req-e5e29d3e-40a0-4e68-9998-9cfd7a82eb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] No waiting events found dispatching network-vif-plugged-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.466 2 WARNING nova.compute.manager [req-a2355ee1-6c13-46a0-963c-7d71cec5d448 req-e5e29d3e-40a0-4e68-9998-9cfd7a82eb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Received unexpected event network-vif-plugged-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b for instance with vm_state active and task_state suspending.#033[00m
Oct  2 08:22:01 np0005466031 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Oct  2 08:22:01 np0005466031 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000002e.scope: Consumed 2.698s CPU time.
Oct  2 08:22:01 np0005466031 systemd-machined[192227]: Machine qemu-18-instance-0000002e terminated.
Oct  2 08:22:01 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[255785]: [NOTICE]   (255804) : haproxy version is 2.8.14-c23fe91
Oct  2 08:22:01 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[255785]: [NOTICE]   (255804) : path to executable is /usr/sbin/haproxy
Oct  2 08:22:01 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[255785]: [ALERT]    (255804) : Current worker (255812) exited with code 143 (Terminated)
Oct  2 08:22:01 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[255785]: [WARNING]  (255804) : All workers exited. Exiting... (0)
Oct  2 08:22:01 np0005466031 systemd[1]: libpod-f21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c.scope: Deactivated successfully.
Oct  2 08:22:01 np0005466031 conmon[255785]: conmon f21d763962f3d3cd7d98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c.scope/container/memory.events
Oct  2 08:22:01 np0005466031 podman[255863]: 2025-10-02 12:22:01.561531889 +0000 UTC m=+0.041899401 container died f21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:22:01 np0005466031 NetworkManager[44907]: <info>  [1759407721.5749] manager: (tapfea4f9b0-60): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Oct  2 08:22:01 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c-userdata-shm.mount: Deactivated successfully.
Oct  2 08:22:01 np0005466031 systemd[1]: var-lib-containers-storage-overlay-3b62eccbdd26ccb911d43e22c212ba40485695d0b2342ee9bc55bfea8a75c0ac-merged.mount: Deactivated successfully.
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.592 2 DEBUG nova.compute.manager [None req-ae24c48b-c64b-42db-8c21-3e9fa06dd78d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:01 np0005466031 podman[255863]: 2025-10-02 12:22:01.602514113 +0000 UTC m=+0.082881615 container cleanup f21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:22:01 np0005466031 systemd[1]: libpod-conmon-f21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c.scope: Deactivated successfully.
Oct  2 08:22:01 np0005466031 podman[255904]: 2025-10-02 12:22:01.659675194 +0000 UTC m=+0.034934300 container remove f21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.665 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[67550e11-f684-4638-bb91-ec907589f49a]: (4, ('Thu Oct  2 12:22:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (f21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c)\nf21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c\nThu Oct  2 12:22:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (f21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c)\nf21d763962f3d3cd7d98df01889bf81841fd1d242c66f647cbb1ee097ca0de6c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.666 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f061b8-019e-4dc2-ae85-b4799050eea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.667 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:01 np0005466031 kernel: tapd68ff9e0-a0: left promiscuous mode
Oct  2 08:22:01 np0005466031 nova_compute[235803]: 2025-10-02 12:22:01.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.689 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b69eb906-519f-49d9-bdb7-2a740d8fc6ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.724 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cec761da-e44e-4de3-8fc0-b1c442ee2a31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.726 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d774287f-aec0-486e-8495-31011a118f53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.740 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6e03b2b7-d10b-42f3-8fd5-d2a825bcc98e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557465, 'reachable_time': 33542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255923, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.742 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:22:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:01.742 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[cf11689a-48ce-4424-ab96-95e524851c8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:01 np0005466031 systemd[1]: run-netns-ovnmeta\x2dd68ff9e0\x2daff2\x2d4eda\x2d8590\x2d74da7cfc5671.mount: Deactivated successfully.
Oct  2 08:22:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:22:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:22:02 np0005466031 nova_compute[235803]: 2025-10-02 12:22:02.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:03.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:03 np0005466031 nova_compute[235803]: 2025-10-02 12:22:03.676 2 DEBUG nova.compute.manager [req-ccc2b342-f455-4870-957d-34a44b1ea9b1 req-93b0f080-04b5-4c8d-a746-d30bd92ba501 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Received event network-vif-unplugged-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:03 np0005466031 nova_compute[235803]: 2025-10-02 12:22:03.677 2 DEBUG oslo_concurrency.lockutils [req-ccc2b342-f455-4870-957d-34a44b1ea9b1 req-93b0f080-04b5-4c8d-a746-d30bd92ba501 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:03 np0005466031 nova_compute[235803]: 2025-10-02 12:22:03.677 2 DEBUG oslo_concurrency.lockutils [req-ccc2b342-f455-4870-957d-34a44b1ea9b1 req-93b0f080-04b5-4c8d-a746-d30bd92ba501 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:03 np0005466031 nova_compute[235803]: 2025-10-02 12:22:03.677 2 DEBUG oslo_concurrency.lockutils [req-ccc2b342-f455-4870-957d-34a44b1ea9b1 req-93b0f080-04b5-4c8d-a746-d30bd92ba501 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:03 np0005466031 nova_compute[235803]: 2025-10-02 12:22:03.678 2 DEBUG nova.compute.manager [req-ccc2b342-f455-4870-957d-34a44b1ea9b1 req-93b0f080-04b5-4c8d-a746-d30bd92ba501 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] No waiting events found dispatching network-vif-unplugged-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:22:03 np0005466031 nova_compute[235803]: 2025-10-02 12:22:03.678 2 WARNING nova.compute.manager [req-ccc2b342-f455-4870-957d-34a44b1ea9b1 req-93b0f080-04b5-4c8d-a746-d30bd92ba501 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Received unexpected event network-vif-unplugged-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b for instance with vm_state suspended and task_state None.#033[00m
Oct  2 08:22:03 np0005466031 nova_compute[235803]: 2025-10-02 12:22:03.678 2 DEBUG nova.compute.manager [req-ccc2b342-f455-4870-957d-34a44b1ea9b1 req-93b0f080-04b5-4c8d-a746-d30bd92ba501 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Received event network-vif-plugged-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:03 np0005466031 nova_compute[235803]: 2025-10-02 12:22:03.678 2 DEBUG oslo_concurrency.lockutils [req-ccc2b342-f455-4870-957d-34a44b1ea9b1 req-93b0f080-04b5-4c8d-a746-d30bd92ba501 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:03 np0005466031 nova_compute[235803]: 2025-10-02 12:22:03.678 2 DEBUG oslo_concurrency.lockutils [req-ccc2b342-f455-4870-957d-34a44b1ea9b1 req-93b0f080-04b5-4c8d-a746-d30bd92ba501 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:03 np0005466031 nova_compute[235803]: 2025-10-02 12:22:03.679 2 DEBUG oslo_concurrency.lockutils [req-ccc2b342-f455-4870-957d-34a44b1ea9b1 req-93b0f080-04b5-4c8d-a746-d30bd92ba501 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:03 np0005466031 nova_compute[235803]: 2025-10-02 12:22:03.679 2 DEBUG nova.compute.manager [req-ccc2b342-f455-4870-957d-34a44b1ea9b1 req-93b0f080-04b5-4c8d-a746-d30bd92ba501 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] No waiting events found dispatching network-vif-plugged-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:22:03 np0005466031 nova_compute[235803]: 2025-10-02 12:22:03.679 2 WARNING nova.compute.manager [req-ccc2b342-f455-4870-957d-34a44b1ea9b1 req-93b0f080-04b5-4c8d-a746-d30bd92ba501 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Received unexpected event network-vif-plugged-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b for instance with vm_state suspended and task_state None.#033[00m
Oct  2 08:22:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:04.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:04 np0005466031 nova_compute[235803]: 2025-10-02 12:22:04.659 2 DEBUG nova.compute.manager [None req-cd2e9a65-9895-4566-afd7-2ad652bca5a5 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:04 np0005466031 nova_compute[235803]: 2025-10-02 12:22:04.718 2 INFO nova.compute.manager [None req-cd2e9a65-9895-4566-afd7-2ad652bca5a5 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] instance snapshotting#033[00m
Oct  2 08:22:04 np0005466031 nova_compute[235803]: 2025-10-02 12:22:04.719 2 WARNING nova.compute.manager [None req-cd2e9a65-9895-4566-afd7-2ad652bca5a5 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Oct  2 08:22:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e191 e191: 3 total, 3 up, 3 in
Oct  2 08:22:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:22:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2952618562' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:22:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:22:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2952618562' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:22:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:05.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:05 np0005466031 nova_compute[235803]: 2025-10-02 12:22:05.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:05 np0005466031 nova_compute[235803]: 2025-10-02 12:22:05.660 2 INFO nova.virt.libvirt.driver [None req-cd2e9a65-9895-4566-afd7-2ad652bca5a5 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Beginning cold snapshot process#033[00m
Oct  2 08:22:05 np0005466031 nova_compute[235803]: 2025-10-02 12:22:05.840 2 DEBUG nova.virt.libvirt.imagebackend [None req-cd2e9a65-9895-4566-afd7-2ad652bca5a5 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:22:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:06.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:06 np0005466031 nova_compute[235803]: 2025-10-02 12:22:06.131 2 DEBUG nova.storage.rbd_utils [None req-cd2e9a65-9895-4566-afd7-2ad652bca5a5 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(bf4a30fa047548be91c28b3700860dad) on rbd image(2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:22:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:22:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:07.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:22:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e192 e192: 3 total, 3 up, 3 in
Oct  2 08:22:07 np0005466031 nova_compute[235803]: 2025-10-02 12:22:07.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:08.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:08 np0005466031 nova_compute[235803]: 2025-10-02 12:22:08.554 2 DEBUG nova.storage.rbd_utils [None req-cd2e9a65-9895-4566-afd7-2ad652bca5a5 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] cloning vms/2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk@bf4a30fa047548be91c28b3700860dad to images/5e6784e9-638e-4f69-b9c1-8a81a55dc0ed clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:22:08 np0005466031 podman[255980]: 2025-10-02 12:22:08.631070586 +0000 UTC m=+0.055019501 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct  2 08:22:09 np0005466031 nova_compute[235803]: 2025-10-02 12:22:09.052 2 DEBUG nova.storage.rbd_utils [None req-cd2e9a65-9895-4566-afd7-2ad652bca5a5 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] flattening images/5e6784e9-638e-4f69-b9c1-8a81a55dc0ed flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:22:09 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct  2 08:22:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:09.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:22:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:10.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:22:10 np0005466031 nova_compute[235803]: 2025-10-02 12:22:10.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:10 np0005466031 nova_compute[235803]: 2025-10-02 12:22:10.767 2 DEBUG nova.storage.rbd_utils [None req-cd2e9a65-9895-4566-afd7-2ad652bca5a5 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] removing snapshot(bf4a30fa047548be91c28b3700860dad) on rbd image(2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:22:11 np0005466031 podman[256099]: 2025-10-02 12:22:11.08589153 +0000 UTC m=+0.081005450 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:22:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:11.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e193 e193: 3 total, 3 up, 3 in
Oct  2 08:22:11 np0005466031 nova_compute[235803]: 2025-10-02 12:22:11.966 2 DEBUG nova.storage.rbd_utils [None req-cd2e9a65-9895-4566-afd7-2ad652bca5a5 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(snap) on rbd image(5e6784e9-638e-4f69-b9c1-8a81a55dc0ed) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:22:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:12.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:22:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2804927787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:22:12 np0005466031 nova_compute[235803]: 2025-10-02 12:22:12.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e194 e194: 3 total, 3 up, 3 in
Oct  2 08:22:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:13.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:14.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:15.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:15 np0005466031 nova_compute[235803]: 2025-10-02 12:22:15.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:15 np0005466031 nova_compute[235803]: 2025-10-02 12:22:15.476 2 INFO nova.virt.libvirt.driver [None req-cd2e9a65-9895-4566-afd7-2ad652bca5a5 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Snapshot image upload complete#033[00m
Oct  2 08:22:15 np0005466031 nova_compute[235803]: 2025-10-02 12:22:15.476 2 INFO nova.compute.manager [None req-cd2e9a65-9895-4566-afd7-2ad652bca5a5 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Took 10.76 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:22:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e195 e195: 3 total, 3 up, 3 in
Oct  2 08:22:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:16.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:16 np0005466031 nova_compute[235803]: 2025-10-02 12:22:16.594 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407721.590959, 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:16 np0005466031 nova_compute[235803]: 2025-10-02 12:22:16.594 2 INFO nova.compute.manager [-] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:22:16 np0005466031 nova_compute[235803]: 2025-10-02 12:22:16.631 2 DEBUG nova.compute.manager [None req-d3bdd222-f177-432b-8a0b-e7de79004982 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:16 np0005466031 nova_compute[235803]: 2025-10-02 12:22:16.633 2 DEBUG nova.compute.manager [None req-d3bdd222-f177-432b-8a0b-e7de79004982 - - - - - -] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: suspended, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:22:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:17.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:17 np0005466031 nova_compute[235803]: 2025-10-02 12:22:17.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:18.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:18.614 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:22:18 np0005466031 nova_compute[235803]: 2025-10-02 12:22:18.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:18.615 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:22:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e196 e196: 3 total, 3 up, 3 in
Oct  2 08:22:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:19.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:20.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:20 np0005466031 nova_compute[235803]: 2025-10-02 12:22:20.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:20.617 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e197 e197: 3 total, 3 up, 3 in
Oct  2 08:22:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:21.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:22.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:22 np0005466031 nova_compute[235803]: 2025-10-02 12:22:22.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.128 2 DEBUG oslo_concurrency.lockutils [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.129 2 DEBUG oslo_concurrency.lockutils [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.129 2 DEBUG oslo_concurrency.lockutils [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.129 2 DEBUG oslo_concurrency.lockutils [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.129 2 DEBUG oslo_concurrency.lockutils [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.130 2 INFO nova.compute.manager [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Terminating instance#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.131 2 DEBUG nova.compute.manager [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.137 2 INFO nova.virt.libvirt.driver [-] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Instance destroyed successfully.#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.137 2 DEBUG nova.objects.instance [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'resources' on Instance uuid 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.242 2 DEBUG nova.virt.libvirt.vif [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2004388015',display_name='tempest-ImagesTestJSON-server-2004388015',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-2004388015',id=46,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-1t9f0bwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:15Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=2f6accd4-eaf1-4307-9c43-b732c3dd0b3d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "address": "fa:16:3e:4b:67:b9", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea4f9b0-60", "ovs_interfaceid": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.243 2 DEBUG nova.network.os_vif_util [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "address": "fa:16:3e:4b:67:b9", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea4f9b0-60", "ovs_interfaceid": "fea4f9b0-603f-4c16-9ab4-97ccfd3a720b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.243 2 DEBUG nova.network.os_vif_util [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:67:b9,bridge_name='br-int',has_traffic_filtering=True,id=fea4f9b0-603f-4c16-9ab4-97ccfd3a720b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea4f9b0-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.243 2 DEBUG os_vif [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:67:b9,bridge_name='br-int',has_traffic_filtering=True,id=fea4f9b0-603f-4c16-9ab4-97ccfd3a720b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea4f9b0-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.245 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfea4f9b0-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:23 np0005466031 nova_compute[235803]: 2025-10-02 12:22:23.252 2 INFO os_vif [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:67:b9,bridge_name='br-int',has_traffic_filtering=True,id=fea4f9b0-603f-4c16-9ab4-97ccfd3a720b,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea4f9b0-60')#033[00m
Oct  2 08:22:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:23.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:24.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:24 np0005466031 nova_compute[235803]: 2025-10-02 12:22:24.805 2 INFO nova.virt.libvirt.driver [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Deleting instance files /var/lib/nova/instances/2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_del#033[00m
Oct  2 08:22:24 np0005466031 nova_compute[235803]: 2025-10-02 12:22:24.806 2 INFO nova.virt.libvirt.driver [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Deletion of /var/lib/nova/instances/2f6accd4-eaf1-4307-9c43-b732c3dd0b3d_del complete#033[00m
Oct  2 08:22:24 np0005466031 nova_compute[235803]: 2025-10-02 12:22:24.957 2 INFO nova.compute.manager [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Took 1.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:22:24 np0005466031 nova_compute[235803]: 2025-10-02 12:22:24.958 2 DEBUG oslo.service.loopingcall [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:22:24 np0005466031 nova_compute[235803]: 2025-10-02 12:22:24.958 2 DEBUG nova.compute.manager [-] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:22:24 np0005466031 nova_compute[235803]: 2025-10-02 12:22:24.959 2 DEBUG nova.network.neutron [-] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:22:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:25.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:25.829 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:25.830 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:22:25.830 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e198 e198: 3 total, 3 up, 3 in
Oct  2 08:22:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:26.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:26 np0005466031 nova_compute[235803]: 2025-10-02 12:22:26.414 2 DEBUG nova.network.neutron [-] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:26 np0005466031 nova_compute[235803]: 2025-10-02 12:22:26.451 2 INFO nova.compute.manager [-] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Took 1.49 seconds to deallocate network for instance.#033[00m
Oct  2 08:22:26 np0005466031 nova_compute[235803]: 2025-10-02 12:22:26.543 2 DEBUG oslo_concurrency.lockutils [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:26 np0005466031 nova_compute[235803]: 2025-10-02 12:22:26.544 2 DEBUG oslo_concurrency.lockutils [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:26 np0005466031 nova_compute[235803]: 2025-10-02 12:22:26.603 2 DEBUG oslo_concurrency.processutils [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1984249346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:27 np0005466031 nova_compute[235803]: 2025-10-02 12:22:27.021 2 DEBUG oslo_concurrency.processutils [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:27 np0005466031 nova_compute[235803]: 2025-10-02 12:22:27.026 2 DEBUG nova.compute.provider_tree [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:22:27 np0005466031 nova_compute[235803]: 2025-10-02 12:22:27.056 2 DEBUG nova.scheduler.client.report [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:22:27 np0005466031 nova_compute[235803]: 2025-10-02 12:22:27.141 2 DEBUG oslo_concurrency.lockutils [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:27 np0005466031 nova_compute[235803]: 2025-10-02 12:22:27.308 2 INFO nova.scheduler.client.report [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Deleted allocations for instance 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d#033[00m
Oct  2 08:22:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:22:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:27.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:22:27 np0005466031 nova_compute[235803]: 2025-10-02 12:22:27.429 2 DEBUG oslo_concurrency.lockutils [None req-47f94733-4fbc-4a30-96b0-b55f3356ec36 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "2f6accd4-eaf1-4307-9c43-b732c3dd0b3d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:27 np0005466031 nova_compute[235803]: 2025-10-02 12:22:27.948 2 DEBUG nova.compute.manager [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2f6accd4-eaf1-4307-9c43-b732c3dd0b3d] Received event network-vif-deleted-fea4f9b0-603f-4c16-9ab4-97ccfd3a720b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:27 np0005466031 nova_compute[235803]: 2025-10-02 12:22:27.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:28.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:28 np0005466031 nova_compute[235803]: 2025-10-02 12:22:28.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:29.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:22:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2442339680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:22:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:30.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:30 np0005466031 podman[256213]: 2025-10-02 12:22:30.645377219 +0000 UTC m=+0.076544532 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:22:30 np0005466031 podman[256214]: 2025-10-02 12:22:30.680286927 +0000 UTC m=+0.099563476 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:22:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:22:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:31.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:22:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:32.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:32 np0005466031 nova_compute[235803]: 2025-10-02 12:22:32.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:33 np0005466031 nova_compute[235803]: 2025-10-02 12:22:33.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:33.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:34.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:34 np0005466031 nova_compute[235803]: 2025-10-02 12:22:34.739 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:35.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:22:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:36.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:22:36 np0005466031 nova_compute[235803]: 2025-10-02 12:22:36.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:22:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:37.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:22:37 np0005466031 nova_compute[235803]: 2025-10-02 12:22:37.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:38.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:38 np0005466031 nova_compute[235803]: 2025-10-02 12:22:38.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:38 np0005466031 nova_compute[235803]: 2025-10-02 12:22:38.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:39.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:39 np0005466031 podman[256310]: 2025-10-02 12:22:39.627480369 +0000 UTC m=+0.059530110 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:22:39 np0005466031 nova_compute[235803]: 2025-10-02 12:22:39.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:39 np0005466031 nova_compute[235803]: 2025-10-02 12:22:39.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:22:39 np0005466031 nova_compute[235803]: 2025-10-02 12:22:39.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:22:39 np0005466031 nova_compute[235803]: 2025-10-02 12:22:39.674 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:22:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:40.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:40 np0005466031 nova_compute[235803]: 2025-10-02 12:22:40.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:40 np0005466031 nova_compute[235803]: 2025-10-02 12:22:40.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:40 np0005466031 nova_compute[235803]: 2025-10-02 12:22:40.700 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:40 np0005466031 nova_compute[235803]: 2025-10-02 12:22:40.700 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:40 np0005466031 nova_compute[235803]: 2025-10-02 12:22:40.700 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:40 np0005466031 nova_compute[235803]: 2025-10-02 12:22:40.701 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:22:40 np0005466031 nova_compute[235803]: 2025-10-02 12:22:40.701 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:41 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3528556499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:41 np0005466031 nova_compute[235803]: 2025-10-02 12:22:41.108 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:41 np0005466031 podman[256457]: 2025-10-02 12:22:41.219463353 +0000 UTC m=+0.071747743 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct  2 08:22:41 np0005466031 nova_compute[235803]: 2025-10-02 12:22:41.279 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:22:41 np0005466031 nova_compute[235803]: 2025-10-02 12:22:41.280 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4695MB free_disk=20.94662857055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:22:41 np0005466031 nova_compute[235803]: 2025-10-02 12:22:41.280 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:41 np0005466031 nova_compute[235803]: 2025-10-02 12:22:41.280 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:22:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:41.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:22:41 np0005466031 nova_compute[235803]: 2025-10-02 12:22:41.445 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:22:41 np0005466031 nova_compute[235803]: 2025-10-02 12:22:41.446 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:22:41 np0005466031 nova_compute[235803]: 2025-10-02 12:22:41.472 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:41 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3447753627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:41 np0005466031 nova_compute[235803]: 2025-10-02 12:22:41.900 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:41 np0005466031 nova_compute[235803]: 2025-10-02 12:22:41.910 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:22:41 np0005466031 nova_compute[235803]: 2025-10-02 12:22:41.956 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:22:41 np0005466031 nova_compute[235803]: 2025-10-02 12:22:41.991 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:22:41 np0005466031 nova_compute[235803]: 2025-10-02 12:22:41.992 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:42.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:22:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:22:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:22:42 np0005466031 nova_compute[235803]: 2025-10-02 12:22:42.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:43 np0005466031 nova_compute[235803]: 2025-10-02 12:22:43.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:22:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:43.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:22:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:44.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:44 np0005466031 nova_compute[235803]: 2025-10-02 12:22:44.992 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:44 np0005466031 nova_compute[235803]: 2025-10-02 12:22:44.993 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:44 np0005466031 nova_compute[235803]: 2025-10-02 12:22:44.993 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:22:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:45.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:46.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:47.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:47 np0005466031 nova_compute[235803]: 2025-10-02 12:22:47.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:47 np0005466031 nova_compute[235803]: 2025-10-02 12:22:47.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:48.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:48 np0005466031 nova_compute[235803]: 2025-10-02 12:22:48.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:49.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:22:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:50.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:22:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:22:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:51.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:22:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:52.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:52 np0005466031 nova_compute[235803]: 2025-10-02 12:22:52.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:53 np0005466031 nova_compute[235803]: 2025-10-02 12:22:53.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:53.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:22:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:54.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:22:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e199 e199: 3 total, 3 up, 3 in
Oct  2 08:22:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:55.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:56.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:57.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:22:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:22:57 np0005466031 nova_compute[235803]: 2025-10-02 12:22:57.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:58.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:58 np0005466031 nova_compute[235803]: 2025-10-02 12:22:58.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:22:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:59.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:23:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:00.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:23:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:01.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:01 np0005466031 podman[256638]: 2025-10-02 12:23:01.642457112 +0000 UTC m=+0.074197625 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:23:01 np0005466031 podman[256639]: 2025-10-02 12:23:01.705818492 +0000 UTC m=+0.130514561 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller)
Oct  2 08:23:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:23:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:02.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:23:02 np0005466031 nova_compute[235803]: 2025-10-02 12:23:02.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e200 e200: 3 total, 3 up, 3 in
Oct  2 08:23:03 np0005466031 nova_compute[235803]: 2025-10-02 12:23:03.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:03.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:04.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e201 e201: 3 total, 3 up, 3 in
Oct  2 08:23:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:23:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:05.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:23:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:06.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:07.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:07 np0005466031 nova_compute[235803]: 2025-10-02 12:23:07.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:08.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:08 np0005466031 nova_compute[235803]: 2025-10-02 12:23:08.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:23:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:09.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:23:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:10.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:10 np0005466031 podman[256691]: 2025-10-02 12:23:10.653676045 +0000 UTC m=+0.075862977 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:23:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:11.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:11 np0005466031 podman[256736]: 2025-10-02 12:23:11.545662754 +0000 UTC m=+0.060853451 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:23:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e202 e202: 3 total, 3 up, 3 in
Oct  2 08:23:11 np0005466031 nova_compute[235803]: 2025-10-02 12:23:11.627 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "39f7159a-e413-4452-a17f-5166c4f788f0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:11 np0005466031 nova_compute[235803]: 2025-10-02 12:23:11.627 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:11 np0005466031 nova_compute[235803]: 2025-10-02 12:23:11.659 2 DEBUG nova.compute.manager [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:23:11 np0005466031 nova_compute[235803]: 2025-10-02 12:23:11.821 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:11 np0005466031 nova_compute[235803]: 2025-10-02 12:23:11.821 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:11 np0005466031 nova_compute[235803]: 2025-10-02 12:23:11.830 2 DEBUG nova.virt.hardware [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:23:11 np0005466031 nova_compute[235803]: 2025-10-02 12:23:11.831 2 INFO nova.compute.claims [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.014 2 DEBUG oslo_concurrency.processutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:12.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:23:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3526900912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.457 2 DEBUG oslo_concurrency.processutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.463 2 DEBUG nova.compute.provider_tree [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.488 2 DEBUG nova.scheduler.client.report [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.564 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.565 2 DEBUG nova.compute.manager [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.676 2 DEBUG nova.compute.manager [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.676 2 DEBUG nova.network.neutron [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.734 2 INFO nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.772 2 DEBUG nova.compute.manager [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.955 2 DEBUG nova.compute.manager [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.956 2 DEBUG nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.956 2 INFO nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Creating image(s)#033[00m
Oct  2 08:23:12 np0005466031 nova_compute[235803]: 2025-10-02 12:23:12.980 2 DEBUG nova.storage.rbd_utils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 39f7159a-e413-4452-a17f-5166c4f788f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:13 np0005466031 nova_compute[235803]: 2025-10-02 12:23:13.009 2 DEBUG nova.storage.rbd_utils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 39f7159a-e413-4452-a17f-5166c4f788f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:13 np0005466031 nova_compute[235803]: 2025-10-02 12:23:13.034 2 DEBUG nova.storage.rbd_utils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 39f7159a-e413-4452-a17f-5166c4f788f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:13 np0005466031 nova_compute[235803]: 2025-10-02 12:23:13.037 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "2831d52deb1e534c3b403d3bca9c8174d9746d8e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:13 np0005466031 nova_compute[235803]: 2025-10-02 12:23:13.038 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "2831d52deb1e534c3b403d3bca9c8174d9746d8e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:13 np0005466031 nova_compute[235803]: 2025-10-02 12:23:13.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:13 np0005466031 nova_compute[235803]: 2025-10-02 12:23:13.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:13 np0005466031 nova_compute[235803]: 2025-10-02 12:23:13.436 2 DEBUG nova.virt.libvirt.imagebackend [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/38906c7c-521b-42f0-a991-950eaae1ee1c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/38906c7c-521b-42f0-a991-950eaae1ee1c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 08:23:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:23:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:13.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:23:13 np0005466031 nova_compute[235803]: 2025-10-02 12:23:13.487 2 DEBUG nova.policy [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'afacfeac9efc4e6fbb83ebe4fe9a8f38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:23:13 np0005466031 nova_compute[235803]: 2025-10-02 12:23:13.492 2 DEBUG nova.virt.libvirt.imagebackend [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/38906c7c-521b-42f0-a991-950eaae1ee1c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 08:23:13 np0005466031 nova_compute[235803]: 2025-10-02 12:23:13.493 2 DEBUG nova.storage.rbd_utils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] cloning images/38906c7c-521b-42f0-a991-950eaae1ee1c@snap to None/39f7159a-e413-4452-a17f-5166c4f788f0_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:23:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:14.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:14 np0005466031 nova_compute[235803]: 2025-10-02 12:23:14.242 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "2831d52deb1e534c3b403d3bca9c8174d9746d8e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:14 np0005466031 nova_compute[235803]: 2025-10-02 12:23:14.412 2 DEBUG nova.objects.instance [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'migration_context' on Instance uuid 39f7159a-e413-4452-a17f-5166c4f788f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:14 np0005466031 nova_compute[235803]: 2025-10-02 12:23:14.576 2 DEBUG nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:23:14 np0005466031 nova_compute[235803]: 2025-10-02 12:23:14.576 2 DEBUG nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Ensure instance console log exists: /var/lib/nova/instances/39f7159a-e413-4452-a17f-5166c4f788f0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:23:14 np0005466031 nova_compute[235803]: 2025-10-02 12:23:14.577 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:14 np0005466031 nova_compute[235803]: 2025-10-02 12:23:14.577 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:14 np0005466031 nova_compute[235803]: 2025-10-02 12:23:14.577 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:15.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:16 np0005466031 nova_compute[235803]: 2025-10-02 12:23:16.072 2 DEBUG nova.network.neutron [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Successfully created port: 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:23:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:16.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:17 np0005466031 nova_compute[235803]: 2025-10-02 12:23:17.232 2 DEBUG nova.network.neutron [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Successfully updated port: 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:23:17 np0005466031 nova_compute[235803]: 2025-10-02 12:23:17.252 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "refresh_cache-39f7159a-e413-4452-a17f-5166c4f788f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:23:17 np0005466031 nova_compute[235803]: 2025-10-02 12:23:17.252 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquired lock "refresh_cache-39f7159a-e413-4452-a17f-5166c4f788f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:23:17 np0005466031 nova_compute[235803]: 2025-10-02 12:23:17.252 2 DEBUG nova.network.neutron [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:23:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:17.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:17 np0005466031 nova_compute[235803]: 2025-10-02 12:23:17.489 2 DEBUG nova.compute.manager [req-5f0bdb6c-ad5d-4682-aa75-af8169c5d0c9 req-6b194b35-1371-4cbf-bfe2-64bb31798d00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Received event network-changed-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:17 np0005466031 nova_compute[235803]: 2025-10-02 12:23:17.490 2 DEBUG nova.compute.manager [req-5f0bdb6c-ad5d-4682-aa75-af8169c5d0c9 req-6b194b35-1371-4cbf-bfe2-64bb31798d00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Refreshing instance network info cache due to event network-changed-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:23:17 np0005466031 nova_compute[235803]: 2025-10-02 12:23:17.490 2 DEBUG oslo_concurrency.lockutils [req-5f0bdb6c-ad5d-4682-aa75-af8169c5d0c9 req-6b194b35-1371-4cbf-bfe2-64bb31798d00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-39f7159a-e413-4452-a17f-5166c4f788f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:23:17 np0005466031 nova_compute[235803]: 2025-10-02 12:23:17.660 2 DEBUG nova.network.neutron [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:23:18 np0005466031 nova_compute[235803]: 2025-10-02 12:23:18.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:18.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:18 np0005466031 nova_compute[235803]: 2025-10-02 12:23:18.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.119 2 DEBUG nova.network.neutron [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Updating instance_info_cache with network_info: [{"id": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "address": "fa:16:3e:24:46:b2", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c4e2e1-3a", "ovs_interfaceid": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.146 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Releasing lock "refresh_cache-39f7159a-e413-4452-a17f-5166c4f788f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.146 2 DEBUG nova.compute.manager [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Instance network_info: |[{"id": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "address": "fa:16:3e:24:46:b2", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c4e2e1-3a", "ovs_interfaceid": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.147 2 DEBUG oslo_concurrency.lockutils [req-5f0bdb6c-ad5d-4682-aa75-af8169c5d0c9 req-6b194b35-1371-4cbf-bfe2-64bb31798d00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-39f7159a-e413-4452-a17f-5166c4f788f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.147 2 DEBUG nova.network.neutron [req-5f0bdb6c-ad5d-4682-aa75-af8169c5d0c9 req-6b194b35-1371-4cbf-bfe2-64bb31798d00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Refreshing network info cache for port 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.149 2 DEBUG nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Start _get_guest_xml network_info=[{"id": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "address": "fa:16:3e:24:46:b2", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c4e2e1-3a", "ovs_interfaceid": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:22:50Z,direct_url=<?>,disk_format='raw',id=38906c7c-521b-42f0-a991-950eaae1ee1c,min_disk=1,min_ram=0,name='tempest-test-snap-1018651021',owner='d0ebb2827cb241e499606ce3a3c67d24',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:23:06Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '38906c7c-521b-42f0-a991-950eaae1ee1c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.154 2 WARNING nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.170 2 DEBUG nova.virt.libvirt.host [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.171 2 DEBUG nova.virt.libvirt.host [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.177 2 DEBUG nova.virt.libvirt.host [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.178 2 DEBUG nova.virt.libvirt.host [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.179 2 DEBUG nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.179 2 DEBUG nova.virt.hardware [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:22:50Z,direct_url=<?>,disk_format='raw',id=38906c7c-521b-42f0-a991-950eaae1ee1c,min_disk=1,min_ram=0,name='tempest-test-snap-1018651021',owner='d0ebb2827cb241e499606ce3a3c67d24',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:23:06Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.179 2 DEBUG nova.virt.hardware [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.179 2 DEBUG nova.virt.hardware [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.180 2 DEBUG nova.virt.hardware [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.180 2 DEBUG nova.virt.hardware [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.180 2 DEBUG nova.virt.hardware [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.180 2 DEBUG nova.virt.hardware [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.180 2 DEBUG nova.virt.hardware [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.180 2 DEBUG nova.virt.hardware [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.181 2 DEBUG nova.virt.hardware [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.181 2 DEBUG nova.virt.hardware [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.183 2 DEBUG oslo_concurrency.processutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:23:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:23:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:19.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:23:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:23:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/291030701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:23:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.825 2 DEBUG oslo_concurrency.processutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.868 2 DEBUG nova.storage.rbd_utils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 39f7159a-e413-4452-a17f-5166c4f788f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:23:19 np0005466031 nova_compute[235803]: 2025-10-02 12:23:19.875 2 DEBUG oslo_concurrency.processutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:23:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:23:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:20.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:23:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:23:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3021617743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.349 2 DEBUG oslo_concurrency.processutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.350 2 DEBUG nova.virt.libvirt.vif [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:23:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2074003727',display_name='tempest-ImagesTestJSON-server-2074003727',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-2074003727',id=51,image_ref='38906c7c-521b-42f0-a991-950eaae1ee1c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-zzpgazf5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='ef0d5be3-6f62-49b2-82fa-b646bea14a10',image_min_disk='1',image_min_ram='0',image_owner_id='d0ebb2827cb241e499606ce3a3c67d24',image_owner_project_name='tempest-ImagesTestJSON-1681256609',image_owner_user_name='tempest-ImagesTestJSON-1681256609-project-member',image_user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:12Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=39f7159a-e413-4452-a17f-5166c4f788f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "address": "fa:16:3e:24:46:b2", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c4e2e1-3a", "ovs_interfaceid": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.351 2 DEBUG nova.network.os_vif_util [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "address": "fa:16:3e:24:46:b2", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c4e2e1-3a", "ovs_interfaceid": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.352 2 DEBUG nova.network.os_vif_util [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:46:b2,bridge_name='br-int',has_traffic_filtering=True,id=16c4e2e1-3ab5-47c7-b4b4-b0b08633e158,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c4e2e1-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.353 2 DEBUG nova.objects.instance [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 39f7159a-e413-4452-a17f-5166c4f788f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.381 2 DEBUG nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  <uuid>39f7159a-e413-4452-a17f-5166c4f788f0</uuid>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  <name>instance-00000033</name>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <nova:name>tempest-ImagesTestJSON-server-2074003727</nova:name>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:23:19</nova:creationTime>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <nova:user uuid="afacfeac9efc4e6fbb83ebe4fe9a8f38">tempest-ImagesTestJSON-1681256609-project-member</nova:user>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <nova:project uuid="d0ebb2827cb241e499606ce3a3c67d24">tempest-ImagesTestJSON-1681256609</nova:project>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="38906c7c-521b-42f0-a991-950eaae1ee1c"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <nova:port uuid="16c4e2e1-3ab5-47c7-b4b4-b0b08633e158">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <entry name="serial">39f7159a-e413-4452-a17f-5166c4f788f0</entry>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <entry name="uuid">39f7159a-e413-4452-a17f-5166c4f788f0</entry>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/39f7159a-e413-4452-a17f-5166c4f788f0_disk">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/39f7159a-e413-4452-a17f-5166c4f788f0_disk.config">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:24:46:b2"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <target dev="tap16c4e2e1-3a"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/39f7159a-e413-4452-a17f-5166c4f788f0/console.log" append="off"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <input type="keyboard" bus="usb"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:23:20 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:23:20 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:23:20 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:23:20 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.382 2 DEBUG nova.compute.manager [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Preparing to wait for external event network-vif-plugged-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.382 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.383 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.383 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.384 2 DEBUG nova.virt.libvirt.vif [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:23:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2074003727',display_name='tempest-ImagesTestJSON-server-2074003727',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-2074003727',id=51,image_ref='38906c7c-521b-42f0-a991-950eaae1ee1c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-zzpgazf5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='ef0d5be3-6f62-49b2-82fa-b646bea14a10',image_min_disk='1',image_min_ram='0',image_owner_id='d0ebb2827cb241e499606ce3a3c67d24',image_owner_project_name='tempest-ImagesTestJSON-1681256609',image_owner_user_name='tempest-ImagesTestJSON-1681256609-project-member',image_user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:12Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=39f7159a-e413-4452-a17f-5166c4f788f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "address": "fa:16:3e:24:46:b2", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c4e2e1-3a", "ovs_interfaceid": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.384 2 DEBUG nova.network.os_vif_util [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "address": "fa:16:3e:24:46:b2", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c4e2e1-3a", "ovs_interfaceid": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.384 2 DEBUG nova.network.os_vif_util [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:46:b2,bridge_name='br-int',has_traffic_filtering=True,id=16c4e2e1-3ab5-47c7-b4b4-b0b08633e158,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c4e2e1-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.385 2 DEBUG os_vif [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:46:b2,bridge_name='br-int',has_traffic_filtering=True,id=16c4e2e1-3ab5-47c7-b4b4-b0b08633e158,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c4e2e1-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.386 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.386 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.390 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16c4e2e1-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.390 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap16c4e2e1-3a, col_values=(('external_ids', {'iface-id': '16c4e2e1-3ab5-47c7-b4b4-b0b08633e158', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:46:b2', 'vm-uuid': '39f7159a-e413-4452-a17f-5166c4f788f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:20 np0005466031 NetworkManager[44907]: <info>  [1759407800.3924] manager: (tap16c4e2e1-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.400 2 INFO os_vif [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:46:b2,bridge_name='br-int',has_traffic_filtering=True,id=16c4e2e1-3ab5-47c7-b4b4-b0b08633e158,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c4e2e1-3a')#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.463 2 DEBUG nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.463 2 DEBUG nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.463 2 DEBUG nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No VIF found with MAC fa:16:3e:24:46:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.464 2 INFO nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Using config drive#033[00m
Oct  2 08:23:20 np0005466031 nova_compute[235803]: 2025-10-02 12:23:20.490 2 DEBUG nova.storage.rbd_utils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 39f7159a-e413-4452-a17f-5166c4f788f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.392 2 INFO nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Creating config drive at /var/lib/nova/instances/39f7159a-e413-4452-a17f-5166c4f788f0/disk.config#033[00m
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.403 2 DEBUG oslo_concurrency.processutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/39f7159a-e413-4452-a17f-5166c4f788f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4a7e3zfi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:23:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:21.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.555 2 DEBUG oslo_concurrency.processutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/39f7159a-e413-4452-a17f-5166c4f788f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4a7e3zfi" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.579 2 DEBUG nova.storage.rbd_utils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 39f7159a-e413-4452-a17f-5166c4f788f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.581 2 DEBUG oslo_concurrency.processutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/39f7159a-e413-4452-a17f-5166c4f788f0/disk.config 39f7159a-e413-4452-a17f-5166c4f788f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.608 2 DEBUG nova.network.neutron [req-5f0bdb6c-ad5d-4682-aa75-af8169c5d0c9 req-6b194b35-1371-4cbf-bfe2-64bb31798d00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Updated VIF entry in instance network info cache for port 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.609 2 DEBUG nova.network.neutron [req-5f0bdb6c-ad5d-4682-aa75-af8169c5d0c9 req-6b194b35-1371-4cbf-bfe2-64bb31798d00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Updating instance_info_cache with network_info: [{"id": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "address": "fa:16:3e:24:46:b2", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c4e2e1-3a", "ovs_interfaceid": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.642 2 DEBUG oslo_concurrency.lockutils [req-5f0bdb6c-ad5d-4682-aa75-af8169c5d0c9 req-6b194b35-1371-4cbf-bfe2-64bb31798d00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-39f7159a-e413-4452-a17f-5166c4f788f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.759 2 DEBUG oslo_concurrency.processutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/39f7159a-e413-4452-a17f-5166c4f788f0/disk.config 39f7159a-e413-4452-a17f-5166c4f788f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.760 2 INFO nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Deleting local config drive /var/lib/nova/instances/39f7159a-e413-4452-a17f-5166c4f788f0/disk.config because it was imported into RBD.#033[00m
Oct  2 08:23:21 np0005466031 kernel: tap16c4e2e1-3a: entered promiscuous mode
Oct  2 08:23:21 np0005466031 NetworkManager[44907]: <info>  [1759407801.8026] manager: (tap16c4e2e1-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Oct  2 08:23:21 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:21Z|00109|binding|INFO|Claiming lport 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 for this chassis.
Oct  2 08:23:21 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:21Z|00110|binding|INFO|16c4e2e1-3ab5-47c7-b4b4-b0b08633e158: Claiming fa:16:3e:24:46:b2 10.100.0.5
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:21 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:21Z|00111|binding|INFO|Setting lport 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 ovn-installed in OVS
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:21 np0005466031 nova_compute[235803]: 2025-10-02 12:23:21.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:21 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:21Z|00112|binding|INFO|Setting lport 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 up in Southbound
Oct  2 08:23:21 np0005466031 systemd-udevd[257122]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.833 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:46:b2 10.100.0.5'], port_security=['fa:16:3e:24:46:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '39f7159a-e413-4452-a17f-5166c4f788f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=16c4e2e1-3ab5-47c7-b4b4-b0b08633e158) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.835 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 bound to our chassis#033[00m
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.837 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d68ff9e0-aff2-4eda-8590-74da7cfc5671#033[00m
Oct  2 08:23:21 np0005466031 systemd-machined[192227]: New machine qemu-19-instance-00000033.
Oct  2 08:23:21 np0005466031 NetworkManager[44907]: <info>  [1759407801.8461] device (tap16c4e2e1-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:23:21 np0005466031 NetworkManager[44907]: <info>  [1759407801.8476] device (tap16c4e2e1-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.852 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[634cc8ef-9455-4f71-9826-0e2ef4ea36a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.854 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd68ff9e0-a1 in ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.856 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd68ff9e0-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.856 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[390b3156-82d6-4c1d-8183-142f80d41c5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.857 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[75650672-f1c1-45d2-9b74-a8c83c7cba69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:21 np0005466031 systemd[1]: Started Virtual Machine qemu-19-instance-00000033.
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.873 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[48aca7ae-8a88-46f8-890d-5433c219187e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.895 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[367329a4-5de2-46d0-9703-555749e01199]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.919 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d85e7681-7c9a-403b-8fc9-e08f14bd124e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:21 np0005466031 NetworkManager[44907]: <info>  [1759407801.9262] manager: (tapd68ff9e0-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.925 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b2dfdc-8eed-4cfc-8100-f9562a23a42e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.953 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a742c7e3-90c3-4962-a4d5-23d588253192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.956 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[13764cc0-e8a8-4578-88d3-ae83a405ceae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:21 np0005466031 NetworkManager[44907]: <info>  [1759407801.9789] device (tapd68ff9e0-a0): carrier: link connected
Oct  2 08:23:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:21.984 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f69796de-df60-426d-81b5-16dba2febf0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.002 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[df226b0e-e089-4eb8-8d1e-2a4caa5bc236]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565756, 'reachable_time': 33423, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257156, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.018 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe2cf23-db9e-4f04-814b-df8fae1a3d14]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:d99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565756, 'tstamp': 565756}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257157, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.034 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[aff21f72-4bf0-4f8d-b478-8b68ddccf910]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565756, 'reachable_time': 33423, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257159, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.062 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6adcdc-2a98-4cca-94fa-3ee4f64685eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.106 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.112 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d221e9-3022-40df-8a38-15eee869f3c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.113 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.113 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.113 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd68ff9e0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:22 np0005466031 NetworkManager[44907]: <info>  [1759407802.1152] manager: (tapd68ff9e0-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct  2 08:23:22 np0005466031 kernel: tapd68ff9e0-a0: entered promiscuous mode
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.118 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd68ff9e0-a0, col_values=(('external_ids', {'iface-id': 'c0382cb4-7e26-44bc-8951-80e73f21067a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:22 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:22Z|00113|binding|INFO|Releasing lport c0382cb4-7e26-44bc-8951-80e73f21067a from this chassis (sb_readonly=1)
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.121 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.122 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[65b4984e-4222-44d2-8cbf-ffe5c20c92f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.122 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.123 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'env', 'PROCESS_TAG=haproxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d68ff9e0-aff2-4eda-8590-74da7cfc5671.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:22.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:22 np0005466031 podman[257232]: 2025-10-02 12:23:22.48778457 +0000 UTC m=+0.057029272 container create 59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:23:22 np0005466031 systemd[1]: Started libpod-conmon-59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0.scope.
Oct  2 08:23:22 np0005466031 podman[257232]: 2025-10-02 12:23:22.456929483 +0000 UTC m=+0.026174175 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:23:22 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:23:22 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e26e53c2a934788c5511739e5777f25da3318de86972d631a85174a327e82ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:23:22 np0005466031 podman[257232]: 2025-10-02 12:23:22.600709199 +0000 UTC m=+0.169953911 container init 59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 08:23:22 np0005466031 podman[257232]: 2025-10-02 12:23:22.612130064 +0000 UTC m=+0.181374726 container start 59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:23:22 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[257247]: [NOTICE]   (257251) : New worker (257253) forked
Oct  2 08:23:22 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[257247]: [NOTICE]   (257251) : Loading success.
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.655 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407802.6544235, 39f7159a-e413-4452-a17f-5166c4f788f0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.656 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] VM Started (Lifecycle Event)#033[00m
Oct  2 08:23:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:22.686 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.695 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.700 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407802.6546426, 39f7159a-e413-4452-a17f-5166c4f788f0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.701 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.725 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.729 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.799 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.923 2 DEBUG nova.compute.manager [req-f2e2d2da-e367-4597-95a1-1788d869f0b8 req-57b9bd30-ccc9-4ca5-8546-3325230cde36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Received event network-vif-plugged-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.924 2 DEBUG oslo_concurrency.lockutils [req-f2e2d2da-e367-4597-95a1-1788d869f0b8 req-57b9bd30-ccc9-4ca5-8546-3325230cde36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.924 2 DEBUG oslo_concurrency.lockutils [req-f2e2d2da-e367-4597-95a1-1788d869f0b8 req-57b9bd30-ccc9-4ca5-8546-3325230cde36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.925 2 DEBUG oslo_concurrency.lockutils [req-f2e2d2da-e367-4597-95a1-1788d869f0b8 req-57b9bd30-ccc9-4ca5-8546-3325230cde36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.925 2 DEBUG nova.compute.manager [req-f2e2d2da-e367-4597-95a1-1788d869f0b8 req-57b9bd30-ccc9-4ca5-8546-3325230cde36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Processing event network-vif-plugged-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.927 2 DEBUG nova.compute.manager [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.931 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407802.9310832, 39f7159a-e413-4452-a17f-5166c4f788f0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.931 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.935 2 DEBUG nova.virt.libvirt.driver [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.940 2 INFO nova.virt.libvirt.driver [-] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Instance spawned successfully.#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.940 2 INFO nova.compute.manager [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Took 9.98 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.941 2 DEBUG nova.compute.manager [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.985 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:22 np0005466031 nova_compute[235803]: 2025-10-02 12:23:22.990 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:23:23 np0005466031 nova_compute[235803]: 2025-10-02 12:23:23.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:23 np0005466031 nova_compute[235803]: 2025-10-02 12:23:23.060 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:23:23 np0005466031 nova_compute[235803]: 2025-10-02 12:23:23.073 2 INFO nova.compute.manager [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Took 11.32 seconds to build instance.#033[00m
Oct  2 08:23:23 np0005466031 nova_compute[235803]: 2025-10-02 12:23:23.094 2 DEBUG oslo_concurrency.lockutils [None req-33830a5e-0ea8-4150-a7a4-7b0320651ba0 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:23.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:23 np0005466031 nova_compute[235803]: 2025-10-02 12:23:23.977 2 DEBUG oslo_concurrency.lockutils [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "39f7159a-e413-4452-a17f-5166c4f788f0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:23 np0005466031 nova_compute[235803]: 2025-10-02 12:23:23.978 2 DEBUG oslo_concurrency.lockutils [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:23 np0005466031 nova_compute[235803]: 2025-10-02 12:23:23.979 2 DEBUG oslo_concurrency.lockutils [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:23 np0005466031 nova_compute[235803]: 2025-10-02 12:23:23.979 2 DEBUG oslo_concurrency.lockutils [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:23 np0005466031 nova_compute[235803]: 2025-10-02 12:23:23.980 2 DEBUG oslo_concurrency.lockutils [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:23 np0005466031 nova_compute[235803]: 2025-10-02 12:23:23.981 2 INFO nova.compute.manager [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Terminating instance#033[00m
Oct  2 08:23:23 np0005466031 nova_compute[235803]: 2025-10-02 12:23:23.983 2 DEBUG nova.compute.manager [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:23:24 np0005466031 kernel: tap16c4e2e1-3a (unregistering): left promiscuous mode
Oct  2 08:23:24 np0005466031 NetworkManager[44907]: <info>  [1759407804.0324] device (tap16c4e2e1-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:23:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:24Z|00114|binding|INFO|Releasing lport 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 from this chassis (sb_readonly=0)
Oct  2 08:23:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:24Z|00115|binding|INFO|Setting lport 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 down in Southbound
Oct  2 08:23:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:24Z|00116|binding|INFO|Removing iface tap16c4e2e1-3a ovn-installed in OVS
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.055 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:46:b2 10.100.0.5'], port_security=['fa:16:3e:24:46:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '39f7159a-e413-4452-a17f-5166c4f788f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=16c4e2e1-3ab5-47c7-b4b4-b0b08633e158) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.056 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.058 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.059 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1dd14f-32b6-41f4-97f0-e76d8583faaa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.062 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace which is not needed anymore#033[00m
Oct  2 08:23:24 np0005466031 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct  2 08:23:24 np0005466031 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000033.scope: Consumed 1.794s CPU time.
Oct  2 08:23:24 np0005466031 systemd-machined[192227]: Machine qemu-19-instance-00000033 terminated.
Oct  2 08:23:24 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[257247]: [NOTICE]   (257251) : haproxy version is 2.8.14-c23fe91
Oct  2 08:23:24 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[257247]: [NOTICE]   (257251) : path to executable is /usr/sbin/haproxy
Oct  2 08:23:24 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[257247]: [WARNING]  (257251) : Exiting Master process...
Oct  2 08:23:24 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[257247]: [WARNING]  (257251) : Exiting Master process...
Oct  2 08:23:24 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[257247]: [ALERT]    (257251) : Current worker (257253) exited with code 143 (Terminated)
Oct  2 08:23:24 np0005466031 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[257247]: [WARNING]  (257251) : All workers exited. Exiting... (0)
Oct  2 08:23:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:24.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:24 np0005466031 systemd[1]: libpod-59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0.scope: Deactivated successfully.
Oct  2 08:23:24 np0005466031 kernel: tap16c4e2e1-3a: entered promiscuous mode
Oct  2 08:23:24 np0005466031 podman[257285]: 2025-10-02 12:23:24.199147355 +0000 UTC m=+0.044147246 container died 59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 08:23:24 np0005466031 NetworkManager[44907]: <info>  [1759407804.2002] manager: (tap16c4e2e1-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Oct  2 08:23:24 np0005466031 kernel: tap16c4e2e1-3a (unregistering): left promiscuous mode
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:24Z|00117|binding|INFO|Claiming lport 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 for this chassis.
Oct  2 08:23:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:24Z|00118|binding|INFO|16c4e2e1-3ab5-47c7-b4b4-b0b08633e158: Claiming fa:16:3e:24:46:b2 10.100.0.5
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.216 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:46:b2 10.100.0.5'], port_security=['fa:16:3e:24:46:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '39f7159a-e413-4452-a17f-5166c4f788f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=16c4e2e1-3ab5-47c7-b4b4-b0b08633e158) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:24Z|00119|binding|INFO|Releasing lport 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 from this chassis (sb_readonly=0)
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.228 2 INFO nova.virt.libvirt.driver [-] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Instance destroyed successfully.#033[00m
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.229 2 DEBUG nova.objects.instance [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'resources' on Instance uuid 39f7159a-e413-4452-a17f-5166c4f788f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:24 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0-userdata-shm.mount: Deactivated successfully.
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.235 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:46:b2 10.100.0.5'], port_security=['fa:16:3e:24:46:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '39f7159a-e413-4452-a17f-5166c4f788f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=16c4e2e1-3ab5-47c7-b4b4-b0b08633e158) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:24 np0005466031 systemd[1]: var-lib-containers-storage-overlay-5e26e53c2a934788c5511739e5777f25da3318de86972d631a85174a327e82ca-merged.mount: Deactivated successfully.
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.250 2 DEBUG nova.virt.libvirt.vif [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:23:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2074003727',display_name='tempest-ImagesTestJSON-server-2074003727',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-2074003727',id=51,image_ref='38906c7c-521b-42f0-a991-950eaae1ee1c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:23:22Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-zzpgazf5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='ef0d5be3-6f62-49b2-82fa-b646bea14a10',image_min_disk='1',image_min_ram='0',image_owner_id='d0ebb2827cb241e499606ce3a3c67d24',image_owner_project_name='tempest-ImagesTestJSON-1681256609',image_owner_user_name='tempest-ImagesTestJSON-1681256609-project-member',image_user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:23:23Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=39f7159a-e413-4452-a17f-5166c4f788f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "address": "fa:16:3e:24:46:b2", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c4e2e1-3a", "ovs_interfaceid": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:23:24 np0005466031 podman[257285]: 2025-10-02 12:23:24.250599907 +0000 UTC m=+0.095599798 container cleanup 59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.250 2 DEBUG nova.network.os_vif_util [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "address": "fa:16:3e:24:46:b2", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c4e2e1-3a", "ovs_interfaceid": "16c4e2e1-3ab5-47c7-b4b4-b0b08633e158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.251 2 DEBUG nova.network.os_vif_util [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:46:b2,bridge_name='br-int',has_traffic_filtering=True,id=16c4e2e1-3ab5-47c7-b4b4-b0b08633e158,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c4e2e1-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.251 2 DEBUG os_vif [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:46:b2,bridge_name='br-int',has_traffic_filtering=True,id=16c4e2e1-3ab5-47c7-b4b4-b0b08633e158,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c4e2e1-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.254 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16c4e2e1-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.260 2 INFO os_vif [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:46:b2,bridge_name='br-int',has_traffic_filtering=True,id=16c4e2e1-3ab5-47c7-b4b4-b0b08633e158,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c4e2e1-3a')#033[00m
Oct  2 08:23:24 np0005466031 systemd[1]: libpod-conmon-59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0.scope: Deactivated successfully.
Oct  2 08:23:24 np0005466031 podman[257318]: 2025-10-02 12:23:24.315301066 +0000 UTC m=+0.039050011 container remove 59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.320 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[429f9317-7baa-45da-aee6-26acc34887e4]: (4, ('Thu Oct  2 12:23:24 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0)\n59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0\nThu Oct  2 12:23:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0)\n59a28bebf17673023a75280d8f8dc3f019347d239a348e4e7323de4aa0039fc0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.321 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[816eb588-6d0c-4a5e-9874-0ca2142246e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.322 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:24 np0005466031 kernel: tapd68ff9e0-a0: left promiscuous mode
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:24 np0005466031 nova_compute[235803]: 2025-10-02 12:23:24.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.338 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[acc339bd-555a-4d74-8c7e-4a0483647b17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.376 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[338e4a04-4e56-464a-a8d3-1a269841edb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.377 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e31712f7-e305-40fa-94e7-5438161136a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.392 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ce9652-1673-4b8c-ba9b-e113a0c5d5bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565750, 'reachable_time': 18644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257353, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:24 np0005466031 systemd[1]: run-netns-ovnmeta\x2dd68ff9e0\x2daff2\x2d4eda\x2d8590\x2d74da7cfc5671.mount: Deactivated successfully.
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.395 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.395 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[5c999866-b8ec-4f25-9661-f8fbcfb588df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.395 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.397 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.398 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[792130a6-b12a-4123-9f96-37b80a278628]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.398 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.400 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:23:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:24.400 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[88e209d1-728a-490a-b848-396dd875377b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e203 e203: 3 total, 3 up, 3 in
Oct  2 08:23:25 np0005466031 nova_compute[235803]: 2025-10-02 12:23:25.042 2 DEBUG nova.compute.manager [req-ccec85a5-15ec-43f8-bc4a-31e30bf8c6bf req-2b11e4cc-f6a4-4ddb-80f1-815bb4f3d2a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Received event network-vif-plugged-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:25 np0005466031 nova_compute[235803]: 2025-10-02 12:23:25.043 2 DEBUG oslo_concurrency.lockutils [req-ccec85a5-15ec-43f8-bc4a-31e30bf8c6bf req-2b11e4cc-f6a4-4ddb-80f1-815bb4f3d2a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:25 np0005466031 nova_compute[235803]: 2025-10-02 12:23:25.044 2 DEBUG oslo_concurrency.lockutils [req-ccec85a5-15ec-43f8-bc4a-31e30bf8c6bf req-2b11e4cc-f6a4-4ddb-80f1-815bb4f3d2a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:25 np0005466031 nova_compute[235803]: 2025-10-02 12:23:25.044 2 DEBUG oslo_concurrency.lockutils [req-ccec85a5-15ec-43f8-bc4a-31e30bf8c6bf req-2b11e4cc-f6a4-4ddb-80f1-815bb4f3d2a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:25 np0005466031 nova_compute[235803]: 2025-10-02 12:23:25.045 2 DEBUG nova.compute.manager [req-ccec85a5-15ec-43f8-bc4a-31e30bf8c6bf req-2b11e4cc-f6a4-4ddb-80f1-815bb4f3d2a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] No waiting events found dispatching network-vif-plugged-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:25 np0005466031 nova_compute[235803]: 2025-10-02 12:23:25.045 2 WARNING nova.compute.manager [req-ccec85a5-15ec-43f8-bc4a-31e30bf8c6bf req-2b11e4cc-f6a4-4ddb-80f1-815bb4f3d2a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Received unexpected event network-vif-plugged-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:23:25 np0005466031 nova_compute[235803]: 2025-10-02 12:23:25.045 2 DEBUG nova.compute.manager [req-ccec85a5-15ec-43f8-bc4a-31e30bf8c6bf req-2b11e4cc-f6a4-4ddb-80f1-815bb4f3d2a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Received event network-vif-unplugged-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:25 np0005466031 nova_compute[235803]: 2025-10-02 12:23:25.046 2 DEBUG oslo_concurrency.lockutils [req-ccec85a5-15ec-43f8-bc4a-31e30bf8c6bf req-2b11e4cc-f6a4-4ddb-80f1-815bb4f3d2a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:25 np0005466031 nova_compute[235803]: 2025-10-02 12:23:25.046 2 DEBUG oslo_concurrency.lockutils [req-ccec85a5-15ec-43f8-bc4a-31e30bf8c6bf req-2b11e4cc-f6a4-4ddb-80f1-815bb4f3d2a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:25 np0005466031 nova_compute[235803]: 2025-10-02 12:23:25.046 2 DEBUG oslo_concurrency.lockutils [req-ccec85a5-15ec-43f8-bc4a-31e30bf8c6bf req-2b11e4cc-f6a4-4ddb-80f1-815bb4f3d2a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:25 np0005466031 nova_compute[235803]: 2025-10-02 12:23:25.047 2 DEBUG nova.compute.manager [req-ccec85a5-15ec-43f8-bc4a-31e30bf8c6bf req-2b11e4cc-f6a4-4ddb-80f1-815bb4f3d2a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] No waiting events found dispatching network-vif-unplugged-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:25 np0005466031 nova_compute[235803]: 2025-10-02 12:23:25.047 2 DEBUG nova.compute.manager [req-ccec85a5-15ec-43f8-bc4a-31e30bf8c6bf req-2b11e4cc-f6a4-4ddb-80f1-815bb4f3d2a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Received event network-vif-unplugged-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:23:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:25.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:25.830 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:25.831 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:25.831 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:26.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:26.688 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:27 np0005466031 nova_compute[235803]: 2025-10-02 12:23:27.186 2 DEBUG nova.compute.manager [req-cae9e6e5-89a0-403b-bb4f-2bdc4e5d7c85 req-053aba1c-d980-4949-9754-6cc1f826b35e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Received event network-vif-plugged-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:27 np0005466031 nova_compute[235803]: 2025-10-02 12:23:27.187 2 DEBUG oslo_concurrency.lockutils [req-cae9e6e5-89a0-403b-bb4f-2bdc4e5d7c85 req-053aba1c-d980-4949-9754-6cc1f826b35e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:27 np0005466031 nova_compute[235803]: 2025-10-02 12:23:27.187 2 DEBUG oslo_concurrency.lockutils [req-cae9e6e5-89a0-403b-bb4f-2bdc4e5d7c85 req-053aba1c-d980-4949-9754-6cc1f826b35e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:27 np0005466031 nova_compute[235803]: 2025-10-02 12:23:27.187 2 DEBUG oslo_concurrency.lockutils [req-cae9e6e5-89a0-403b-bb4f-2bdc4e5d7c85 req-053aba1c-d980-4949-9754-6cc1f826b35e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:27 np0005466031 nova_compute[235803]: 2025-10-02 12:23:27.187 2 DEBUG nova.compute.manager [req-cae9e6e5-89a0-403b-bb4f-2bdc4e5d7c85 req-053aba1c-d980-4949-9754-6cc1f826b35e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] No waiting events found dispatching network-vif-plugged-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:27 np0005466031 nova_compute[235803]: 2025-10-02 12:23:27.187 2 WARNING nova.compute.manager [req-cae9e6e5-89a0-403b-bb4f-2bdc4e5d7c85 req-053aba1c-d980-4949-9754-6cc1f826b35e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Received unexpected event network-vif-plugged-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:23:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:27.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:28 np0005466031 nova_compute[235803]: 2025-10-02 12:23:28.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:28.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:29 np0005466031 nova_compute[235803]: 2025-10-02 12:23:29.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:23:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:29.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:23:29 np0005466031 nova_compute[235803]: 2025-10-02 12:23:29.684 2 INFO nova.virt.libvirt.driver [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Deleting instance files /var/lib/nova/instances/39f7159a-e413-4452-a17f-5166c4f788f0_del#033[00m
Oct  2 08:23:29 np0005466031 nova_compute[235803]: 2025-10-02 12:23:29.685 2 INFO nova.virt.libvirt.driver [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Deletion of /var/lib/nova/instances/39f7159a-e413-4452-a17f-5166c4f788f0_del complete#033[00m
Oct  2 08:23:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:29 np0005466031 nova_compute[235803]: 2025-10-02 12:23:29.820 2 INFO nova.compute.manager [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Took 5.84 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:23:29 np0005466031 nova_compute[235803]: 2025-10-02 12:23:29.821 2 DEBUG oslo.service.loopingcall [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:23:29 np0005466031 nova_compute[235803]: 2025-10-02 12:23:29.822 2 DEBUG nova.compute.manager [-] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:23:29 np0005466031 nova_compute[235803]: 2025-10-02 12:23:29.822 2 DEBUG nova.network.neutron [-] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:23:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:30.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:31 np0005466031 nova_compute[235803]: 2025-10-02 12:23:31.102 2 DEBUG nova.network.neutron [-] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:31 np0005466031 nova_compute[235803]: 2025-10-02 12:23:31.128 2 INFO nova.compute.manager [-] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Took 1.31 seconds to deallocate network for instance.#033[00m
Oct  2 08:23:31 np0005466031 nova_compute[235803]: 2025-10-02 12:23:31.217 2 DEBUG oslo_concurrency.lockutils [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:31 np0005466031 nova_compute[235803]: 2025-10-02 12:23:31.217 2 DEBUG oslo_concurrency.lockutils [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:31 np0005466031 nova_compute[235803]: 2025-10-02 12:23:31.253 2 DEBUG nova.compute.manager [req-a173a6e6-e83c-41aa-a594-aeceff5a9e49 req-c5afda42-c5b6-4b46-946a-b971b25778f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Received event network-vif-deleted-16c4e2e1-3ab5-47c7-b4b4-b0b08633e158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:31 np0005466031 nova_compute[235803]: 2025-10-02 12:23:31.313 2 DEBUG oslo_concurrency.processutils [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:31.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:23:31 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1782790133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:23:31 np0005466031 nova_compute[235803]: 2025-10-02 12:23:31.739 2 DEBUG oslo_concurrency.processutils [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:31 np0005466031 nova_compute[235803]: 2025-10-02 12:23:31.747 2 DEBUG nova.compute.provider_tree [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:23:31 np0005466031 nova_compute[235803]: 2025-10-02 12:23:31.764 2 DEBUG nova.scheduler.client.report [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:23:31 np0005466031 podman[257427]: 2025-10-02 12:23:31.786142303 +0000 UTC m=+0.059406369 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:23:31 np0005466031 nova_compute[235803]: 2025-10-02 12:23:31.803 2 DEBUG oslo_concurrency.lockutils [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:31 np0005466031 podman[257429]: 2025-10-02 12:23:31.845219132 +0000 UTC m=+0.108490344 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:23:31 np0005466031 nova_compute[235803]: 2025-10-02 12:23:31.867 2 INFO nova.scheduler.client.report [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Deleted allocations for instance 39f7159a-e413-4452-a17f-5166c4f788f0#033[00m
Oct  2 08:23:31 np0005466031 nova_compute[235803]: 2025-10-02 12:23:31.982 2 DEBUG oslo_concurrency.lockutils [None req-362996a1-8ec3-457d-a3e1-95645ef97a1d afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "39f7159a-e413-4452-a17f-5166c4f788f0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:32.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:32 np0005466031 nova_compute[235803]: 2025-10-02 12:23:32.475 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Acquiring lock "321c53a8-3488-43dc-b742-27102b6a5016" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:32 np0005466031 nova_compute[235803]: 2025-10-02 12:23:32.476 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:32 np0005466031 nova_compute[235803]: 2025-10-02 12:23:32.527 2 DEBUG nova.compute.manager [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:23:32 np0005466031 nova_compute[235803]: 2025-10-02 12:23:32.659 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:32 np0005466031 nova_compute[235803]: 2025-10-02 12:23:32.659 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:32 np0005466031 nova_compute[235803]: 2025-10-02 12:23:32.669 2 DEBUG nova.virt.hardware [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:23:32 np0005466031 nova_compute[235803]: 2025-10-02 12:23:32.670 2 INFO nova.compute.claims [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:23:32 np0005466031 ceph-osd[79023]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct  2 08:23:32 np0005466031 nova_compute[235803]: 2025-10-02 12:23:32.799 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:23:33 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1057627497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.246 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.253 2 DEBUG nova.compute.provider_tree [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.270 2 DEBUG nova.scheduler.client.report [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.301 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.302 2 DEBUG nova.compute.manager [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.358 2 DEBUG nova.compute.manager [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.358 2 DEBUG nova.network.neutron [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.397 2 INFO nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Ignoring supplied device name: /dev/sda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.418 2 DEBUG nova.compute.manager [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:23:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:33.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.553 2 DEBUG nova.compute.manager [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.555 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.556 2 INFO nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Creating image(s)
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.594 2 DEBUG nova.storage.rbd_utils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] rbd image 321c53a8-3488-43dc-b742-27102b6a5016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.634 2 DEBUG nova.storage.rbd_utils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] rbd image 321c53a8-3488-43dc-b742-27102b6a5016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.673 2 DEBUG nova.storage.rbd_utils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] rbd image 321c53a8-3488-43dc-b742-27102b6a5016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.677 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Acquiring lock "504ddca37fa682fe676ed6ed2451bc9473b1f829" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.678 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "504ddca37fa682fe676ed6ed2451bc9473b1f829" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:23:33 np0005466031 nova_compute[235803]: 2025-10-02 12:23:33.687 2 DEBUG nova.policy [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c6bdc1acafd4db2bcc3e0251393b901', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4a6d727642dc44b3997a0b35c67e6ab1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:23:34 np0005466031 nova_compute[235803]: 2025-10-02 12:23:34.099 2 DEBUG nova.virt.libvirt.imagebackend [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/e7094a19-0695-4486-b083-e54642bc0338/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/e7094a19-0695-4486-b083-e54642bc0338/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct  2 08:23:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:34.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:34 np0005466031 nova_compute[235803]: 2025-10-02 12:23:34.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:23:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e204 e204: 3 total, 3 up, 3 in
Oct  2 08:23:34 np0005466031 nova_compute[235803]: 2025-10-02 12:23:34.630 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:23:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:34 np0005466031 nova_compute[235803]: 2025-10-02 12:23:34.851 2 DEBUG nova.network.neutron [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Successfully created port: f9361bba-d251-49a5-a08b-5068dc6cd434 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:23:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:35.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:36.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.208 2 DEBUG nova.network.neutron [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Successfully updated port: f9361bba-d251-49a5-a08b-5068dc6cd434 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.235 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Acquiring lock "refresh_cache-321c53a8-3488-43dc-b742-27102b6a5016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.236 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Acquired lock "refresh_cache-321c53a8-3488-43dc-b742-27102b6a5016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.236 2 DEBUG nova.network.neutron [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.338 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.397 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829.part --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.398 2 DEBUG nova.virt.images [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] e7094a19-0695-4486-b083-e54642bc0338 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.399 2 DEBUG nova.privsep.utils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.399 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829.part /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.529 2 DEBUG nova.network.neutron [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.565 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829.part /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829.converted" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.569 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.621 2 DEBUG nova.compute.manager [req-008e2851-67e0-436a-9f5c-83166b2cdd91 req-2ddd09e2-56c4-4232-b4c9-1cf85866cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Received event network-changed-f9361bba-d251-49a5-a08b-5068dc6cd434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.622 2 DEBUG nova.compute.manager [req-008e2851-67e0-436a-9f5c-83166b2cdd91 req-2ddd09e2-56c4-4232-b4c9-1cf85866cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Refreshing instance network info cache due to event network-changed-f9361bba-d251-49a5-a08b-5068dc6cd434. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.622 2 DEBUG oslo_concurrency.lockutils [req-008e2851-67e0-436a-9f5c-83166b2cdd91 req-2ddd09e2-56c4-4232-b4c9-1cf85866cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-321c53a8-3488-43dc-b742-27102b6a5016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.664 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829.converted --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.665 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "504ddca37fa682fe676ed6ed2451bc9473b1f829" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.691 2 DEBUG nova.storage.rbd_utils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] rbd image 321c53a8-3488-43dc-b742-27102b6a5016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:23:36 np0005466031 nova_compute[235803]: 2025-10-02 12:23:36.695 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829 321c53a8-3488-43dc-b742-27102b6a5016_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:23:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:37.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:38 np0005466031 nova_compute[235803]: 2025-10-02 12:23:38.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:23:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:23:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:38.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:23:38 np0005466031 nova_compute[235803]: 2025-10-02 12:23:38.760 2 DEBUG nova.network.neutron [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Updating instance_info_cache with network_info: [{"id": "f9361bba-d251-49a5-a08b-5068dc6cd434", "address": "fa:16:3e:bb:d6:ab", "network": {"id": "3734f4ab-0343-4d55-9d6e-98b7542ea7fe", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-2108006177-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6d727642dc44b3997a0b35c67e6ab1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9361bba-d2", "ovs_interfaceid": "f9361bba-d251-49a5-a08b-5068dc6cd434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:23:38 np0005466031 nova_compute[235803]: 2025-10-02 12:23:38.822 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Releasing lock "refresh_cache-321c53a8-3488-43dc-b742-27102b6a5016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:23:38 np0005466031 nova_compute[235803]: 2025-10-02 12:23:38.822 2 DEBUG nova.compute.manager [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Instance network_info: |[{"id": "f9361bba-d251-49a5-a08b-5068dc6cd434", "address": "fa:16:3e:bb:d6:ab", "network": {"id": "3734f4ab-0343-4d55-9d6e-98b7542ea7fe", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-2108006177-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6d727642dc44b3997a0b35c67e6ab1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9361bba-d2", "ovs_interfaceid": "f9361bba-d251-49a5-a08b-5068dc6cd434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:23:38 np0005466031 nova_compute[235803]: 2025-10-02 12:23:38.823 2 DEBUG oslo_concurrency.lockutils [req-008e2851-67e0-436a-9f5c-83166b2cdd91 req-2ddd09e2-56c4-4232-b4c9-1cf85866cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-321c53a8-3488-43dc-b742-27102b6a5016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:23:38 np0005466031 nova_compute[235803]: 2025-10-02 12:23:38.823 2 DEBUG nova.network.neutron [req-008e2851-67e0-436a-9f5c-83166b2cdd91 req-2ddd09e2-56c4-4232-b4c9-1cf85866cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Refreshing network info cache for port f9361bba-d251-49a5-a08b-5068dc6cd434 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:23:39 np0005466031 nova_compute[235803]: 2025-10-02 12:23:39.225 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407804.223348, 39f7159a-e413-4452-a17f-5166c4f788f0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:23:39 np0005466031 nova_compute[235803]: 2025-10-02 12:23:39.225 2 INFO nova.compute.manager [-] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] VM Stopped (Lifecycle Event)
Oct  2 08:23:39 np0005466031 nova_compute[235803]: 2025-10-02 12:23:39.248 2 DEBUG nova.compute.manager [None req-1a65b5f1-3078-4722-8487-d8854d768b0c - - - - - -] [instance: 39f7159a-e413-4452-a17f-5166c4f788f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:23:39 np0005466031 nova_compute[235803]: 2025-10-02 12:23:39.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:23:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:23:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:39.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:23:39 np0005466031 nova_compute[235803]: 2025-10-02 12:23:39.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:23:39 np0005466031 nova_compute[235803]: 2025-10-02 12:23:39.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:23:39 np0005466031 nova_compute[235803]: 2025-10-02 12:23:39.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:23:39 np0005466031 nova_compute[235803]: 2025-10-02 12:23:39.664 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct  2 08:23:39 np0005466031 nova_compute[235803]: 2025-10-02 12:23:39.664 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  2 08:23:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:40.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:40 np0005466031 nova_compute[235803]: 2025-10-02 12:23:40.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:23:40 np0005466031 nova_compute[235803]: 2025-10-02 12:23:40.730 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:23:40 np0005466031 nova_compute[235803]: 2025-10-02 12:23:40.731 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:23:40 np0005466031 nova_compute[235803]: 2025-10-02 12:23:40.766 2 DEBUG nova.compute.manager [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:23:40 np0005466031 nova_compute[235803]: 2025-10-02 12:23:40.900 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:23:40 np0005466031 nova_compute[235803]: 2025-10-02 12:23:40.900 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:23:40 np0005466031 nova_compute[235803]: 2025-10-02 12:23:40.908 2 DEBUG nova.virt.hardware [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:23:40 np0005466031 nova_compute[235803]: 2025-10-02 12:23:40.908 2 INFO nova.compute.claims [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Claim successful on node compute-2.ctlplane.example.com
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.097 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.210 2 DEBUG nova.network.neutron [req-008e2851-67e0-436a-9f5c-83166b2cdd91 req-2ddd09e2-56c4-4232-b4c9-1cf85866cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Updated VIF entry in instance network info cache for port f9361bba-d251-49a5-a08b-5068dc6cd434. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.213 2 DEBUG nova.network.neutron [req-008e2851-67e0-436a-9f5c-83166b2cdd91 req-2ddd09e2-56c4-4232-b4c9-1cf85866cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Updating instance_info_cache with network_info: [{"id": "f9361bba-d251-49a5-a08b-5068dc6cd434", "address": "fa:16:3e:bb:d6:ab", "network": {"id": "3734f4ab-0343-4d55-9d6e-98b7542ea7fe", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-2108006177-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6d727642dc44b3997a0b35c67e6ab1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9361bba-d2", "ovs_interfaceid": "f9361bba-d251-49a5-a08b-5068dc6cd434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.264 2 DEBUG oslo_concurrency.lockutils [req-008e2851-67e0-436a-9f5c-83166b2cdd91 req-2ddd09e2-56c4-4232-b4c9-1cf85866cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-321c53a8-3488-43dc-b742-27102b6a5016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:23:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:41.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:23:41 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/85177071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.567 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.575 2 DEBUG nova.compute.provider_tree [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.611 2 DEBUG nova.scheduler.client.report [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:41 np0005466031 podman[257632]: 2025-10-02 12:23:41.667536557 +0000 UTC m=+0.081382654 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.691 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:41 np0005466031 podman[257630]: 2025-10-02 12:23:41.695421409 +0000 UTC m=+0.116616095 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.703 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.704 2 DEBUG nova.compute.manager [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.708 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.709 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.709 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.710 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.845 2 DEBUG nova.compute.manager [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.846 2 DEBUG nova.network.neutron [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.914 2 INFO nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:23:41 np0005466031 nova_compute[235803]: 2025-10-02 12:23:41.964 2 DEBUG nova.compute.manager [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.154 2 DEBUG nova.compute.manager [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.156 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.157 2 INFO nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Creating image(s)#033[00m
Oct  2 08:23:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:23:42 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1889210619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.195 2 DEBUG nova.storage.rbd_utils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image b3d45275-f66f-4629-896b-8fe3fceb65a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:42.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.227 2 DEBUG nova.storage.rbd_utils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image b3d45275-f66f-4629-896b-8fe3fceb65a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.251 2 DEBUG nova.storage.rbd_utils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image b3d45275-f66f-4629-896b-8fe3fceb65a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.254 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.278 2 DEBUG nova.policy [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '51b45ef40bdc499a8409fd2bf3e6a339', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '12dfeaa31a6e4a2481a5332ce3094262', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.282 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.313 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.314 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.314 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.314 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.342 2 DEBUG nova.storage.rbd_utils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image b3d45275-f66f-4629-896b-8fe3fceb65a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.345 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b3d45275-f66f-4629-896b-8fe3fceb65a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e205 e205: 3 total, 3 up, 3 in
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.519 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.520 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4615MB free_disk=20.942630767822266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.521 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.521 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:42 np0005466031 nova_compute[235803]: 2025-10-02 12:23:42.534 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829 321c53a8-3488-43dc-b742-27102b6a5016_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.839s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:43 np0005466031 nova_compute[235803]: 2025-10-02 12:23:43.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:43 np0005466031 nova_compute[235803]: 2025-10-02 12:23:43.259 2 DEBUG nova.storage.rbd_utils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] resizing rbd image 321c53a8-3488-43dc-b742-27102b6a5016_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:23:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:43.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:44.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:44 np0005466031 nova_compute[235803]: 2025-10-02 12:23:44.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:44 np0005466031 nova_compute[235803]: 2025-10-02 12:23:44.470 2 DEBUG nova.network.neutron [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Successfully created port: fedd61db-0139-4493-a34f-892b56e476fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:23:44 np0005466031 nova_compute[235803]: 2025-10-02 12:23:44.507 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 321c53a8-3488-43dc-b742-27102b6a5016 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:23:44 np0005466031 nova_compute[235803]: 2025-10-02 12:23:44.507 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance b3d45275-f66f-4629-896b-8fe3fceb65a3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:23:44 np0005466031 nova_compute[235803]: 2025-10-02 12:23:44.507 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:23:44 np0005466031 nova_compute[235803]: 2025-10-02 12:23:44.508 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:23:44 np0005466031 nova_compute[235803]: 2025-10-02 12:23:44.604 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:23:45 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/941580738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:23:45 np0005466031 nova_compute[235803]: 2025-10-02 12:23:45.060 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:45 np0005466031 nova_compute[235803]: 2025-10-02 12:23:45.066 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:23:45 np0005466031 nova_compute[235803]: 2025-10-02 12:23:45.102 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:23:45 np0005466031 nova_compute[235803]: 2025-10-02 12:23:45.149 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:23:45 np0005466031 nova_compute[235803]: 2025-10-02 12:23:45.150 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:45.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:45 np0005466031 nova_compute[235803]: 2025-10-02 12:23:45.917 2 DEBUG nova.network.neutron [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Successfully updated port: fedd61db-0139-4493-a34f-892b56e476fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:23:45 np0005466031 nova_compute[235803]: 2025-10-02 12:23:45.961 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:23:45 np0005466031 nova_compute[235803]: 2025-10-02 12:23:45.962 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquired lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:23:45 np0005466031 nova_compute[235803]: 2025-10-02 12:23:45.962 2 DEBUG nova.network.neutron [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.080 2 DEBUG nova.compute.manager [req-5848cff0-f6bd-4b3c-ab34-cb02c672d4fb req-c1b87699-2cb3-4c32-9b4a-645b7f0b5865 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Received event network-changed-fedd61db-0139-4493-a34f-892b56e476fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.081 2 DEBUG nova.compute.manager [req-5848cff0-f6bd-4b3c-ab34-cb02c672d4fb req-c1b87699-2cb3-4c32-9b4a-645b7f0b5865 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Refreshing instance network info cache due to event network-changed-fedd61db-0139-4493-a34f-892b56e476fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.081 2 DEBUG oslo_concurrency.lockutils [req-5848cff0-f6bd-4b3c-ab34-cb02c672d4fb req-c1b87699-2cb3-4c32-9b4a-645b7f0b5865 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.151 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.191 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.191 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.192 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.192 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:23:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:46.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.665 2 DEBUG nova.network.neutron [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.673 2 DEBUG nova.objects.instance [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lazy-loading 'migration_context' on Instance uuid 321c53a8-3488-43dc-b742-27102b6a5016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.695 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.695 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Ensure instance console log exists: /var/lib/nova/instances/321c53a8-3488-43dc-b742-27102b6a5016/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.696 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.696 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.696 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.698 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Start _get_guest_xml network_info=[{"id": "f9361bba-d251-49a5-a08b-5068dc6cd434", "address": "fa:16:3e:bb:d6:ab", "network": {"id": "3734f4ab-0343-4d55-9d6e-98b7542ea7fe", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-2108006177-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6d727642dc44b3997a0b35c67e6ab1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9361bba-d2", "ovs_interfaceid": "f9361bba-d251-49a5-a08b-5068dc6cd434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'scsi', 'cdrom_bus': 'scsi', 'mapping': {'root': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'scsi', 'dev': 'sdb', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:23:21Z,direct_url=<?>,disk_format='qcow2',id=e7094a19-0695-4486-b083-e54642bc0338,min_disk=0,min_ram=0,name='',owner='74d6a07b22004c29bacecf2a4aa70dab',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:23:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/sda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/sda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'scsi', 'image_id': 'e7094a19-0695-4486-b083-e54642bc0338'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.702 2 WARNING nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.708 2 DEBUG nova.virt.libvirt.host [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.708 2 DEBUG nova.virt.libvirt.host [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.711 2 DEBUG nova.virt.libvirt.host [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.711 2 DEBUG nova.virt.libvirt.host [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.712 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.712 2 DEBUG nova.virt.hardware [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:23:21Z,direct_url=<?>,disk_format='qcow2',id=e7094a19-0695-4486-b083-e54642bc0338,min_disk=0,min_ram=0,name='',owner='74d6a07b22004c29bacecf2a4aa70dab',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:23:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.712 2 DEBUG nova.virt.hardware [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.713 2 DEBUG nova.virt.hardware [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.713 2 DEBUG nova.virt.hardware [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.713 2 DEBUG nova.virt.hardware [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.713 2 DEBUG nova.virt.hardware [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.713 2 DEBUG nova.virt.hardware [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.714 2 DEBUG nova.virt.hardware [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.714 2 DEBUG nova.virt.hardware [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.714 2 DEBUG nova.virt.hardware [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.714 2 DEBUG nova.virt.hardware [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:23:46 np0005466031 nova_compute[235803]: 2025-10-02 12:23:46.716 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:23:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2051029789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.152 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.192 2 DEBUG nova.storage.rbd_utils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] rbd image 321c53a8-3488-43dc-b742-27102b6a5016_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.196 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:47.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:23:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3580906529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.639 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.641 2 DEBUG nova.virt.libvirt.vif [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-180234842',display_name='tempest-AttachSCSIVolumeTestJSON-server-180234842',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-180234842',id=52,image_ref='e7094a19-0695-4486-b083-e54642bc0338',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL5zbbhSKkv62p8h7iZdLFx6xzoKfZY11s8xobB8hXq+UA3fzhlZ4TAOluaG66A68vlVgNmz+MoNXBrmvr8cMShcEeQPMmNADgGoZMKS/GZ2GEa6cnaImDPhpZ2YGfpwWQ==',key_name='tempest-keypair-290507723',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a6d727642dc44b3997a0b35c67e6ab1',ramdisk_id='',reservation_id='r-w6806e6a',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e7094a19-0695-4486-b083-e54642bc0338',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-1491993290',owner_user_name='tempest-AttachSCSIVolumeTestJSON-1491993290-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5c6bdc1acafd4db2bcc3e0251393b901',uuid=321c53a8-3488-43dc-b742-27102b6a5016,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9361bba-d251-49a5-a08b-5068dc6cd434", "address": "fa:16:3e:bb:d6:ab", "network": {"id": "3734f4ab-0343-4d55-9d6e-98b7542ea7fe", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-2108006177-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6d727642dc44b3997a0b35c67e6ab1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9361bba-d2", "ovs_interfaceid": "f9361bba-d251-49a5-a08b-5068dc6cd434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.641 2 DEBUG nova.network.os_vif_util [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Converting VIF {"id": "f9361bba-d251-49a5-a08b-5068dc6cd434", "address": "fa:16:3e:bb:d6:ab", "network": {"id": "3734f4ab-0343-4d55-9d6e-98b7542ea7fe", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-2108006177-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6d727642dc44b3997a0b35c67e6ab1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9361bba-d2", "ovs_interfaceid": "f9361bba-d251-49a5-a08b-5068dc6cd434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.642 2 DEBUG nova.network.os_vif_util [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:d6:ab,bridge_name='br-int',has_traffic_filtering=True,id=f9361bba-d251-49a5-a08b-5068dc6cd434,network=Network(3734f4ab-0343-4d55-9d6e-98b7542ea7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9361bba-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.644 2 DEBUG nova.objects.instance [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 321c53a8-3488-43dc-b742-27102b6a5016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.661 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  <uuid>321c53a8-3488-43dc-b742-27102b6a5016</uuid>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  <name>instance-00000034</name>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <nova:name>tempest-AttachSCSIVolumeTestJSON-server-180234842</nova:name>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:23:46</nova:creationTime>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <nova:user uuid="5c6bdc1acafd4db2bcc3e0251393b901">tempest-AttachSCSIVolumeTestJSON-1491993290-project-member</nova:user>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <nova:project uuid="4a6d727642dc44b3997a0b35c67e6ab1">tempest-AttachSCSIVolumeTestJSON-1491993290</nova:project>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="e7094a19-0695-4486-b083-e54642bc0338"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <nova:port uuid="f9361bba-d251-49a5-a08b-5068dc6cd434">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <entry name="serial">321c53a8-3488-43dc-b742-27102b6a5016</entry>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <entry name="uuid">321c53a8-3488-43dc-b742-27102b6a5016</entry>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/321c53a8-3488-43dc-b742-27102b6a5016_disk">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <target dev="sda" bus="scsi"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <address type="drive" controller="0" unit="0"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/321c53a8-3488-43dc-b742-27102b6a5016_disk.config">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <target dev="sdb" bus="scsi"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <address type="drive" controller="0" unit="1"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="scsi" index="0" model="virtio-scsi"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:bb:d6:ab"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <target dev="tapf9361bba-d2"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/321c53a8-3488-43dc-b742-27102b6a5016/console.log" append="off"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:23:47 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:23:47 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:23:47 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:23:47 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.662 2 DEBUG nova.compute.manager [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Preparing to wait for external event network-vif-plugged-f9361bba-d251-49a5-a08b-5068dc6cd434 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.662 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Acquiring lock "321c53a8-3488-43dc-b742-27102b6a5016-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.663 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.663 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.663 2 DEBUG nova.virt.libvirt.vif [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-180234842',display_name='tempest-AttachSCSIVolumeTestJSON-server-180234842',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-180234842',id=52,image_ref='e7094a19-0695-4486-b083-e54642bc0338',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL5zbbhSKkv62p8h7iZdLFx6xzoKfZY11s8xobB8hXq+UA3fzhlZ4TAOluaG66A68vlVgNmz+MoNXBrmvr8cMShcEeQPMmNADgGoZMKS/GZ2GEa6cnaImDPhpZ2YGfpwWQ==',key_name='tempest-keypair-290507723',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a6d727642dc44b3997a0b35c67e6ab1',ramdisk_id='',reservation_id='r-w6806e6a',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e7094a19-0695-4486-b083-e54642bc0338',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-1491993290',owner_user_name='tempest-AttachSCSIVolumeTestJSON-1491993290-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5c6bdc1acafd4db2bcc3e0251393b901',uuid=321c53a8-3488-43dc-b742-27102b6a5016,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9361bba-d251-49a5-a08b-5068dc6cd434", "address": "fa:16:3e:bb:d6:ab", "network": {"id": "3734f4ab-0343-4d55-9d6e-98b7542ea7fe", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-2108006177-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6d727642dc44b3997a0b35c67e6ab1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9361bba-d2", "ovs_interfaceid": "f9361bba-d251-49a5-a08b-5068dc6cd434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.664 2 DEBUG nova.network.os_vif_util [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Converting VIF {"id": "f9361bba-d251-49a5-a08b-5068dc6cd434", "address": "fa:16:3e:bb:d6:ab", "network": {"id": "3734f4ab-0343-4d55-9d6e-98b7542ea7fe", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-2108006177-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6d727642dc44b3997a0b35c67e6ab1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9361bba-d2", "ovs_interfaceid": "f9361bba-d251-49a5-a08b-5068dc6cd434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.664 2 DEBUG nova.network.os_vif_util [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:d6:ab,bridge_name='br-int',has_traffic_filtering=True,id=f9361bba-d251-49a5-a08b-5068dc6cd434,network=Network(3734f4ab-0343-4d55-9d6e-98b7542ea7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9361bba-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.665 2 DEBUG os_vif [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:d6:ab,bridge_name='br-int',has_traffic_filtering=True,id=f9361bba-d251-49a5-a08b-5068dc6cd434,network=Network(3734f4ab-0343-4d55-9d6e-98b7542ea7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9361bba-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.666 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.666 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.670 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf9361bba-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.670 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf9361bba-d2, col_values=(('external_ids', {'iface-id': 'f9361bba-d251-49a5-a08b-5068dc6cd434', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:d6:ab', 'vm-uuid': '321c53a8-3488-43dc-b742-27102b6a5016'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:47 np0005466031 NetworkManager[44907]: <info>  [1759407827.6733] manager: (tapf9361bba-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.680 2 INFO os_vif [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:d6:ab,bridge_name='br-int',has_traffic_filtering=True,id=f9361bba-d251-49a5-a08b-5068dc6cd434,network=Network(3734f4ab-0343-4d55-9d6e-98b7542ea7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9361bba-d2')#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.732 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.733 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.733 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] No VIF found with MAC fa:16:3e:bb:d6:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.733 2 INFO nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Using config drive#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.759 2 DEBUG nova.storage.rbd_utils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] rbd image 321c53a8-3488-43dc-b742-27102b6a5016_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:47 np0005466031 nova_compute[235803]: 2025-10-02 12:23:47.981 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b3d45275-f66f-4629-896b-8fe3fceb65a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:48 np0005466031 nova_compute[235803]: 2025-10-02 12:23:48.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:48 np0005466031 nova_compute[235803]: 2025-10-02 12:23:48.035 2 DEBUG nova.network.neutron [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Updating instance_info_cache with network_info: [{"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:48 np0005466031 nova_compute[235803]: 2025-10-02 12:23:48.087 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Releasing lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:23:48 np0005466031 nova_compute[235803]: 2025-10-02 12:23:48.087 2 DEBUG nova.compute.manager [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Instance network_info: |[{"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:23:48 np0005466031 nova_compute[235803]: 2025-10-02 12:23:48.088 2 DEBUG oslo_concurrency.lockutils [req-5848cff0-f6bd-4b3c-ab34-cb02c672d4fb req-c1b87699-2cb3-4c32-9b4a-645b7f0b5865 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:23:48 np0005466031 nova_compute[235803]: 2025-10-02 12:23:48.088 2 DEBUG nova.network.neutron [req-5848cff0-f6bd-4b3c-ab34-cb02c672d4fb req-c1b87699-2cb3-4c32-9b4a-645b7f0b5865 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Refreshing network info cache for port fedd61db-0139-4493-a34f-892b56e476fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:23:48 np0005466031 nova_compute[235803]: 2025-10-02 12:23:48.093 2 DEBUG nova.storage.rbd_utils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] resizing rbd image b3d45275-f66f-4629-896b-8fe3fceb65a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:23:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:23:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:48.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:23:48 np0005466031 nova_compute[235803]: 2025-10-02 12:23:48.663 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.007 2 INFO nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Creating config drive at /var/lib/nova/instances/321c53a8-3488-43dc-b742-27102b6a5016/disk.config#033[00m
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.013 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/321c53a8-3488-43dc-b742-27102b6a5016/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1a3hrl9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.143 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/321c53a8-3488-43dc-b742-27102b6a5016/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1a3hrl9" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.178 2 DEBUG nova.storage.rbd_utils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] rbd image 321c53a8-3488-43dc-b742-27102b6a5016_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.182 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/321c53a8-3488-43dc-b742-27102b6a5016/disk.config 321c53a8-3488-43dc-b742-27102b6a5016_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.332 2 DEBUG oslo_concurrency.processutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/321c53a8-3488-43dc-b742-27102b6a5016/disk.config 321c53a8-3488-43dc-b742-27102b6a5016_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.332 2 INFO nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Deleting local config drive /var/lib/nova/instances/321c53a8-3488-43dc-b742-27102b6a5016/disk.config because it was imported into RBD.#033[00m
Oct  2 08:23:49 np0005466031 kernel: tapf9361bba-d2: entered promiscuous mode
Oct  2 08:23:49 np0005466031 NetworkManager[44907]: <info>  [1759407829.3782] manager: (tapf9361bba-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Oct  2 08:23:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:49Z|00120|binding|INFO|Claiming lport f9361bba-d251-49a5-a08b-5068dc6cd434 for this chassis.
Oct  2 08:23:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:49Z|00121|binding|INFO|f9361bba-d251-49a5-a08b-5068dc6cd434: Claiming fa:16:3e:bb:d6:ab 10.100.0.3
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.390 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:d6:ab 10.100.0.3'], port_security=['fa:16:3e:bb:d6:ab 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '321c53a8-3488-43dc-b742-27102b6a5016', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3734f4ab-0343-4d55-9d6e-98b7542ea7fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a6d727642dc44b3997a0b35c67e6ab1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd30173c-fa97-4944-b8ba-1ef49076d4b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bb86c28-493a-4005-969f-63952648d379, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=f9361bba-d251-49a5-a08b-5068dc6cd434) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.391 141898 INFO neutron.agent.ovn.metadata.agent [-] Port f9361bba-d251-49a5-a08b-5068dc6cd434 in datapath 3734f4ab-0343-4d55-9d6e-98b7542ea7fe bound to our chassis#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.393 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3734f4ab-0343-4d55-9d6e-98b7542ea7fe#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.402 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[597b8fdd-0afd-4193-a5f9-7f3d6e2129be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.403 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3734f4ab-01 in ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:23:49 np0005466031 systemd-udevd[258072]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.405 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3734f4ab-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.405 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f7ff63-fb7a-453e-9930-6488354e2463]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.405 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[252315dd-e42b-426b-9bc9-d3289b2bea2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 systemd-machined[192227]: New machine qemu-20-instance-00000034.
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.414 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8d9835-fa50-4af6-be16-1d5f81d84479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 NetworkManager[44907]: <info>  [1759407829.4212] device (tapf9361bba-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:23:49 np0005466031 NetworkManager[44907]: <info>  [1759407829.4227] device (tapf9361bba-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.436 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[186511d7-94f0-455b-9fe6-86ed0c441422]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:49 np0005466031 systemd[1]: Started Virtual Machine qemu-20-instance-00000034.
Oct  2 08:23:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:49Z|00122|binding|INFO|Setting lport f9361bba-d251-49a5-a08b-5068dc6cd434 ovn-installed in OVS
Oct  2 08:23:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:49Z|00123|binding|INFO|Setting lport f9361bba-d251-49a5-a08b-5068dc6cd434 up in Southbound
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.463 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[174a0db6-6653-494b-bd83-a9e988787c31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 NetworkManager[44907]: <info>  [1759407829.4727] manager: (tap3734f4ab-00): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.472 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e46d1698-adc2-433a-9b1a-b596b8ec5bc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:49.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.498 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[97e84183-a1c4-4576-9b05-e8b44a9ed303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.501 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0660ce7d-153e-465e-999d-85bed8ac0108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 NetworkManager[44907]: <info>  [1759407829.5190] device (tap3734f4ab-00): carrier: link connected
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.522 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[02226d7c-64a2-4c3b-bea3-35dd6272bbc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.536 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a10b55-5ce5-4eb8-8ccc-60560f116c72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3734f4ab-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:a3:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568510, 'reachable_time': 20469, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258107, 'error': None, 'target': 'ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.551 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7d502c74-3927-4023-9e83-7159e0c35166]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:a33f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568510, 'tstamp': 568510}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258109, 'error': None, 'target': 'ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.567 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8189e3-736d-4ad7-a19a-ef7309fd2b34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3734f4ab-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:a3:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568510, 'reachable_time': 20469, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258110, 'error': None, 'target': 'ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.596 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[62f2595c-a5fb-4276-be1e-73640e8030c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.673 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a86ff521-5863-4d61-95af-9ac11f5ffcec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.674 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3734f4ab-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.674 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.674 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3734f4ab-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:49 np0005466031 NetworkManager[44907]: <info>  [1759407829.6773] manager: (tap3734f4ab-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Oct  2 08:23:49 np0005466031 kernel: tap3734f4ab-00: entered promiscuous mode
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.681 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3734f4ab-00, col_values=(('external_ids', {'iface-id': 'ceb0e478-a678-4440-b6cc-adeb9acd8284'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:49Z|00124|binding|INFO|Releasing lport ceb0e478-a678-4440-b6cc-adeb9acd8284 from this chassis (sb_readonly=0)
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.684 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3734f4ab-0343-4d55-9d6e-98b7542ea7fe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3734f4ab-0343-4d55-9d6e-98b7542ea7fe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:23:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.687 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[eb7197fa-b8dc-4758-a5db-44f57d642ead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.688 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-3734f4ab-0343-4d55-9d6e-98b7542ea7fe
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/3734f4ab-0343-4d55-9d6e-98b7542ea7fe.pid.haproxy
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 3734f4ab-0343-4d55-9d6e-98b7542ea7fe
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:23:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:49.689 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe', 'env', 'PROCESS_TAG=haproxy-3734f4ab-0343-4d55-9d6e-98b7542ea7fe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3734f4ab-0343-4d55-9d6e-98b7542ea7fe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:23:49 np0005466031 nova_compute[235803]: 2025-10-02 12:23:49.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:50 np0005466031 podman[258160]: 2025-10-02 12:23:50.079187891 +0000 UTC m=+0.044927178 container create 8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:23:50 np0005466031 systemd[1]: Started libpod-conmon-8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf.scope.
Oct  2 08:23:50 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.139 2 DEBUG nova.network.neutron [req-5848cff0-f6bd-4b3c-ab34-cb02c672d4fb req-c1b87699-2cb3-4c32-9b4a-645b7f0b5865 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Updated VIF entry in instance network info cache for port fedd61db-0139-4493-a34f-892b56e476fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.141 2 DEBUG nova.network.neutron [req-5848cff0-f6bd-4b3c-ab34-cb02c672d4fb req-c1b87699-2cb3-4c32-9b4a-645b7f0b5865 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Updating instance_info_cache with network_info: [{"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:50 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe69430594643dfea5855cb0f1c5b91f8e712fad6983cb2de6f85e122a3d496/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:23:50 np0005466031 podman[258160]: 2025-10-02 12:23:50.054241212 +0000 UTC m=+0.019980519 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:23:50 np0005466031 podman[258160]: 2025-10-02 12:23:50.158024061 +0000 UTC m=+0.123763368 container init 8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:23:50 np0005466031 podman[258160]: 2025-10-02 12:23:50.163011533 +0000 UTC m=+0.128750820 container start 8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.165 2 DEBUG oslo_concurrency.lockutils [req-5848cff0-f6bd-4b3c-ab34-cb02c672d4fb req-c1b87699-2cb3-4c32-9b4a-645b7f0b5865 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:23:50 np0005466031 neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe[258175]: [NOTICE]   (258179) : New worker (258181) forked
Oct  2 08:23:50 np0005466031 neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe[258175]: [NOTICE]   (258179) : Loading success.
Oct  2 08:23:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:23:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:50.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.777 2 DEBUG nova.compute.manager [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Received event network-vif-plugged-f9361bba-d251-49a5-a08b-5068dc6cd434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.778 2 DEBUG oslo_concurrency.lockutils [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "321c53a8-3488-43dc-b742-27102b6a5016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.779 2 DEBUG oslo_concurrency.lockutils [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.779 2 DEBUG oslo_concurrency.lockutils [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.779 2 DEBUG nova.compute.manager [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Processing event network-vif-plugged-f9361bba-d251-49a5-a08b-5068dc6cd434 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.780 2 DEBUG nova.compute.manager [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Received event network-vif-plugged-f9361bba-d251-49a5-a08b-5068dc6cd434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.780 2 DEBUG oslo_concurrency.lockutils [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "321c53a8-3488-43dc-b742-27102b6a5016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.780 2 DEBUG oslo_concurrency.lockutils [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.780 2 DEBUG oslo_concurrency.lockutils [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.780 2 DEBUG nova.compute.manager [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] No waiting events found dispatching network-vif-plugged-f9361bba-d251-49a5-a08b-5068dc6cd434 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:23:50 np0005466031 nova_compute[235803]: 2025-10-02 12:23:50.781 2 WARNING nova.compute.manager [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Received unexpected event network-vif-plugged-f9361bba-d251-49a5-a08b-5068dc6cd434 for instance with vm_state building and task_state spawning.
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.469 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407831.4392765, 321c53a8-3488-43dc-b742-27102b6a5016 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.470 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] VM Started (Lifecycle Event)
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.471 2 DEBUG nova.compute.manager [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.477 2 DEBUG nova.objects.instance [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'migration_context' on Instance uuid b3d45275-f66f-4629-896b-8fe3fceb65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.478 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.480 2 INFO nova.virt.libvirt.driver [-] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Instance spawned successfully.
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.481 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Attempting to register defaults for the following image properties: ['hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.484 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.485 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.485 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.486 2 DEBUG nova.virt.libvirt.driver [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:23:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:51.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.497 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.499 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.514 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.515 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Ensure instance console log exists: /var/lib/nova/instances/b3d45275-f66f-4629-896b-8fe3fceb65a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.515 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.515 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.515 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.517 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Start _get_guest_xml network_info=[{"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.520 2 WARNING nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.524 2 DEBUG nova.virt.libvirt.host [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.525 2 DEBUG nova.virt.libvirt.host [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.526 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.526 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407831.439451, 321c53a8-3488-43dc-b742-27102b6a5016 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.526 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] VM Paused (Lifecycle Event)
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.530 2 DEBUG nova.virt.libvirt.host [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.531 2 DEBUG nova.virt.libvirt.host [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.532 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.533 2 DEBUG nova.virt.hardware [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.534 2 DEBUG nova.virt.hardware [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.534 2 DEBUG nova.virt.hardware [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.534 2 DEBUG nova.virt.hardware [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.535 2 DEBUG nova.virt.hardware [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.535 2 DEBUG nova.virt.hardware [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.536 2 DEBUG nova.virt.hardware [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.536 2 DEBUG nova.virt.hardware [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.536 2 DEBUG nova.virt.hardware [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.537 2 DEBUG nova.virt.hardware [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.537 2 DEBUG nova.virt.hardware [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.541 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.569 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.573 2 INFO nova.compute.manager [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Took 18.02 seconds to spawn the instance on the hypervisor.
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.573 2 DEBUG nova.compute.manager [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.577 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407831.4746895, 321c53a8-3488-43dc-b742-27102b6a5016 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.578 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] VM Resumed (Lifecycle Event)
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.611 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.614 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.643 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.660 2 INFO nova.compute.manager [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Took 19.02 seconds to build instance.
Oct  2 08:23:51 np0005466031 nova_compute[235803]: 2025-10-02 12:23:51.676 2 DEBUG oslo_concurrency.lockutils [None req-d1228da5-810b-41fc-8869-d8512db87b42 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:23:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:23:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3570288810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.101 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.123 2 DEBUG nova.storage.rbd_utils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image b3d45275-f66f-4629-896b-8fe3fceb65a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.126 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:23:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:52.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:23:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2916462947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.674 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.676 2 DEBUG nova.virt.libvirt.vif [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1553191561',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1553191561',id=53,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEu8Mg1mF4AKAnETiPc/1EN0o0yW0N8RDXfeYfe2EYf2XEQAi1u5vSoxbgTCJBIGOu3aJCScfGKzHsqNwZ9VVPhf8HNvzQILfXuoUBQZfVHYTHLisifzGPoXHjQ6TltjQ==',key_name='tempest-keypair-2123530339',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='12dfeaa31a6e4a2481a5332ce3094262',ramdisk_id='',reservation_id='r-eqvwjr3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51b45ef40bdc499a8409fd2bf3e6a339',uuid=b3d45275-f66f-4629-896b-8fe3fceb65a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.677 2 DEBUG nova.network.os_vif_util [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converting VIF {"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.678 2 DEBUG nova.network.os_vif_util [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:43:18,bridge_name='br-int',has_traffic_filtering=True,id=fedd61db-0139-4493-a34f-892b56e476fe,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfedd61db-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.680 2 DEBUG nova.objects.instance [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3d45275-f66f-4629-896b-8fe3fceb65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.697 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  <uuid>b3d45275-f66f-4629-896b-8fe3fceb65a3</uuid>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  <name>instance-00000035</name>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-1553191561</nova:name>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:23:51</nova:creationTime>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <nova:user uuid="51b45ef40bdc499a8409fd2bf3e6a339">tempest-UpdateMultiattachVolumeNegativeTest-158673309-project-member</nova:user>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <nova:project uuid="12dfeaa31a6e4a2481a5332ce3094262">tempest-UpdateMultiattachVolumeNegativeTest-158673309</nova:project>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <nova:port uuid="fedd61db-0139-4493-a34f-892b56e476fe">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <entry name="serial">b3d45275-f66f-4629-896b-8fe3fceb65a3</entry>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <entry name="uuid">b3d45275-f66f-4629-896b-8fe3fceb65a3</entry>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/b3d45275-f66f-4629-896b-8fe3fceb65a3_disk">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/b3d45275-f66f-4629-896b-8fe3fceb65a3_disk.config">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:ba:43:18"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <target dev="tapfedd61db-01"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/b3d45275-f66f-4629-896b-8fe3fceb65a3/console.log" append="off"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:23:52 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:23:52 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:23:52 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:23:52 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.705 2 DEBUG nova.compute.manager [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Preparing to wait for external event network-vif-plugged-fedd61db-0139-4493-a34f-892b56e476fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.706 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.706 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.707 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.708 2 DEBUG nova.virt.libvirt.vif [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1553191561',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1553191561',id=53,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEu8Mg1mF4AKAnETiPc/1EN0o0yW0N8RDXfeYfe2EYf2XEQAi1u5vSoxbgTCJBIGOu3aJCScfGKzHsqNwZ9VVPhf8HNvzQILfXuoUBQZfVHYTHLisifzGPoXHjQ6TltjQ==',key_name='tempest-keypair-2123530339',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='12dfeaa31a6e4a2481a5332ce3094262',ramdisk_id='',reservation_id='r-eqvwjr3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51b45ef40bdc499a8409fd2bf3e6a339',uuid=b3d45275-f66f-4629-896b-8fe3fceb65a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.708 2 DEBUG nova.network.os_vif_util [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converting VIF {"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.709 2 DEBUG nova.network.os_vif_util [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:43:18,bridge_name='br-int',has_traffic_filtering=True,id=fedd61db-0139-4493-a34f-892b56e476fe,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfedd61db-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.710 2 DEBUG os_vif [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:43:18,bridge_name='br-int',has_traffic_filtering=True,id=fedd61db-0139-4493-a34f-892b56e476fe,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfedd61db-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.715 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfedd61db-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.716 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfedd61db-01, col_values=(('external_ids', {'iface-id': 'fedd61db-0139-4493-a34f-892b56e476fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:43:18', 'vm-uuid': 'b3d45275-f66f-4629-896b-8fe3fceb65a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:52 np0005466031 NetworkManager[44907]: <info>  [1759407832.7204] manager: (tapfedd61db-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.726 2 INFO os_vif [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:43:18,bridge_name='br-int',has_traffic_filtering=True,id=fedd61db-0139-4493-a34f-892b56e476fe,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfedd61db-01')#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.824 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.835 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.835 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No VIF found with MAC fa:16:3e:ba:43:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.839 2 INFO nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Using config drive#033[00m
Oct  2 08:23:52 np0005466031 nova_compute[235803]: 2025-10-02 12:23:52.864 2 DEBUG nova.storage.rbd_utils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image b3d45275-f66f-4629-896b-8fe3fceb65a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:53 np0005466031 nova_compute[235803]: 2025-10-02 12:23:53.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:53 np0005466031 nova_compute[235803]: 2025-10-02 12:23:53.330 2 INFO nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Creating config drive at /var/lib/nova/instances/b3d45275-f66f-4629-896b-8fe3fceb65a3/disk.config#033[00m
Oct  2 08:23:53 np0005466031 nova_compute[235803]: 2025-10-02 12:23:53.336 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3d45275-f66f-4629-896b-8fe3fceb65a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplrz7_a3f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:53 np0005466031 nova_compute[235803]: 2025-10-02 12:23:53.476 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3d45275-f66f-4629-896b-8fe3fceb65a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplrz7_a3f" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:53.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:53 np0005466031 nova_compute[235803]: 2025-10-02 12:23:53.502 2 DEBUG nova.storage.rbd_utils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image b3d45275-f66f-4629-896b-8fe3fceb65a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:53 np0005466031 nova_compute[235803]: 2025-10-02 12:23:53.505 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3d45275-f66f-4629-896b-8fe3fceb65a3/disk.config b3d45275-f66f-4629-896b-8fe3fceb65a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:54.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:55 np0005466031 NetworkManager[44907]: <info>  [1759407835.1338] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/71)
Oct  2 08:23:55 np0005466031 NetworkManager[44907]: <info>  [1759407835.1343] device (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 08:23:55 np0005466031 NetworkManager[44907]: <info>  [1759407835.1355] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/72)
Oct  2 08:23:55 np0005466031 NetworkManager[44907]: <info>  [1759407835.1360] device (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 08:23:55 np0005466031 nova_compute[235803]: 2025-10-02 12:23:55.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:55 np0005466031 NetworkManager[44907]: <info>  [1759407835.1372] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct  2 08:23:55 np0005466031 NetworkManager[44907]: <info>  [1759407835.1378] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Oct  2 08:23:55 np0005466031 NetworkManager[44907]: <info>  [1759407835.1385] device (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  2 08:23:55 np0005466031 NetworkManager[44907]: <info>  [1759407835.1389] device (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  2 08:23:55 np0005466031 nova_compute[235803]: 2025-10-02 12:23:55.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:55Z|00125|binding|INFO|Releasing lport ceb0e478-a678-4440-b6cc-adeb9acd8284 from this chassis (sb_readonly=0)
Oct  2 08:23:55 np0005466031 nova_compute[235803]: 2025-10-02 12:23:55.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:23:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:55.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:23:55 np0005466031 nova_compute[235803]: 2025-10-02 12:23:55.985 2 DEBUG nova.compute.manager [req-4e6cf317-a1da-40e4-ba4a-ddec4b2ca35d req-9a26a34a-db3f-4f8a-8e8f-e0c28ced9c0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Received event network-changed-f9361bba-d251-49a5-a08b-5068dc6cd434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:55 np0005466031 nova_compute[235803]: 2025-10-02 12:23:55.985 2 DEBUG nova.compute.manager [req-4e6cf317-a1da-40e4-ba4a-ddec4b2ca35d req-9a26a34a-db3f-4f8a-8e8f-e0c28ced9c0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Refreshing instance network info cache due to event network-changed-f9361bba-d251-49a5-a08b-5068dc6cd434. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:23:55 np0005466031 nova_compute[235803]: 2025-10-02 12:23:55.986 2 DEBUG oslo_concurrency.lockutils [req-4e6cf317-a1da-40e4-ba4a-ddec4b2ca35d req-9a26a34a-db3f-4f8a-8e8f-e0c28ced9c0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-321c53a8-3488-43dc-b742-27102b6a5016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:23:55 np0005466031 nova_compute[235803]: 2025-10-02 12:23:55.986 2 DEBUG oslo_concurrency.lockutils [req-4e6cf317-a1da-40e4-ba4a-ddec4b2ca35d req-9a26a34a-db3f-4f8a-8e8f-e0c28ced9c0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-321c53a8-3488-43dc-b742-27102b6a5016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:23:55 np0005466031 nova_compute[235803]: 2025-10-02 12:23:55.986 2 DEBUG nova.network.neutron [req-4e6cf317-a1da-40e4-ba4a-ddec4b2ca35d req-9a26a34a-db3f-4f8a-8e8f-e0c28ced9c0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Refreshing network info cache for port f9361bba-d251-49a5-a08b-5068dc6cd434 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:23:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:56.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:56 np0005466031 nova_compute[235803]: 2025-10-02 12:23:56.900 2 DEBUG oslo_concurrency.processutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3d45275-f66f-4629-896b-8fe3fceb65a3/disk.config b3d45275-f66f-4629-896b-8fe3fceb65a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:56 np0005466031 nova_compute[235803]: 2025-10-02 12:23:56.900 2 INFO nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Deleting local config drive /var/lib/nova/instances/b3d45275-f66f-4629-896b-8fe3fceb65a3/disk.config because it was imported into RBD.#033[00m
Oct  2 08:23:56 np0005466031 kernel: tapfedd61db-01: entered promiscuous mode
Oct  2 08:23:56 np0005466031 nova_compute[235803]: 2025-10-02 12:23:56.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:56 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:56Z|00126|binding|INFO|Claiming lport fedd61db-0139-4493-a34f-892b56e476fe for this chassis.
Oct  2 08:23:56 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:56Z|00127|binding|INFO|fedd61db-0139-4493-a34f-892b56e476fe: Claiming fa:16:3e:ba:43:18 10.100.0.13
Oct  2 08:23:56 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:56Z|00128|binding|INFO|Setting lport fedd61db-0139-4493-a34f-892b56e476fe ovn-installed in OVS
Oct  2 08:23:56 np0005466031 NetworkManager[44907]: <info>  [1759407836.9666] manager: (tapfedd61db-01): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Oct  2 08:23:56 np0005466031 nova_compute[235803]: 2025-10-02 12:23:56.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:56 np0005466031 nova_compute[235803]: 2025-10-02 12:23:56.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:56 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:56Z|00129|binding|INFO|Setting lport fedd61db-0139-4493-a34f-892b56e476fe up in Southbound
Oct  2 08:23:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:56.986 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:43:18 10.100.0.13'], port_security=['fa:16:3e:ba:43:18 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b3d45275-f66f-4629-896b-8fe3fceb65a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34ecce08-278a-4a16-9f99-cfef8148769d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '12dfeaa31a6e4a2481a5332ce3094262', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7b0b284f-6afe-4611-b8db-1ab4d5466651', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7fc2557b-b462-4493-9e4f-7b4266aaba5c, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=fedd61db-0139-4493-a34f-892b56e476fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:56.987 141898 INFO neutron.agent.ovn.metadata.agent [-] Port fedd61db-0139-4493-a34f-892b56e476fe in datapath 34ecce08-278a-4a16-9f99-cfef8148769d bound to our chassis#033[00m
Oct  2 08:23:56 np0005466031 systemd-machined[192227]: New machine qemu-21-instance-00000035.
Oct  2 08:23:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:56.992 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34ecce08-278a-4a16-9f99-cfef8148769d#033[00m
Oct  2 08:23:56 np0005466031 systemd[1]: Started Virtual Machine qemu-21-instance-00000035.
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.002 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[67c32078-a115-4029-9c3e-1b78cc34d9da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.003 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34ecce08-21 in ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.005 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34ecce08-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:23:57 np0005466031 systemd-udevd[258424]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.005 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ab22009e-43c4-4c55-b911-5085f668fde6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.006 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[690013d6-39fe-4d3c-a7a5-95c452b77b66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 NetworkManager[44907]: <info>  [1759407837.0201] device (tapfedd61db-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:23:57 np0005466031 NetworkManager[44907]: <info>  [1759407837.0213] device (tapfedd61db-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.021 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[3988ccc2-6709-4458-8e2d-fd9170d4ee9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.046 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[24fcc6a4-3032-4a7c-98b3-4c90a102dff2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.071 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4fd82a-2cbf-4cad-a0d4-40eb161a904e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.076 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6915e00e-2adf-444d-8e48-ea97f111d2a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 NetworkManager[44907]: <info>  [1759407837.0773] manager: (tap34ecce08-20): new Veth device (/org/freedesktop/NetworkManager/Devices/76)
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.103 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b63d3c05-53f0-429f-a040-bbb46008be83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.106 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a155d667-99cf-408a-aaf7-7b9aabb882d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 NetworkManager[44907]: <info>  [1759407837.1296] device (tap34ecce08-20): carrier: link connected
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.134 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a05ba98f-ac25-44ef-bfe7-f19f7b9a1718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.149 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9100fa6f-4736-410f-ad6d-ecb3331b2ea9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34ecce08-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:c7:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569272, 'reachable_time': 19122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258457, 'error': None, 'target': 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.161 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[82d8d772-b4f7-4871-981f-3bb500f7a033]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:c771'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 569272, 'tstamp': 569272}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258458, 'error': None, 'target': 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.175 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9001bd-521a-4df4-9db7-893acfb289ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34ecce08-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:c7:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569272, 'reachable_time': 19122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258459, 'error': None, 'target': 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.196 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[df5b5bf0-db82-437b-a749-5c009d98d5a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.253 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1132d581-f5db-42e1-a234-4f0085659b93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.254 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34ecce08-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.254 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.255 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34ecce08-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:57 np0005466031 nova_compute[235803]: 2025-10-02 12:23:57.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466031 NetworkManager[44907]: <info>  [1759407837.2577] manager: (tap34ecce08-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Oct  2 08:23:57 np0005466031 kernel: tap34ecce08-20: entered promiscuous mode
Oct  2 08:23:57 np0005466031 nova_compute[235803]: 2025-10-02 12:23:57.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.264 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34ecce08-20, col_values=(('external_ids', {'iface-id': '8d2c214b-08f8-42fc-8049-39454e430512'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:57 np0005466031 nova_compute[235803]: 2025-10-02 12:23:57.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466031 ovn_controller[132413]: 2025-10-02T12:23:57Z|00130|binding|INFO|Releasing lport 8d2c214b-08f8-42fc-8049-39454e430512 from this chassis (sb_readonly=0)
Oct  2 08:23:57 np0005466031 nova_compute[235803]: 2025-10-02 12:23:57.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.270 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34ecce08-278a-4a16-9f99-cfef8148769d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34ecce08-278a-4a16-9f99-cfef8148769d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.271 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[97923b33-d758-4ba7-b614-9b88f72d5e5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.272 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-34ecce08-278a-4a16-9f99-cfef8148769d
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/34ecce08-278a-4a16-9f99-cfef8148769d.pid.haproxy
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 34ecce08-278a-4a16-9f99-cfef8148769d
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:23:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:23:57.272 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'env', 'PROCESS_TAG=haproxy-34ecce08-278a-4a16-9f99-cfef8148769d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34ecce08-278a-4a16-9f99-cfef8148769d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:23:57 np0005466031 nova_compute[235803]: 2025-10-02 12:23:57.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466031 nova_compute[235803]: 2025-10-02 12:23:57.446 2 DEBUG nova.compute.manager [req-40af6544-798d-42de-bfb3-051242327d96 req-a333b3a2-c6d2-4c8b-9580-558549ea3428 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Received event network-vif-plugged-fedd61db-0139-4493-a34f-892b56e476fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:57 np0005466031 nova_compute[235803]: 2025-10-02 12:23:57.447 2 DEBUG oslo_concurrency.lockutils [req-40af6544-798d-42de-bfb3-051242327d96 req-a333b3a2-c6d2-4c8b-9580-558549ea3428 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:57 np0005466031 nova_compute[235803]: 2025-10-02 12:23:57.447 2 DEBUG oslo_concurrency.lockutils [req-40af6544-798d-42de-bfb3-051242327d96 req-a333b3a2-c6d2-4c8b-9580-558549ea3428 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:57 np0005466031 nova_compute[235803]: 2025-10-02 12:23:57.447 2 DEBUG oslo_concurrency.lockutils [req-40af6544-798d-42de-bfb3-051242327d96 req-a333b3a2-c6d2-4c8b-9580-558549ea3428 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:57 np0005466031 nova_compute[235803]: 2025-10-02 12:23:57.448 2 DEBUG nova.compute.manager [req-40af6544-798d-42de-bfb3-051242327d96 req-a333b3a2-c6d2-4c8b-9580-558549ea3428 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Processing event network-vif-plugged-fedd61db-0139-4493-a34f-892b56e476fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:23:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:57.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:57 np0005466031 podman[258608]: 2025-10-02 12:23:57.670674018 +0000 UTC m=+0.065341238 container create c334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:23:57 np0005466031 systemd[1]: Started libpod-conmon-c334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7.scope.
Oct  2 08:23:57 np0005466031 nova_compute[235803]: 2025-10-02 12:23:57.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466031 podman[258608]: 2025-10-02 12:23:57.642078086 +0000 UTC m=+0.036745316 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:23:57 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:23:57 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b263adbf7a20944adb161b53a17e6442608dc0f08415099427bfc5f186c9147a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:23:57 np0005466031 podman[258608]: 2025-10-02 12:23:57.757335801 +0000 UTC m=+0.152003031 container init c334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:23:57 np0005466031 podman[258608]: 2025-10-02 12:23:57.773057538 +0000 UTC m=+0.167724748 container start c334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:23:57 np0005466031 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[258623]: [NOTICE]   (258646) : New worker (258651) forked
Oct  2 08:23:57 np0005466031 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[258623]: [NOTICE]   (258646) : Loading success.
Oct  2 08:23:58 np0005466031 nova_compute[235803]: 2025-10-02 12:23:58.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:58 np0005466031 podman[258708]: 2025-10-02 12:23:58.080093183 +0000 UTC m=+0.065664207 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 08:23:58 np0005466031 podman[258708]: 2025-10-02 12:23:58.172109128 +0000 UTC m=+0.157680122 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 08:23:58 np0005466031 nova_compute[235803]: 2025-10-02 12:23:58.184 2 DEBUG nova.network.neutron [req-4e6cf317-a1da-40e4-ba4a-ddec4b2ca35d req-9a26a34a-db3f-4f8a-8e8f-e0c28ced9c0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Updated VIF entry in instance network info cache for port f9361bba-d251-49a5-a08b-5068dc6cd434. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:23:58 np0005466031 nova_compute[235803]: 2025-10-02 12:23:58.184 2 DEBUG nova.network.neutron [req-4e6cf317-a1da-40e4-ba4a-ddec4b2ca35d req-9a26a34a-db3f-4f8a-8e8f-e0c28ced9c0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Updating instance_info_cache with network_info: [{"id": "f9361bba-d251-49a5-a08b-5068dc6cd434", "address": "fa:16:3e:bb:d6:ab", "network": {"id": "3734f4ab-0343-4d55-9d6e-98b7542ea7fe", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-2108006177-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6d727642dc44b3997a0b35c67e6ab1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9361bba-d2", "ovs_interfaceid": "f9361bba-d251-49a5-a08b-5068dc6cd434", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:58.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:58 np0005466031 nova_compute[235803]: 2025-10-02 12:23:58.345 2 DEBUG oslo_concurrency.lockutils [req-4e6cf317-a1da-40e4-ba4a-ddec4b2ca35d req-9a26a34a-db3f-4f8a-8e8f-e0c28ced9c0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-321c53a8-3488-43dc-b742-27102b6a5016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:23:58 np0005466031 podman[258862]: 2025-10-02 12:23:58.656229826 +0000 UTC m=+0.061941651 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 08:23:58 np0005466031 podman[258882]: 2025-10-02 12:23:58.721424249 +0000 UTC m=+0.049306282 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 08:23:58 np0005466031 podman[258862]: 2025-10-02 12:23:58.726428931 +0000 UTC m=+0.132140766 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 08:23:58 np0005466031 podman[258927]: 2025-10-02 12:23:58.893957452 +0000 UTC m=+0.044332641 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, release=1793, io.openshift.expose-services=, vcs-type=git, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived)
Oct  2 08:23:58 np0005466031 podman[258927]: 2025-10-02 12:23:58.906773826 +0000 UTC m=+0.057149005 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, architecture=x86_64, io.buildah.version=1.28.2)
Oct  2 08:23:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:23:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:59.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.559 2 DEBUG nova.compute.manager [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Received event network-vif-plugged-fedd61db-0139-4493-a34f-892b56e476fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.560 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.560 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.560 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.561 2 DEBUG nova.compute.manager [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] No waiting events found dispatching network-vif-plugged-fedd61db-0139-4493-a34f-892b56e476fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.561 2 WARNING nova.compute.manager [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Received unexpected event network-vif-plugged-fedd61db-0139-4493-a34f-892b56e476fe for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:23:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.723 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407839.7233088, b3d45275-f66f-4629-896b-8fe3fceb65a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.723 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] VM Started (Lifecycle Event)#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.726 2 DEBUG nova.compute.manager [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.729 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.732 2 INFO nova.virt.libvirt.driver [-] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Instance spawned successfully.#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.733 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.762 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.765 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.772 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.774 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.774 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.775 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.775 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.776 2 DEBUG nova.virt.libvirt.driver [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.808 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.808 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407839.7255127, b3d45275-f66f-4629-896b-8fe3fceb65a3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.809 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.851 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.856 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759407839.7289572, b3d45275-f66f-4629-896b-8fe3fceb65a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.856 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.864 2 INFO nova.compute.manager [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Took 17.71 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.865 2 DEBUG nova.compute.manager [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.916 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.921 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:23:59 np0005466031 nova_compute[235803]: 2025-10-02 12:23:59.979 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:24:00 np0005466031 nova_compute[235803]: 2025-10-02 12:24:00.004 2 INFO nova.compute.manager [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Took 19.14 seconds to build instance.#033[00m
Oct  2 08:24:00 np0005466031 nova_compute[235803]: 2025-10-02 12:24:00.029 2 DEBUG oslo_concurrency.lockutils [None req-e01ab5ac-195b-454a-9dea-ec5b12a73b06 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:00 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:24:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:00.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:24:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:24:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:24:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:24:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:01.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:24:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1954818865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:24:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:24:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1954818865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:24:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:02.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:02 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:02Z|00131|memory|INFO|peak resident set size grew 50% in last 1697.1 seconds, from 16256 kB to 24456 kB
Oct  2 08:24:02 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:02Z|00132|memory|INFO|idl-cells-OVN_Southbound:10316 idl-cells-Open_vSwitch:984 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:361 lflow-cache-entries-cache-matches:291 lflow-cache-size-KB:1445 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:652 ofctrl_installed_flow_usage-KB:477 ofctrl_sb_flow_ref_usage-KB:244
Oct  2 08:24:02 np0005466031 podman[259120]: 2025-10-02 12:24:02.657319361 +0000 UTC m=+0.083251187 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 08:24:02 np0005466031 podman[259121]: 2025-10-02 12:24:02.662275241 +0000 UTC m=+0.088226138 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 08:24:02 np0005466031 nova_compute[235803]: 2025-10-02 12:24:02.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:03 np0005466031 nova_compute[235803]: 2025-10-02 12:24:03.021 2 DEBUG nova.compute.manager [req-53918a01-70a2-4bcb-ab12-2b72534ab596 req-e04a61d7-5bf1-4b93-8344-93e5d43c48a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Received event network-changed-fedd61db-0139-4493-a34f-892b56e476fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:24:03 np0005466031 nova_compute[235803]: 2025-10-02 12:24:03.021 2 DEBUG nova.compute.manager [req-53918a01-70a2-4bcb-ab12-2b72534ab596 req-e04a61d7-5bf1-4b93-8344-93e5d43c48a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Refreshing instance network info cache due to event network-changed-fedd61db-0139-4493-a34f-892b56e476fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:24:03 np0005466031 nova_compute[235803]: 2025-10-02 12:24:03.022 2 DEBUG oslo_concurrency.lockutils [req-53918a01-70a2-4bcb-ab12-2b72534ab596 req-e04a61d7-5bf1-4b93-8344-93e5d43c48a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:24:03 np0005466031 nova_compute[235803]: 2025-10-02 12:24:03.022 2 DEBUG oslo_concurrency.lockutils [req-53918a01-70a2-4bcb-ab12-2b72534ab596 req-e04a61d7-5bf1-4b93-8344-93e5d43c48a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:24:03 np0005466031 nova_compute[235803]: 2025-10-02 12:24:03.022 2 DEBUG nova.network.neutron [req-53918a01-70a2-4bcb-ab12-2b72534ab596 req-e04a61d7-5bf1-4b93-8344-93e5d43c48a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Refreshing network info cache for port fedd61db-0139-4493-a34f-892b56e476fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:24:03 np0005466031 nova_compute[235803]: 2025-10-02 12:24:03.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:03.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:04.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e206 e206: 3 total, 3 up, 3 in
Oct  2 08:24:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:05.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:06.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:06 np0005466031 nova_compute[235803]: 2025-10-02 12:24:06.671 2 DEBUG nova.network.neutron [req-53918a01-70a2-4bcb-ab12-2b72534ab596 req-e04a61d7-5bf1-4b93-8344-93e5d43c48a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Updated VIF entry in instance network info cache for port fedd61db-0139-4493-a34f-892b56e476fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:24:06 np0005466031 nova_compute[235803]: 2025-10-02 12:24:06.672 2 DEBUG nova.network.neutron [req-53918a01-70a2-4bcb-ab12-2b72534ab596 req-e04a61d7-5bf1-4b93-8344-93e5d43c48a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Updating instance_info_cache with network_info: [{"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:24:06 np0005466031 nova_compute[235803]: 2025-10-02 12:24:06.703 2 DEBUG oslo_concurrency.lockutils [req-53918a01-70a2-4bcb-ab12-2b72534ab596 req-e04a61d7-5bf1-4b93-8344-93e5d43c48a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:24:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:07.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:07 np0005466031 nova_compute[235803]: 2025-10-02 12:24:07.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:07Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:d6:ab 10.100.0.3
Oct  2 08:24:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:07Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:d6:ab 10.100.0.3
Oct  2 08:24:08 np0005466031 nova_compute[235803]: 2025-10-02 12:24:08.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:08.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:09.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:09.954103) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407849954149, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2818, "num_deletes": 518, "total_data_size": 5961745, "memory_usage": 6045168, "flush_reason": "Manual Compaction"}
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407849985428, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 3917090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30970, "largest_seqno": 33783, "table_properties": {"data_size": 3905675, "index_size": 7013, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 26911, "raw_average_key_size": 20, "raw_value_size": 3880750, "raw_average_value_size": 2917, "num_data_blocks": 302, "num_entries": 1330, "num_filter_entries": 1330, "num_deletions": 518, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407647, "oldest_key_time": 1759407647, "file_creation_time": 1759407849, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 31379 microseconds, and 10299 cpu microseconds.
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:09.985483) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 3917090 bytes OK
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:09.985504) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:09.988830) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:09.988842) EVENT_LOG_v1 {"time_micros": 1759407849988838, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:09.988858) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 5948233, prev total WAL file size 5948233, number of live WAL files 2.
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:09.990139) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3825KB)], [60(8427KB)]
Oct  2 08:24:09 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407849990169, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 12546985, "oldest_snapshot_seqno": -1}
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 5686 keys, 10470737 bytes, temperature: kUnknown
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407850033792, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 10470737, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10430323, "index_size": 25072, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 146251, "raw_average_key_size": 25, "raw_value_size": 10325695, "raw_average_value_size": 1815, "num_data_blocks": 1008, "num_entries": 5686, "num_filter_entries": 5686, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759407849, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:10.034000) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10470737 bytes
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:10.036615) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 287.1 rd, 239.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.2 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6738, records dropped: 1052 output_compression: NoCompression
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:10.036637) EVENT_LOG_v1 {"time_micros": 1759407850036625, "job": 36, "event": "compaction_finished", "compaction_time_micros": 43696, "compaction_time_cpu_micros": 20561, "output_level": 6, "num_output_files": 1, "total_output_size": 10470737, "num_input_records": 6738, "num_output_records": 5686, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407850037317, "job": 36, "event": "table_file_deletion", "file_number": 62}
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407850038837, "job": 36, "event": "table_file_deletion", "file_number": 60}
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:09.990044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:10.038860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:10.038864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:10.038866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:10.038867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:24:10.038868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:10.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:11.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:12 np0005466031 podman[259195]: 2025-10-02 12:24:12.128535187 +0000 UTC m=+0.069361062 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 08:24:12 np0005466031 podman[259196]: 2025-10-02 12:24:12.161671799 +0000 UTC m=+0.091084960 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2)
Oct  2 08:24:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:12.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:12 np0005466031 nova_compute[235803]: 2025-10-02 12:24:12.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:13 np0005466031 nova_compute[235803]: 2025-10-02 12:24:13.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:13 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:13Z|00133|binding|INFO|Releasing lport 8d2c214b-08f8-42fc-8049-39454e430512 from this chassis (sb_readonly=0)
Oct  2 08:24:13 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:13Z|00134|binding|INFO|Releasing lport ceb0e478-a678-4440-b6cc-adeb9acd8284 from this chassis (sb_readonly=0)
Oct  2 08:24:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e207 e207: 3 total, 3 up, 3 in
Oct  2 08:24:13 np0005466031 nova_compute[235803]: 2025-10-02 12:24:13.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:13.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:14.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e208 e208: 3 total, 3 up, 3 in
Oct  2 08:24:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:15.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:16.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:17.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:17 np0005466031 nova_compute[235803]: 2025-10-02 12:24:17.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:18 np0005466031 nova_compute[235803]: 2025-10-02 12:24:18.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:18.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e209 e209: 3 total, 3 up, 3 in
Oct  2 08:24:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:19.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:20.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:20 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:20Z|00135|binding|INFO|Releasing lport 8d2c214b-08f8-42fc-8049-39454e430512 from this chassis (sb_readonly=0)
Oct  2 08:24:20 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:20Z|00136|binding|INFO|Releasing lport ceb0e478-a678-4440-b6cc-adeb9acd8284 from this chassis (sb_readonly=0)
Oct  2 08:24:20 np0005466031 nova_compute[235803]: 2025-10-02 12:24:20.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:20 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:20Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:43:18 10.100.0.13
Oct  2 08:24:20 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:20Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:43:18 10.100.0.13
Oct  2 08:24:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:21.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:22.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e210 e210: 3 total, 3 up, 3 in
Oct  2 08:24:22 np0005466031 nova_compute[235803]: 2025-10-02 12:24:22.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:23 np0005466031 nova_compute[235803]: 2025-10-02 12:24:23.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:23.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:24.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:25.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:25.830 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:25.831 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:25.832 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:24:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:26.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:24:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:24:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:24:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:27.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:27 np0005466031 nova_compute[235803]: 2025-10-02 12:24:27.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e211 e211: 3 total, 3 up, 3 in
Oct  2 08:24:28 np0005466031 nova_compute[235803]: 2025-10-02 12:24:28.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:28 np0005466031 nova_compute[235803]: 2025-10-02 12:24:28.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:28.149 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:24:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:28.150 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:24:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:28.151 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:24:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:28.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:28 np0005466031 nova_compute[235803]: 2025-10-02 12:24:28.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:29.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:30.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:30 np0005466031 nova_compute[235803]: 2025-10-02 12:24:30.543 2 DEBUG oslo_concurrency.lockutils [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Acquiring lock "321c53a8-3488-43dc-b742-27102b6a5016" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:30 np0005466031 nova_compute[235803]: 2025-10-02 12:24:30.544 2 DEBUG oslo_concurrency.lockutils [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:30 np0005466031 nova_compute[235803]: 2025-10-02 12:24:30.559 2 DEBUG nova.objects.instance [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lazy-loading 'flavor' on Instance uuid 321c53a8-3488-43dc-b742-27102b6a5016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:24:30 np0005466031 nova_compute[235803]: 2025-10-02 12:24:30.594 2 DEBUG oslo_concurrency.lockutils [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.169 2 DEBUG oslo_concurrency.lockutils [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Acquiring lock "321c53a8-3488-43dc-b742-27102b6a5016" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.170 2 DEBUG oslo_concurrency.lockutils [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.170 2 INFO nova.compute.manager [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Attaching volume f9d9b67b-54c7-4f6f-b70f-7ca98b415001 to /dev/sdc#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.353 2 DEBUG os_brick.utils [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.354 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.368 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.369 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[363ec590-7741-40ed-b5cb-aeca6a9d330a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.370 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.376 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.377 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd4b34e-5b76-47e1-bc32-964d37fbdb87]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.378 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.387 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.388 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[dcff2a8d-f334-4518-9ffe-588a7babce5e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.389 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebd4abe-901b-43f8-a41b-d868528e5a00]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.389 2 DEBUG oslo_concurrency.processutils [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.419 2 DEBUG oslo_concurrency.processutils [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.422 2 DEBUG os_brick.initiator.connectors.lightos [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.422 2 DEBUG os_brick.initiator.connectors.lightos [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.422 2 DEBUG os_brick.initiator.connectors.lightos [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.423 2 DEBUG os_brick.utils [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:24:31 np0005466031 nova_compute[235803]: 2025-10-02 12:24:31.423 2 DEBUG nova.virt.block_device [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Updating existing volume attachment record: 12eb22e2-2245-49e3-8bc1-c1f37d8f4f7a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:24:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:31.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:24:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3742383102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:24:32 np0005466031 nova_compute[235803]: 2025-10-02 12:24:32.173 2 DEBUG nova.objects.instance [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lazy-loading 'flavor' on Instance uuid 321c53a8-3488-43dc-b742-27102b6a5016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:24:32 np0005466031 nova_compute[235803]: 2025-10-02 12:24:32.205 2 DEBUG nova.virt.libvirt.guest [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:24:32 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:24:32 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-f9d9b67b-54c7-4f6f-b70f-7ca98b415001">
Oct  2 08:24:32 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:24:32 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:24:32 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:24:32 np0005466031 nova_compute[235803]:  </source>
Oct  2 08:24:32 np0005466031 nova_compute[235803]:  <auth username="openstack">
Oct  2 08:24:32 np0005466031 nova_compute[235803]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:24:32 np0005466031 nova_compute[235803]:  </auth>
Oct  2 08:24:32 np0005466031 nova_compute[235803]:  <target dev="sdc" bus="scsi"/>
Oct  2 08:24:32 np0005466031 nova_compute[235803]:  <serial>f9d9b67b-54c7-4f6f-b70f-7ca98b415001</serial>
Oct  2 08:24:32 np0005466031 nova_compute[235803]:  <address type="drive" controller="0" unit="2"/>
Oct  2 08:24:32 np0005466031 nova_compute[235803]: </disk>
Oct  2 08:24:32 np0005466031 nova_compute[235803]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:24:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:32.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:32 np0005466031 nova_compute[235803]: 2025-10-02 12:24:32.550 2 DEBUG nova.virt.libvirt.driver [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:24:32 np0005466031 nova_compute[235803]: 2025-10-02 12:24:32.550 2 DEBUG nova.virt.libvirt.driver [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:24:32 np0005466031 nova_compute[235803]: 2025-10-02 12:24:32.550 2 DEBUG nova.virt.libvirt.driver [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] No BDM found with device name sdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:24:32 np0005466031 nova_compute[235803]: 2025-10-02 12:24:32.551 2 DEBUG nova.virt.libvirt.driver [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] No VIF found with MAC fa:16:3e:bb:d6:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:24:32 np0005466031 nova_compute[235803]: 2025-10-02 12:24:32.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:33 np0005466031 nova_compute[235803]: 2025-10-02 12:24:33.073 2 DEBUG oslo_concurrency.lockutils [None req-2b32bc8b-3d7e-4082-92bb-24a01cf4390b 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:33 np0005466031 nova_compute[235803]: 2025-10-02 12:24:33.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:33.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:33 np0005466031 podman[259397]: 2025-10-02 12:24:33.649790696 +0000 UTC m=+0.075863236 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  2 08:24:33 np0005466031 podman[259398]: 2025-10-02 12:24:33.673191552 +0000 UTC m=+0.103605056 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:24:34 np0005466031 nova_compute[235803]: 2025-10-02 12:24:34.203 2 DEBUG oslo_concurrency.lockutils [None req-3b55227e-cc9b-4493-8fb5-a4045ee39c17 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Acquiring lock "321c53a8-3488-43dc-b742-27102b6a5016" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:34 np0005466031 nova_compute[235803]: 2025-10-02 12:24:34.204 2 DEBUG oslo_concurrency.lockutils [None req-3b55227e-cc9b-4493-8fb5-a4045ee39c17 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:34 np0005466031 nova_compute[235803]: 2025-10-02 12:24:34.254 2 INFO nova.compute.manager [None req-3b55227e-cc9b-4493-8fb5-a4045ee39c17 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Detaching volume f9d9b67b-54c7-4f6f-b70f-7ca98b415001#033[00m
Oct  2 08:24:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:34.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:34 np0005466031 nova_compute[235803]: 2025-10-02 12:24:34.359 2 INFO nova.virt.block_device [None req-3b55227e-cc9b-4493-8fb5-a4045ee39c17 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Attempting to driver detach volume f9d9b67b-54c7-4f6f-b70f-7ca98b415001 from mountpoint /dev/sdc#033[00m
Oct  2 08:24:34 np0005466031 nova_compute[235803]: 2025-10-02 12:24:34.371 2 DEBUG nova.virt.libvirt.driver [None req-3b55227e-cc9b-4493-8fb5-a4045ee39c17 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Attempting to detach device sdc from instance 321c53a8-3488-43dc-b742-27102b6a5016 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:24:34 np0005466031 nova_compute[235803]: 2025-10-02 12:24:34.372 2 DEBUG nova.virt.libvirt.guest [None req-3b55227e-cc9b-4493-8fb5-a4045ee39c17 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:24:34 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-f9d9b67b-54c7-4f6f-b70f-7ca98b415001">
Oct  2 08:24:34 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:  </source>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:  <target dev="sdc" bus="scsi"/>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:  <serial>f9d9b67b-54c7-4f6f-b70f-7ca98b415001</serial>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:  <address type="drive" controller="0" bus="0" target="0" unit="2"/>
Oct  2 08:24:34 np0005466031 nova_compute[235803]: </disk>
Oct  2 08:24:34 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:24:34 np0005466031 nova_compute[235803]: 2025-10-02 12:24:34.381 2 INFO nova.virt.libvirt.driver [None req-3b55227e-cc9b-4493-8fb5-a4045ee39c17 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Successfully detached device sdc from instance 321c53a8-3488-43dc-b742-27102b6a5016 from the persistent domain config.#033[00m
Oct  2 08:24:34 np0005466031 nova_compute[235803]: 2025-10-02 12:24:34.382 2 DEBUG nova.virt.libvirt.driver [None req-3b55227e-cc9b-4493-8fb5-a4045ee39c17 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] (1/8): Attempting to detach device sdc with device alias scsi0-0-0-2 from instance 321c53a8-3488-43dc-b742-27102b6a5016 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:24:34 np0005466031 nova_compute[235803]: 2025-10-02 12:24:34.382 2 DEBUG nova.virt.libvirt.guest [None req-3b55227e-cc9b-4493-8fb5-a4045ee39c17 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:24:34 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-f9d9b67b-54c7-4f6f-b70f-7ca98b415001">
Oct  2 08:24:34 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:  </source>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:  <target dev="sdc" bus="scsi"/>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:  <serial>f9d9b67b-54c7-4f6f-b70f-7ca98b415001</serial>
Oct  2 08:24:34 np0005466031 nova_compute[235803]:  <address type="drive" controller="0" bus="0" target="0" unit="2"/>
Oct  2 08:24:34 np0005466031 nova_compute[235803]: </disk>
Oct  2 08:24:34 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:24:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:34 np0005466031 nova_compute[235803]: 2025-10-02 12:24:34.721 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Received event <DeviceRemovedEvent: 1759407874.7205317, 321c53a8-3488-43dc-b742-27102b6a5016 => scsi0-0-0-2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:24:34 np0005466031 nova_compute[235803]: 2025-10-02 12:24:34.723 2 DEBUG nova.virt.libvirt.driver [None req-3b55227e-cc9b-4493-8fb5-a4045ee39c17 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Start waiting for the detach event from libvirt for device sdc with device alias scsi0-0-0-2 for instance 321c53a8-3488-43dc-b742-27102b6a5016 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:24:34 np0005466031 nova_compute[235803]: 2025-10-02 12:24:34.727 2 INFO nova.virt.libvirt.driver [None req-3b55227e-cc9b-4493-8fb5-a4045ee39c17 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Successfully detached device sdc from instance 321c53a8-3488-43dc-b742-27102b6a5016 from the live domain config.#033[00m
Oct  2 08:24:35 np0005466031 nova_compute[235803]: 2025-10-02 12:24:35.136 2 DEBUG nova.objects.instance [None req-3b55227e-cc9b-4493-8fb5-a4045ee39c17 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lazy-loading 'flavor' on Instance uuid 321c53a8-3488-43dc-b742-27102b6a5016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:24:35 np0005466031 nova_compute[235803]: 2025-10-02 12:24:35.359 2 DEBUG oslo_concurrency.lockutils [None req-3b55227e-cc9b-4493-8fb5-a4045ee39c17 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:35.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:35 np0005466031 nova_compute[235803]: 2025-10-02 12:24:35.888 2 DEBUG oslo_concurrency.lockutils [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Acquiring lock "321c53a8-3488-43dc-b742-27102b6a5016" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:35 np0005466031 nova_compute[235803]: 2025-10-02 12:24:35.889 2 DEBUG oslo_concurrency.lockutils [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:35 np0005466031 nova_compute[235803]: 2025-10-02 12:24:35.889 2 DEBUG oslo_concurrency.lockutils [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Acquiring lock "321c53a8-3488-43dc-b742-27102b6a5016-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:35 np0005466031 nova_compute[235803]: 2025-10-02 12:24:35.890 2 DEBUG oslo_concurrency.lockutils [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:35 np0005466031 nova_compute[235803]: 2025-10-02 12:24:35.890 2 DEBUG oslo_concurrency.lockutils [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:35 np0005466031 nova_compute[235803]: 2025-10-02 12:24:35.891 2 INFO nova.compute.manager [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Terminating instance#033[00m
Oct  2 08:24:35 np0005466031 nova_compute[235803]: 2025-10-02 12:24:35.892 2 DEBUG nova.compute.manager [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:24:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:24:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:36.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:24:36 np0005466031 kernel: tapf9361bba-d2 (unregistering): left promiscuous mode
Oct  2 08:24:36 np0005466031 NetworkManager[44907]: <info>  [1759407876.4463] device (tapf9361bba-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:24:36 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:36Z|00137|binding|INFO|Releasing lport f9361bba-d251-49a5-a08b-5068dc6cd434 from this chassis (sb_readonly=0)
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:36 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:36Z|00138|binding|INFO|Setting lport f9361bba-d251-49a5-a08b-5068dc6cd434 down in Southbound
Oct  2 08:24:36 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:36Z|00139|binding|INFO|Removing iface tapf9361bba-d2 ovn-installed in OVS
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.460 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:d6:ab 10.100.0.3'], port_security=['fa:16:3e:bb:d6:ab 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '321c53a8-3488-43dc-b742-27102b6a5016', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3734f4ab-0343-4d55-9d6e-98b7542ea7fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a6d727642dc44b3997a0b35c67e6ab1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd30173c-fa97-4944-b8ba-1ef49076d4b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bb86c28-493a-4005-969f-63952648d379, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=f9361bba-d251-49a5-a08b-5068dc6cd434) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.462 141898 INFO neutron.agent.ovn.metadata.agent [-] Port f9361bba-d251-49a5-a08b-5068dc6cd434 in datapath 3734f4ab-0343-4d55-9d6e-98b7542ea7fe unbound from our chassis#033[00m
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.464 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3734f4ab-0343-4d55-9d6e-98b7542ea7fe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.465 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e67b24e5-9ec6-433f-b782-d54314552a22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.466 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe namespace which is not needed anymore#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:36 np0005466031 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000034.scope: Deactivated successfully.
Oct  2 08:24:36 np0005466031 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000034.scope: Consumed 15.344s CPU time.
Oct  2 08:24:36 np0005466031 systemd-machined[192227]: Machine qemu-20-instance-00000034 terminated.
Oct  2 08:24:36 np0005466031 neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe[258175]: [NOTICE]   (258179) : haproxy version is 2.8.14-c23fe91
Oct  2 08:24:36 np0005466031 neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe[258175]: [NOTICE]   (258179) : path to executable is /usr/sbin/haproxy
Oct  2 08:24:36 np0005466031 neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe[258175]: [WARNING]  (258179) : Exiting Master process...
Oct  2 08:24:36 np0005466031 neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe[258175]: [ALERT]    (258179) : Current worker (258181) exited with code 143 (Terminated)
Oct  2 08:24:36 np0005466031 neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe[258175]: [WARNING]  (258179) : All workers exited. Exiting... (0)
Oct  2 08:24:36 np0005466031 systemd[1]: libpod-8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf.scope: Deactivated successfully.
Oct  2 08:24:36 np0005466031 podman[259470]: 2025-10-02 12:24:36.589014124 +0000 UTC m=+0.034282185 container died 8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:24:36 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf-userdata-shm.mount: Deactivated successfully.
Oct  2 08:24:36 np0005466031 systemd[1]: var-lib-containers-storage-overlay-9fe69430594643dfea5855cb0f1c5b91f8e712fad6983cb2de6f85e122a3d496-merged.mount: Deactivated successfully.
Oct  2 08:24:36 np0005466031 podman[259470]: 2025-10-02 12:24:36.626585472 +0000 UTC m=+0.071853503 container cleanup 8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:36 np0005466031 systemd[1]: libpod-conmon-8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf.scope: Deactivated successfully.
Oct  2 08:24:36 np0005466031 podman[259498]: 2025-10-02 12:24:36.682990505 +0000 UTC m=+0.034547713 container remove 8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.688 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e9f1af-06c5-42ed-9e81-22a47b40679e]: (4, ('Thu Oct  2 12:24:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe (8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf)\n8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf\nThu Oct  2 12:24:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe (8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf)\n8f2686bd7e3fa055e960aa82d89e622736d9d698c22b46eb6e426b4b111f3ebf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.689 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e6047a09-5f5e-4043-a139-82df0a732667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.690 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3734f4ab-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:36 np0005466031 kernel: tap3734f4ab-00: left promiscuous mode
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:36 np0005466031 NetworkManager[44907]: <info>  [1759407876.7126] manager: (tapf9361bba-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.713 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e2ca90-51d9-47a0-b4d5-7503aff5517b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.733 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9e37cfb2-322e-439d-b27d-aadd7744ddaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.735 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1d7223-ad24-47ba-9d1a-86e9c58ddf2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.734 2 INFO nova.virt.libvirt.driver [-] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Instance destroyed successfully.#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.735 2 DEBUG nova.objects.instance [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lazy-loading 'resources' on Instance uuid 321c53a8-3488-43dc-b742-27102b6a5016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.752 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d41f4bbc-89c5-479e-9fc5-76240faba2de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568504, 'reachable_time': 43106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259525, 'error': None, 'target': 'ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.755 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3734f4ab-0343-4d55-9d6e-98b7542ea7fe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:24:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:24:36.755 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[8c163890-c34e-428a-8c2f-05fd0916d767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:36 np0005466031 systemd[1]: run-netns-ovnmeta\x2d3734f4ab\x2d0343\x2d4d55\x2d9d6e\x2d98b7542ea7fe.mount: Deactivated successfully.
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.771 2 DEBUG nova.virt.libvirt.vif [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-180234842',display_name='tempest-AttachSCSIVolumeTestJSON-server-180234842',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-180234842',id=52,image_ref='e7094a19-0695-4486-b083-e54642bc0338',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL5zbbhSKkv62p8h7iZdLFx6xzoKfZY11s8xobB8hXq+UA3fzhlZ4TAOluaG66A68vlVgNmz+MoNXBrmvr8cMShcEeQPMmNADgGoZMKS/GZ2GEa6cnaImDPhpZ2YGfpwWQ==',key_name='tempest-keypair-290507723',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:23:51Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4a6d727642dc44b3997a0b35c67e6ab1',ramdisk_id='',reservation_id='r-w6806e6a',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e7094a19-0695-4486-b083-e54642bc0338',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_scsi_model='virtio-scsi',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachSCSIVolumeTestJSON-1491993290',owner_user_name='tempest-AttachSCSIVolumeTestJSON-1491993290-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:23:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5c6bdc1acafd4db2bcc3e0251393b901',uuid=321c53a8-3488-43dc-b742-27102b6a5016,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f9361bba-d251-49a5-a08b-5068dc6cd434", "address": "fa:16:3e:bb:d6:ab", "network": {"id": "3734f4ab-0343-4d55-9d6e-98b7542ea7fe", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-2108006177-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6d727642dc44b3997a0b35c67e6ab1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9361bba-d2", "ovs_interfaceid": "f9361bba-d251-49a5-a08b-5068dc6cd434", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.771 2 DEBUG nova.network.os_vif_util [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Converting VIF {"id": "f9361bba-d251-49a5-a08b-5068dc6cd434", "address": "fa:16:3e:bb:d6:ab", "network": {"id": "3734f4ab-0343-4d55-9d6e-98b7542ea7fe", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-2108006177-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6d727642dc44b3997a0b35c67e6ab1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9361bba-d2", "ovs_interfaceid": "f9361bba-d251-49a5-a08b-5068dc6cd434", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.772 2 DEBUG nova.network.os_vif_util [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:d6:ab,bridge_name='br-int',has_traffic_filtering=True,id=f9361bba-d251-49a5-a08b-5068dc6cd434,network=Network(3734f4ab-0343-4d55-9d6e-98b7542ea7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9361bba-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.772 2 DEBUG os_vif [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:d6:ab,bridge_name='br-int',has_traffic_filtering=True,id=f9361bba-d251-49a5-a08b-5068dc6cd434,network=Network(3734f4ab-0343-4d55-9d6e-98b7542ea7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9361bba-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.774 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf9361bba-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.778 2 INFO os_vif [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:d6:ab,bridge_name='br-int',has_traffic_filtering=True,id=f9361bba-d251-49a5-a08b-5068dc6cd434,network=Network(3734f4ab-0343-4d55-9d6e-98b7542ea7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9361bba-d2')#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.814 2 DEBUG nova.compute.manager [req-038dfd99-c704-4090-88a7-f7dacac0657f req-32c75147-cf9f-40db-8ef0-a783591c7093 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Received event network-vif-unplugged-f9361bba-d251-49a5-a08b-5068dc6cd434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.815 2 DEBUG oslo_concurrency.lockutils [req-038dfd99-c704-4090-88a7-f7dacac0657f req-32c75147-cf9f-40db-8ef0-a783591c7093 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "321c53a8-3488-43dc-b742-27102b6a5016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.815 2 DEBUG oslo_concurrency.lockutils [req-038dfd99-c704-4090-88a7-f7dacac0657f req-32c75147-cf9f-40db-8ef0-a783591c7093 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.815 2 DEBUG oslo_concurrency.lockutils [req-038dfd99-c704-4090-88a7-f7dacac0657f req-32c75147-cf9f-40db-8ef0-a783591c7093 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.816 2 DEBUG nova.compute.manager [req-038dfd99-c704-4090-88a7-f7dacac0657f req-32c75147-cf9f-40db-8ef0-a783591c7093 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] No waiting events found dispatching network-vif-unplugged-f9361bba-d251-49a5-a08b-5068dc6cd434 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:24:36 np0005466031 nova_compute[235803]: 2025-10-02 12:24:36.816 2 DEBUG nova.compute.manager [req-038dfd99-c704-4090-88a7-f7dacac0657f req-32c75147-cf9f-40db-8ef0-a783591c7093 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Received event network-vif-unplugged-f9361bba-d251-49a5-a08b-5068dc6cd434 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:24:37 np0005466031 nova_compute[235803]: 2025-10-02 12:24:37.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:37.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:38 np0005466031 nova_compute[235803]: 2025-10-02 12:24:38.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:38.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:24:38Z|00140|binding|INFO|Releasing lport 8d2c214b-08f8-42fc-8049-39454e430512 from this chassis (sb_readonly=0)
Oct  2 08:24:38 np0005466031 nova_compute[235803]: 2025-10-02 12:24:38.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:39 np0005466031 nova_compute[235803]: 2025-10-02 12:24:39.121 2 DEBUG nova.compute.manager [req-ac73b685-a198-49ed-b221-c489a1e5d0d4 req-a4138f71-9b9e-4f51-a007-298b6da7301b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Received event network-vif-plugged-f9361bba-d251-49a5-a08b-5068dc6cd434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:24:39 np0005466031 nova_compute[235803]: 2025-10-02 12:24:39.122 2 DEBUG oslo_concurrency.lockutils [req-ac73b685-a198-49ed-b221-c489a1e5d0d4 req-a4138f71-9b9e-4f51-a007-298b6da7301b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "321c53a8-3488-43dc-b742-27102b6a5016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:39 np0005466031 nova_compute[235803]: 2025-10-02 12:24:39.122 2 DEBUG oslo_concurrency.lockutils [req-ac73b685-a198-49ed-b221-c489a1e5d0d4 req-a4138f71-9b9e-4f51-a007-298b6da7301b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:39 np0005466031 nova_compute[235803]: 2025-10-02 12:24:39.122 2 DEBUG oslo_concurrency.lockutils [req-ac73b685-a198-49ed-b221-c489a1e5d0d4 req-a4138f71-9b9e-4f51-a007-298b6da7301b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:39 np0005466031 nova_compute[235803]: 2025-10-02 12:24:39.122 2 DEBUG nova.compute.manager [req-ac73b685-a198-49ed-b221-c489a1e5d0d4 req-a4138f71-9b9e-4f51-a007-298b6da7301b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] No waiting events found dispatching network-vif-plugged-f9361bba-d251-49a5-a08b-5068dc6cd434 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:24:39 np0005466031 nova_compute[235803]: 2025-10-02 12:24:39.123 2 WARNING nova.compute.manager [req-ac73b685-a198-49ed-b221-c489a1e5d0d4 req-a4138f71-9b9e-4f51-a007-298b6da7301b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Received unexpected event network-vif-plugged-f9361bba-d251-49a5-a08b-5068dc6cd434 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:24:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:39.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:24:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:40.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:24:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:41 np0005466031 nova_compute[235803]: 2025-10-02 12:24:41.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:41 np0005466031 nova_compute[235803]: 2025-10-02 12:24:41.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:24:41 np0005466031 nova_compute[235803]: 2025-10-02 12:24:41.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:24:41 np0005466031 nova_compute[235803]: 2025-10-02 12:24:41.671 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 08:24:41 np0005466031 nova_compute[235803]: 2025-10-02 12:24:41.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:41 np0005466031 nova_compute[235803]: 2025-10-02 12:24:41.827 2 INFO nova.virt.libvirt.driver [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Deleting instance files /var/lib/nova/instances/321c53a8-3488-43dc-b742-27102b6a5016_del#033[00m
Oct  2 08:24:41 np0005466031 nova_compute[235803]: 2025-10-02 12:24:41.828 2 INFO nova.virt.libvirt.driver [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Deletion of /var/lib/nova/instances/321c53a8-3488-43dc-b742-27102b6a5016_del complete#033[00m
Oct  2 08:24:41 np0005466031 nova_compute[235803]: 2025-10-02 12:24:41.931 2 INFO nova.compute.manager [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Took 6.04 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:24:41 np0005466031 nova_compute[235803]: 2025-10-02 12:24:41.933 2 DEBUG oslo.service.loopingcall [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:24:41 np0005466031 nova_compute[235803]: 2025-10-02 12:24:41.933 2 DEBUG nova.compute.manager [-] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:24:41 np0005466031 nova_compute[235803]: 2025-10-02 12:24:41.933 2 DEBUG nova.network.neutron [-] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:24:42 np0005466031 nova_compute[235803]: 2025-10-02 12:24:42.218 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:24:42 np0005466031 nova_compute[235803]: 2025-10-02 12:24:42.219 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:24:42 np0005466031 nova_compute[235803]: 2025-10-02 12:24:42.219 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:24:42 np0005466031 nova_compute[235803]: 2025-10-02 12:24:42.219 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b3d45275-f66f-4629-896b-8fe3fceb65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:24:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:24:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:42.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:24:42 np0005466031 podman[259551]: 2025-10-02 12:24:42.674751282 +0000 UTC m=+0.093129918 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:24:42 np0005466031 podman[259552]: 2025-10-02 12:24:42.69053557 +0000 UTC m=+0.106562519 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:24:43 np0005466031 nova_compute[235803]: 2025-10-02 12:24:43.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:43.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:24:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:44.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:24:44 np0005466031 nova_compute[235803]: 2025-10-02 12:24:44.680 2 DEBUG nova.network.neutron [-] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:24:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:44 np0005466031 nova_compute[235803]: 2025-10-02 12:24:44.741 2 DEBUG nova.compute.manager [req-56490c29-93a6-4e8e-a0cc-3ebf00978ec5 req-e5c29e94-a5d0-400a-810f-bc01fb61dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Received event network-vif-deleted-f9361bba-d251-49a5-a08b-5068dc6cd434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:24:44 np0005466031 nova_compute[235803]: 2025-10-02 12:24:44.742 2 INFO nova.compute.manager [req-56490c29-93a6-4e8e-a0cc-3ebf00978ec5 req-e5c29e94-a5d0-400a-810f-bc01fb61dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Neutron deleted interface f9361bba-d251-49a5-a08b-5068dc6cd434; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:24:44 np0005466031 nova_compute[235803]: 2025-10-02 12:24:44.742 2 DEBUG nova.network.neutron [req-56490c29-93a6-4e8e-a0cc-3ebf00978ec5 req-e5c29e94-a5d0-400a-810f-bc01fb61dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:24:44 np0005466031 nova_compute[235803]: 2025-10-02 12:24:44.828 2 INFO nova.compute.manager [-] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Took 2.89 seconds to deallocate network for instance.#033[00m
Oct  2 08:24:44 np0005466031 nova_compute[235803]: 2025-10-02 12:24:44.921 2 DEBUG nova.compute.manager [req-56490c29-93a6-4e8e-a0cc-3ebf00978ec5 req-e5c29e94-a5d0-400a-810f-bc01fb61dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Detach interface failed, port_id=f9361bba-d251-49a5-a08b-5068dc6cd434, reason: Instance 321c53a8-3488-43dc-b742-27102b6a5016 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.049 2 DEBUG oslo_concurrency.lockutils [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.050 2 DEBUG oslo_concurrency.lockutils [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.090 2 DEBUG nova.scheduler.client.report [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.119 2 DEBUG nova.scheduler.client.report [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.120 2 DEBUG nova.compute.provider_tree [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.132 2 DEBUG nova.scheduler.client.report [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.170 2 DEBUG nova.scheduler.client.report [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.289 2 DEBUG oslo_concurrency.processutils [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:45.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:24:45 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/318574402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.741 2 DEBUG oslo_concurrency.processutils [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.748 2 DEBUG nova.compute.provider_tree [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.809 2 DEBUG nova.scheduler.client.report [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.812 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Updating instance_info_cache with network_info: [{"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.840 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.840 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.842 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.843 2 DEBUG oslo_concurrency.lockutils [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.845 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.846 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.847 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.885 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.885 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.886 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.886 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.887 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:45 np0005466031 nova_compute[235803]: 2025-10-02 12:24:45.920 2 INFO nova.scheduler.client.report [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Deleted allocations for instance 321c53a8-3488-43dc-b742-27102b6a5016#033[00m
Oct  2 08:24:46 np0005466031 nova_compute[235803]: 2025-10-02 12:24:46.029 2 DEBUG oslo_concurrency.lockutils [None req-a5a4f107-9419-4511-bd9a-9b9400ae6475 5c6bdc1acafd4db2bcc3e0251393b901 4a6d727642dc44b3997a0b35c67e6ab1 - - default default] Lock "321c53a8-3488-43dc-b742-27102b6a5016" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:24:46 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/405708132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:24:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:46.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:46 np0005466031 nova_compute[235803]: 2025-10-02 12:24:46.334 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:46 np0005466031 nova_compute[235803]: 2025-10-02 12:24:46.424 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:24:46 np0005466031 nova_compute[235803]: 2025-10-02 12:24:46.424 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:24:46 np0005466031 nova_compute[235803]: 2025-10-02 12:24:46.594 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:24:46 np0005466031 nova_compute[235803]: 2025-10-02 12:24:46.595 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4482MB free_disk=20.92194366455078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:24:46 np0005466031 nova_compute[235803]: 2025-10-02 12:24:46.595 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:46 np0005466031 nova_compute[235803]: 2025-10-02 12:24:46.596 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:46 np0005466031 nova_compute[235803]: 2025-10-02 12:24:46.707 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance b3d45275-f66f-4629-896b-8fe3fceb65a3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:24:46 np0005466031 nova_compute[235803]: 2025-10-02 12:24:46.707 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:24:46 np0005466031 nova_compute[235803]: 2025-10-02 12:24:46.708 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:24:46 np0005466031 nova_compute[235803]: 2025-10-02 12:24:46.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:46 np0005466031 nova_compute[235803]: 2025-10-02 12:24:46.826 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:24:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4126952542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:24:47 np0005466031 nova_compute[235803]: 2025-10-02 12:24:47.238 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:47 np0005466031 nova_compute[235803]: 2025-10-02 12:24:47.245 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:24:47 np0005466031 nova_compute[235803]: 2025-10-02 12:24:47.264 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:24:47 np0005466031 nova_compute[235803]: 2025-10-02 12:24:47.304 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:24:47 np0005466031 nova_compute[235803]: 2025-10-02 12:24:47.305 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:47.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:48 np0005466031 nova_compute[235803]: 2025-10-02 12:24:48.094 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:48 np0005466031 nova_compute[235803]: 2025-10-02 12:24:48.095 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:24:48 np0005466031 nova_compute[235803]: 2025-10-02 12:24:48.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:48.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:48 np0005466031 nova_compute[235803]: 2025-10-02 12:24:48.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:49.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e212 e212: 3 total, 3 up, 3 in
Oct  2 08:24:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:50.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:51.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:51 np0005466031 nova_compute[235803]: 2025-10-02 12:24:51.733 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407876.7324965, 321c53a8-3488-43dc-b742-27102b6a5016 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:24:51 np0005466031 nova_compute[235803]: 2025-10-02 12:24:51.733 2 INFO nova.compute.manager [-] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:24:51 np0005466031 nova_compute[235803]: 2025-10-02 12:24:51.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:52 np0005466031 nova_compute[235803]: 2025-10-02 12:24:52.254 2 DEBUG nova.compute.manager [None req-aaacf48e-563a-4fb5-bc44-73219ce53225 - - - - - -] [instance: 321c53a8-3488-43dc-b742-27102b6a5016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:24:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:52.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:52 np0005466031 nova_compute[235803]: 2025-10-02 12:24:52.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:53 np0005466031 nova_compute[235803]: 2025-10-02 12:24:53.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:24:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:53.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:24:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:54.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:55.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:56.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:56 np0005466031 nova_compute[235803]: 2025-10-02 12:24:56.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:24:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:57.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:24:57 np0005466031 nova_compute[235803]: 2025-10-02 12:24:57.816 2 DEBUG nova.compute.manager [req-eebf91ef-1246-476c-9fc1-b11ec90114a2 req-39d8bbc7-62ac-4e73-80e0-f6cbb4dc8616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Received event network-changed-fedd61db-0139-4493-a34f-892b56e476fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:24:57 np0005466031 nova_compute[235803]: 2025-10-02 12:24:57.817 2 DEBUG nova.compute.manager [req-eebf91ef-1246-476c-9fc1-b11ec90114a2 req-39d8bbc7-62ac-4e73-80e0-f6cbb4dc8616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Refreshing instance network info cache due to event network-changed-fedd61db-0139-4493-a34f-892b56e476fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:24:57 np0005466031 nova_compute[235803]: 2025-10-02 12:24:57.817 2 DEBUG oslo_concurrency.lockutils [req-eebf91ef-1246-476c-9fc1-b11ec90114a2 req-39d8bbc7-62ac-4e73-80e0-f6cbb4dc8616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:24:57 np0005466031 nova_compute[235803]: 2025-10-02 12:24:57.818 2 DEBUG oslo_concurrency.lockutils [req-eebf91ef-1246-476c-9fc1-b11ec90114a2 req-39d8bbc7-62ac-4e73-80e0-f6cbb4dc8616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:24:57 np0005466031 nova_compute[235803]: 2025-10-02 12:24:57.818 2 DEBUG nova.network.neutron [req-eebf91ef-1246-476c-9fc1-b11ec90114a2 req-39d8bbc7-62ac-4e73-80e0-f6cbb4dc8616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Refreshing network info cache for port fedd61db-0139-4493-a34f-892b56e476fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:24:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 e213: 3 total, 3 up, 3 in
Oct  2 08:24:58 np0005466031 nova_compute[235803]: 2025-10-02 12:24:58.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:58.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:24:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:59.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:00.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:01 np0005466031 nova_compute[235803]: 2025-10-02 12:25:01.152 2 DEBUG nova.network.neutron [req-eebf91ef-1246-476c-9fc1-b11ec90114a2 req-39d8bbc7-62ac-4e73-80e0-f6cbb4dc8616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Updated VIF entry in instance network info cache for port fedd61db-0139-4493-a34f-892b56e476fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:25:01 np0005466031 nova_compute[235803]: 2025-10-02 12:25:01.152 2 DEBUG nova.network.neutron [req-eebf91ef-1246-476c-9fc1-b11ec90114a2 req-39d8bbc7-62ac-4e73-80e0-f6cbb4dc8616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Updating instance_info_cache with network_info: [{"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:25:01 np0005466031 nova_compute[235803]: 2025-10-02 12:25:01.245 2 DEBUG oslo_concurrency.lockutils [req-eebf91ef-1246-476c-9fc1-b11ec90114a2 req-39d8bbc7-62ac-4e73-80e0-f6cbb4dc8616 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:25:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:25:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:01.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:25:01 np0005466031 nova_compute[235803]: 2025-10-02 12:25:01.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:02.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:03 np0005466031 nova_compute[235803]: 2025-10-02 12:25:03.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:03.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:04.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:04 np0005466031 podman[259721]: 2025-10-02 12:25:04.653795368 +0000 UTC m=+0.088885967 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:25:04 np0005466031 podman[259722]: 2025-10-02 12:25:04.661776175 +0000 UTC m=+0.091544652 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:25:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:25:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3312100151' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:25:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:25:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3312100151' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:25:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:05.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:05 np0005466031 ovn_controller[132413]: 2025-10-02T12:25:05Z|00141|binding|INFO|Releasing lport 8d2c214b-08f8-42fc-8049-39454e430512 from this chassis (sb_readonly=0)
Oct  2 08:25:05 np0005466031 nova_compute[235803]: 2025-10-02 12:25:05.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:25:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:06.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:25:06 np0005466031 nova_compute[235803]: 2025-10-02 12:25:06.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:25:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:07.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:25:08 np0005466031 nova_compute[235803]: 2025-10-02 12:25:08.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:08.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:09.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:10.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:11.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:11 np0005466031 nova_compute[235803]: 2025-10-02 12:25:11.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:25:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:12.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:25:13 np0005466031 nova_compute[235803]: 2025-10-02 12:25:13.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:13.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:13 np0005466031 podman[259822]: 2025-10-02 12:25:13.645203809 +0000 UTC m=+0.066734647 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:25:13 np0005466031 podman[259823]: 2025-10-02 12:25:13.654341179 +0000 UTC m=+0.073051007 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible)
Oct  2 08:25:13 np0005466031 nova_compute[235803]: 2025-10-02 12:25:13.750 2 DEBUG oslo_concurrency.lockutils [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:13 np0005466031 nova_compute[235803]: 2025-10-02 12:25:13.751 2 DEBUG oslo_concurrency.lockutils [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:14 np0005466031 nova_compute[235803]: 2025-10-02 12:25:14.007 2 DEBUG nova.objects.instance [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'flavor' on Instance uuid b3d45275-f66f-4629-896b-8fe3fceb65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:14 np0005466031 nova_compute[235803]: 2025-10-02 12:25:14.124 2 DEBUG oslo_concurrency.lockutils [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:25:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:14.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:25:14 np0005466031 nova_compute[235803]: 2025-10-02 12:25:14.675 2 DEBUG oslo_concurrency.lockutils [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:14 np0005466031 nova_compute[235803]: 2025-10-02 12:25:14.675 2 DEBUG oslo_concurrency.lockutils [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:14 np0005466031 nova_compute[235803]: 2025-10-02 12:25:14.675 2 INFO nova.compute.manager [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Attaching volume 5957fb80-298a-4379-a0ba-fde86e2113d0 to /dev/vdb#033[00m
Oct  2 08:25:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:14 np0005466031 nova_compute[235803]: 2025-10-02 12:25:14.969 2 DEBUG os_brick.utils [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:25:14 np0005466031 nova_compute[235803]: 2025-10-02 12:25:14.970 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:14 np0005466031 nova_compute[235803]: 2025-10-02 12:25:14.983 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:14 np0005466031 nova_compute[235803]: 2025-10-02 12:25:14.983 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[67eb492d-8f4e-4e65-afc6-2dd24ac2c70d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:14 np0005466031 nova_compute[235803]: 2025-10-02 12:25:14.984 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:14 np0005466031 nova_compute[235803]: 2025-10-02 12:25:14.994 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:14 np0005466031 nova_compute[235803]: 2025-10-02 12:25:14.995 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[1adb9885-d78e-4d3d-bfdc-c60b8e59a436]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:14 np0005466031 nova_compute[235803]: 2025-10-02 12:25:14.996 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:15 np0005466031 nova_compute[235803]: 2025-10-02 12:25:15.010 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:15 np0005466031 nova_compute[235803]: 2025-10-02 12:25:15.010 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[85bfd0b6-4982-437c-9a72-3b5ff222bb2c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:15 np0005466031 nova_compute[235803]: 2025-10-02 12:25:15.012 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f936f5-72f6-4e0d-9ab0-11cfcac40980]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:15 np0005466031 nova_compute[235803]: 2025-10-02 12:25:15.013 2 DEBUG oslo_concurrency.processutils [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:15 np0005466031 nova_compute[235803]: 2025-10-02 12:25:15.041 2 DEBUG oslo_concurrency.processutils [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:15 np0005466031 nova_compute[235803]: 2025-10-02 12:25:15.045 2 DEBUG os_brick.initiator.connectors.lightos [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:25:15 np0005466031 nova_compute[235803]: 2025-10-02 12:25:15.045 2 DEBUG os_brick.initiator.connectors.lightos [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:25:15 np0005466031 nova_compute[235803]: 2025-10-02 12:25:15.046 2 DEBUG os_brick.initiator.connectors.lightos [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:25:15 np0005466031 nova_compute[235803]: 2025-10-02 12:25:15.046 2 DEBUG os_brick.utils [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:25:15 np0005466031 nova_compute[235803]: 2025-10-02 12:25:15.047 2 DEBUG nova.virt.block_device [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Updating existing volume attachment record: 0db9806a-63f9-40d6-af04-ea6e4c8861fc _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:25:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:15.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:16.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:16 np0005466031 nova_compute[235803]: 2025-10-02 12:25:16.606 2 DEBUG nova.objects.instance [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'flavor' on Instance uuid b3d45275-f66f-4629-896b-8fe3fceb65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:16 np0005466031 nova_compute[235803]: 2025-10-02 12:25:16.703 2 DEBUG nova.virt.libvirt.driver [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Attempting to attach volume 5957fb80-298a-4379-a0ba-fde86e2113d0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 08:25:16 np0005466031 nova_compute[235803]: 2025-10-02 12:25:16.706 2 DEBUG nova.virt.libvirt.guest [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:25:16 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:25:16 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-5957fb80-298a-4379-a0ba-fde86e2113d0">
Oct  2 08:25:16 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:25:16 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:25:16 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:25:16 np0005466031 nova_compute[235803]:  </source>
Oct  2 08:25:16 np0005466031 nova_compute[235803]:  <auth username="openstack">
Oct  2 08:25:16 np0005466031 nova_compute[235803]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:25:16 np0005466031 nova_compute[235803]:  </auth>
Oct  2 08:25:16 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:25:16 np0005466031 nova_compute[235803]:  <serial>5957fb80-298a-4379-a0ba-fde86e2113d0</serial>
Oct  2 08:25:16 np0005466031 nova_compute[235803]:  <shareable/>
Oct  2 08:25:16 np0005466031 nova_compute[235803]: </disk>
Oct  2 08:25:16 np0005466031 nova_compute[235803]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:25:16 np0005466031 nova_compute[235803]: 2025-10-02 12:25:16.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:17 np0005466031 nova_compute[235803]: 2025-10-02 12:25:17.015 2 DEBUG nova.virt.libvirt.driver [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:25:17 np0005466031 nova_compute[235803]: 2025-10-02 12:25:17.015 2 DEBUG nova.virt.libvirt.driver [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:25:17 np0005466031 nova_compute[235803]: 2025-10-02 12:25:17.016 2 DEBUG nova.virt.libvirt.driver [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:25:17 np0005466031 nova_compute[235803]: 2025-10-02 12:25:17.016 2 DEBUG nova.virt.libvirt.driver [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No VIF found with MAC fa:16:3e:ba:43:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:25:17 np0005466031 ovn_controller[132413]: 2025-10-02T12:25:17Z|00142|binding|INFO|Releasing lport 8d2c214b-08f8-42fc-8049-39454e430512 from this chassis (sb_readonly=0)
Oct  2 08:25:17 np0005466031 nova_compute[235803]: 2025-10-02 12:25:17.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:17.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:18 np0005466031 nova_compute[235803]: 2025-10-02 12:25:18.024 2 DEBUG oslo_concurrency.lockutils [None req-4388534b-4175-4001-86f6-7dd07e7059c3 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:18 np0005466031 nova_compute[235803]: 2025-10-02 12:25:18.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:18.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:19.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:19 np0005466031 nova_compute[235803]: 2025-10-02 12:25:19.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:20.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:21.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:21 np0005466031 nova_compute[235803]: 2025-10-02 12:25:21.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:22.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:25:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2486575946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:25:23 np0005466031 nova_compute[235803]: 2025-10-02 12:25:23.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:25:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:23.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:25:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:24.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:25:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:25.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:25:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:25.831 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:25.832 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:25.833 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:25:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:26.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:25:26 np0005466031 nova_compute[235803]: 2025-10-02 12:25:26.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:27.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:25:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:25:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:25:28 np0005466031 nova_compute[235803]: 2025-10-02 12:25:28.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:28.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:29.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:25:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:30.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:25:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:25:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:31.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:25:31 np0005466031 nova_compute[235803]: 2025-10-02 12:25:31.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:32.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:33 np0005466031 nova_compute[235803]: 2025-10-02 12:25:33.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:25:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:33.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:25:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:34.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:34 np0005466031 nova_compute[235803]: 2025-10-02 12:25:34.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:35.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:35 np0005466031 podman[260081]: 2025-10-02 12:25:35.630154033 +0000 UTC m=+0.054541561 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  2 08:25:35 np0005466031 podman[260082]: 2025-10-02 12:25:35.693279527 +0000 UTC m=+0.113299731 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  2 08:25:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:36.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:36 np0005466031 nova_compute[235803]: 2025-10-02 12:25:36.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:37.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:37 np0005466031 nova_compute[235803]: 2025-10-02 12:25:37.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:38 np0005466031 nova_compute[235803]: 2025-10-02 12:25:38.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:38.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:38 np0005466031 nova_compute[235803]: 2025-10-02 12:25:38.630 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:39.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:25:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:25:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:25:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:40.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:25:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:25:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:41.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:25:41 np0005466031 nova_compute[235803]: 2025-10-02 12:25:41.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:41 np0005466031 nova_compute[235803]: 2025-10-02 12:25:41.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:41 np0005466031 nova_compute[235803]: 2025-10-02 12:25:41.830 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:41 np0005466031 nova_compute[235803]: 2025-10-02 12:25:41.830 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:41 np0005466031 nova_compute[235803]: 2025-10-02 12:25:41.830 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:41 np0005466031 nova_compute[235803]: 2025-10-02 12:25:41.831 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:25:41 np0005466031 nova_compute[235803]: 2025-10-02 12:25:41.831 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:41 np0005466031 nova_compute[235803]: 2025-10-02 12:25:41.857 2 DEBUG oslo_concurrency.lockutils [None req-4e4f7675-005a-4c85-b8b0-7ccdfd61f512 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:41 np0005466031 nova_compute[235803]: 2025-10-02 12:25:41.858 2 DEBUG oslo_concurrency.lockutils [None req-4e4f7675-005a-4c85-b8b0-7ccdfd61f512 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:41 np0005466031 nova_compute[235803]: 2025-10-02 12:25:41.892 2 INFO nova.compute.manager [None req-4e4f7675-005a-4c85-b8b0-7ccdfd61f512 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Detaching volume 5957fb80-298a-4379-a0ba-fde86e2113d0#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.175 2 INFO nova.virt.block_device [None req-4e4f7675-005a-4c85-b8b0-7ccdfd61f512 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Attempting to driver detach volume 5957fb80-298a-4379-a0ba-fde86e2113d0 from mountpoint /dev/vdb#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.184 2 DEBUG nova.virt.libvirt.driver [None req-4e4f7675-005a-4c85-b8b0-7ccdfd61f512 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Attempting to detach device vdb from instance b3d45275-f66f-4629-896b-8fe3fceb65a3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.184 2 DEBUG nova.virt.libvirt.guest [None req-4e4f7675-005a-4c85-b8b0-7ccdfd61f512 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-5957fb80-298a-4379-a0ba-fde86e2113d0">
Oct  2 08:25:42 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  </source>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  <serial>5957fb80-298a-4379-a0ba-fde86e2113d0</serial>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  <shareable/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]: </disk>
Oct  2 08:25:42 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.192 2 INFO nova.virt.libvirt.driver [None req-4e4f7675-005a-4c85-b8b0-7ccdfd61f512 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Successfully detached device vdb from instance b3d45275-f66f-4629-896b-8fe3fceb65a3 from the persistent domain config.#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.193 2 DEBUG nova.virt.libvirt.driver [None req-4e4f7675-005a-4c85-b8b0-7ccdfd61f512 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b3d45275-f66f-4629-896b-8fe3fceb65a3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.193 2 DEBUG nova.virt.libvirt.guest [None req-4e4f7675-005a-4c85-b8b0-7ccdfd61f512 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-5957fb80-298a-4379-a0ba-fde86e2113d0">
Oct  2 08:25:42 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  </source>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  <serial>5957fb80-298a-4379-a0ba-fde86e2113d0</serial>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  <shareable/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:25:42 np0005466031 nova_compute[235803]: </disk>
Oct  2 08:25:42 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:25:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:25:42 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1383723782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.257 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.305 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Received event <DeviceRemovedEvent: 1759407942.3045862, b3d45275-f66f-4629-896b-8fe3fceb65a3 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.307 2 DEBUG nova.virt.libvirt.driver [None req-4e4f7675-005a-4c85-b8b0-7ccdfd61f512 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b3d45275-f66f-4629-896b-8fe3fceb65a3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.313 2 INFO nova.virt.libvirt.driver [None req-4e4f7675-005a-4c85-b8b0-7ccdfd61f512 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Successfully detached device vdb from instance b3d45275-f66f-4629-896b-8fe3fceb65a3 from the live domain config.#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.345 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.345 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:42.400 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:25:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:42.402 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:25:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:42.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.532 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.533 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4459MB free_disk=20.89710235595703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.533 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.533 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.867 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance b3d45275-f66f-4629-896b-8fe3fceb65a3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.868 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:25:42 np0005466031 nova_compute[235803]: 2025-10-02 12:25:42.868 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:25:43 np0005466031 nova_compute[235803]: 2025-10-02 12:25:43.039 2 DEBUG nova.objects.instance [None req-4e4f7675-005a-4c85-b8b0-7ccdfd61f512 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'flavor' on Instance uuid b3d45275-f66f-4629-896b-8fe3fceb65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:43 np0005466031 nova_compute[235803]: 2025-10-02 12:25:43.068 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:43 np0005466031 nova_compute[235803]: 2025-10-02 12:25:43.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:25:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1633199061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:25:43 np0005466031 nova_compute[235803]: 2025-10-02 12:25:43.493 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:43 np0005466031 nova_compute[235803]: 2025-10-02 12:25:43.499 2 DEBUG oslo_concurrency.lockutils [None req-4e4f7675-005a-4c85-b8b0-7ccdfd61f512 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:43 np0005466031 nova_compute[235803]: 2025-10-02 12:25:43.503 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:25:43 np0005466031 nova_compute[235803]: 2025-10-02 12:25:43.578 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:25:43 np0005466031 nova_compute[235803]: 2025-10-02 12:25:43.579 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:25:43 np0005466031 nova_compute[235803]: 2025-10-02 12:25:43.579 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:43 np0005466031 nova_compute[235803]: 2025-10-02 12:25:43.580 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:43.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:44.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:44 np0005466031 podman[260227]: 2025-10-02 12:25:44.613490154 +0000 UTC m=+0.047637005 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:25:44 np0005466031 podman[260228]: 2025-10-02 12:25:44.636434516 +0000 UTC m=+0.062085576 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:25:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:44 np0005466031 nova_compute[235803]: 2025-10-02 12:25:44.918 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:44 np0005466031 nova_compute[235803]: 2025-10-02 12:25:44.918 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:25:44 np0005466031 nova_compute[235803]: 2025-10-02 12:25:44.918 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:25:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:45.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:46.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:46 np0005466031 nova_compute[235803]: 2025-10-02 12:25:46.486 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:25:46 np0005466031 nova_compute[235803]: 2025-10-02 12:25:46.486 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:25:46 np0005466031 nova_compute[235803]: 2025-10-02 12:25:46.486 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:25:46 np0005466031 nova_compute[235803]: 2025-10-02 12:25:46.486 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b3d45275-f66f-4629-896b-8fe3fceb65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:46 np0005466031 nova_compute[235803]: 2025-10-02 12:25:46.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:46 np0005466031 nova_compute[235803]: 2025-10-02 12:25:46.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:47.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:48 np0005466031 nova_compute[235803]: 2025-10-02 12:25:48.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:48.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:49.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.164 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Updating instance_info_cache with network_info: [{"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.193 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-b3d45275-f66f-4629-896b-8fe3fceb65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.194 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.194 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.194 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.195 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.195 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.195 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.195 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.195 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.196 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.221 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.230 2 DEBUG oslo_concurrency.lockutils [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.230 2 DEBUG oslo_concurrency.lockutils [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.231 2 DEBUG oslo_concurrency.lockutils [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.231 2 DEBUG oslo_concurrency.lockutils [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.231 2 DEBUG oslo_concurrency.lockutils [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.233 2 INFO nova.compute.manager [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Terminating instance#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.233 2 DEBUG nova.compute.manager [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:25:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:50.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:50 np0005466031 kernel: tapfedd61db-01 (unregistering): left promiscuous mode
Oct  2 08:25:50 np0005466031 NetworkManager[44907]: <info>  [1759407950.4751] device (tapfedd61db-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:25:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:25:50Z|00143|binding|INFO|Releasing lport fedd61db-0139-4493-a34f-892b56e476fe from this chassis (sb_readonly=0)
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:25:50Z|00144|binding|INFO|Setting lport fedd61db-0139-4493-a34f-892b56e476fe down in Southbound
Oct  2 08:25:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:25:50Z|00145|binding|INFO|Removing iface tapfedd61db-01 ovn-installed in OVS
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.494 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:43:18 10.100.0.13'], port_security=['fa:16:3e:ba:43:18 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b3d45275-f66f-4629-896b-8fe3fceb65a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34ecce08-278a-4a16-9f99-cfef8148769d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '12dfeaa31a6e4a2481a5332ce3094262', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7b0b284f-6afe-4611-b8db-1ab4d5466651', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7fc2557b-b462-4493-9e4f-7b4266aaba5c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=fedd61db-0139-4493-a34f-892b56e476fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.495 141898 INFO neutron.agent.ovn.metadata.agent [-] Port fedd61db-0139-4493-a34f-892b56e476fe in datapath 34ecce08-278a-4a16-9f99-cfef8148769d unbound from our chassis#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.496 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34ecce08-278a-4a16-9f99-cfef8148769d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.497 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9acea022-a678-4fc7-8f14-2215a369735c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.498 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d namespace which is not needed anymore#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:50 np0005466031 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000035.scope: Deactivated successfully.
Oct  2 08:25:50 np0005466031 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000035.scope: Consumed 17.914s CPU time.
Oct  2 08:25:50 np0005466031 systemd-machined[192227]: Machine qemu-21-instance-00000035 terminated.
Oct  2 08:25:50 np0005466031 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[258623]: [NOTICE]   (258646) : haproxy version is 2.8.14-c23fe91
Oct  2 08:25:50 np0005466031 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[258623]: [NOTICE]   (258646) : path to executable is /usr/sbin/haproxy
Oct  2 08:25:50 np0005466031 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[258623]: [WARNING]  (258646) : Exiting Master process...
Oct  2 08:25:50 np0005466031 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[258623]: [WARNING]  (258646) : Exiting Master process...
Oct  2 08:25:50 np0005466031 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[258623]: [ALERT]    (258646) : Current worker (258651) exited with code 143 (Terminated)
Oct  2 08:25:50 np0005466031 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[258623]: [WARNING]  (258646) : All workers exited. Exiting... (0)
Oct  2 08:25:50 np0005466031 systemd[1]: libpod-c334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7.scope: Deactivated successfully.
Oct  2 08:25:50 np0005466031 podman[260292]: 2025-10-02 12:25:50.643002264 +0000 UTC m=+0.049677472 container died c334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:25:50 np0005466031 systemd[1]: var-lib-containers-storage-overlay-b263adbf7a20944adb161b53a17e6442608dc0f08415099427bfc5f186c9147a-merged.mount: Deactivated successfully.
Oct  2 08:25:50 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7-userdata-shm.mount: Deactivated successfully.
Oct  2 08:25:50 np0005466031 podman[260292]: 2025-10-02 12:25:50.680307065 +0000 UTC m=+0.086982233 container cleanup c334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.683 2 INFO nova.virt.libvirt.driver [-] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Instance destroyed successfully.#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.683 2 DEBUG nova.objects.instance [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'resources' on Instance uuid b3d45275-f66f-4629-896b-8fe3fceb65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:50 np0005466031 systemd[1]: libpod-conmon-c334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7.scope: Deactivated successfully.
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.731 2 DEBUG nova.virt.libvirt.vif [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1553191561',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1553191561',id=53,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEu8Mg1mF4AKAnETiPc/1EN0o0yW0N8RDXfeYfe2EYf2XEQAi1u5vSoxbgTCJBIGOu3aJCScfGKzHsqNwZ9VVPhf8HNvzQILfXuoUBQZfVHYTHLisifzGPoXHjQ6TltjQ==',key_name='tempest-keypair-2123530339',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:23:59Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='12dfeaa31a6e4a2481a5332ce3094262',ramdisk_id='',reservation_id='r-eqvwjr3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:23:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51b45ef40bdc499a8409fd2bf3e6a339',uuid=b3d45275-f66f-4629-896b-8fe3fceb65a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.732 2 DEBUG nova.network.os_vif_util [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converting VIF {"id": "fedd61db-0139-4493-a34f-892b56e476fe", "address": "fa:16:3e:ba:43:18", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfedd61db-01", "ovs_interfaceid": "fedd61db-0139-4493-a34f-892b56e476fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.732 2 DEBUG nova.network.os_vif_util [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:43:18,bridge_name='br-int',has_traffic_filtering=True,id=fedd61db-0139-4493-a34f-892b56e476fe,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfedd61db-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.733 2 DEBUG os_vif [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:43:18,bridge_name='br-int',has_traffic_filtering=True,id=fedd61db-0139-4493-a34f-892b56e476fe,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfedd61db-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfedd61db-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:25:50 np0005466031 podman[260334]: 2025-10-02 12:25:50.740208547 +0000 UTC m=+0.038084233 container remove c334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.740 2 INFO os_vif [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:43:18,bridge_name='br-int',has_traffic_filtering=True,id=fedd61db-0139-4493-a34f-892b56e476fe,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfedd61db-01')#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.747 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[957d33bd-87b1-4351-9bd1-aa34cb355bd9]: (4, ('Thu Oct  2 12:25:50 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d (c334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7)\nc334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7\nThu Oct  2 12:25:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d (c334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7)\nc334b1929b4f5b62f2ba3dbba5a50da9d1d194b70b4d97d01a06bd4d710708b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.749 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c938f711-bef4-4f14-bba4-70929f5c64fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.750 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34ecce08-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:50 np0005466031 kernel: tap34ecce08-20: left promiscuous mode
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:50 np0005466031 nova_compute[235803]: 2025-10-02 12:25:50.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.768 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c19a26b4-1635-446a-b9f6-6e35a8b0580d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.793 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[65c456f5-0dd6-46ce-8eb0-764df17e41af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.795 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed38640-1e84-41f9-b355-f399a2093860]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.808 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4523323d-9969-4aa8-be53-c858b40ba75c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569265, 'reachable_time': 35526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260367, 'error': None, 'target': 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.810 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:25:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:50.810 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[39094df9-7a64-41a9-9995-e0e0e25e06b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:50 np0005466031 systemd[1]: run-netns-ovnmeta\x2d34ecce08\x2d278a\x2d4a16\x2d9f99\x2dcfef8148769d.mount: Deactivated successfully.
Oct  2 08:25:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:25:51.404 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:51.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:51 np0005466031 nova_compute[235803]: 2025-10-02 12:25:51.670 2 DEBUG nova.compute.manager [req-df519628-ecd0-4800-91e4-7fb0b0114998 req-7c384871-5b3d-4adf-bfd4-7f568dc2e10c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Received event network-vif-unplugged-fedd61db-0139-4493-a34f-892b56e476fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:51 np0005466031 nova_compute[235803]: 2025-10-02 12:25:51.671 2 DEBUG oslo_concurrency.lockutils [req-df519628-ecd0-4800-91e4-7fb0b0114998 req-7c384871-5b3d-4adf-bfd4-7f568dc2e10c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:51 np0005466031 nova_compute[235803]: 2025-10-02 12:25:51.671 2 DEBUG oslo_concurrency.lockutils [req-df519628-ecd0-4800-91e4-7fb0b0114998 req-7c384871-5b3d-4adf-bfd4-7f568dc2e10c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:51 np0005466031 nova_compute[235803]: 2025-10-02 12:25:51.671 2 DEBUG oslo_concurrency.lockutils [req-df519628-ecd0-4800-91e4-7fb0b0114998 req-7c384871-5b3d-4adf-bfd4-7f568dc2e10c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:51 np0005466031 nova_compute[235803]: 2025-10-02 12:25:51.671 2 DEBUG nova.compute.manager [req-df519628-ecd0-4800-91e4-7fb0b0114998 req-7c384871-5b3d-4adf-bfd4-7f568dc2e10c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] No waiting events found dispatching network-vif-unplugged-fedd61db-0139-4493-a34f-892b56e476fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:25:51 np0005466031 nova_compute[235803]: 2025-10-02 12:25:51.671 2 DEBUG nova.compute.manager [req-df519628-ecd0-4800-91e4-7fb0b0114998 req-7c384871-5b3d-4adf-bfd4-7f568dc2e10c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Received event network-vif-unplugged-fedd61db-0139-4493-a34f-892b56e476fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:25:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:52.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:52 np0005466031 nova_compute[235803]: 2025-10-02 12:25:52.934 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:53 np0005466031 nova_compute[235803]: 2025-10-02 12:25:53.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:53.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:53 np0005466031 nova_compute[235803]: 2025-10-02 12:25:53.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:53 np0005466031 nova_compute[235803]: 2025-10-02 12:25:53.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:25:54 np0005466031 nova_compute[235803]: 2025-10-02 12:25:54.232 2 DEBUG nova.compute.manager [req-bf7dcf87-b78d-4ff4-b61d-f14782fa3579 req-a1b2bacc-0c51-47a0-a292-597a0c545c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Received event network-vif-plugged-fedd61db-0139-4493-a34f-892b56e476fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:54 np0005466031 nova_compute[235803]: 2025-10-02 12:25:54.232 2 DEBUG oslo_concurrency.lockutils [req-bf7dcf87-b78d-4ff4-b61d-f14782fa3579 req-a1b2bacc-0c51-47a0-a292-597a0c545c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:54 np0005466031 nova_compute[235803]: 2025-10-02 12:25:54.232 2 DEBUG oslo_concurrency.lockutils [req-bf7dcf87-b78d-4ff4-b61d-f14782fa3579 req-a1b2bacc-0c51-47a0-a292-597a0c545c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:54 np0005466031 nova_compute[235803]: 2025-10-02 12:25:54.233 2 DEBUG oslo_concurrency.lockutils [req-bf7dcf87-b78d-4ff4-b61d-f14782fa3579 req-a1b2bacc-0c51-47a0-a292-597a0c545c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:54 np0005466031 nova_compute[235803]: 2025-10-02 12:25:54.233 2 DEBUG nova.compute.manager [req-bf7dcf87-b78d-4ff4-b61d-f14782fa3579 req-a1b2bacc-0c51-47a0-a292-597a0c545c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] No waiting events found dispatching network-vif-plugged-fedd61db-0139-4493-a34f-892b56e476fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:25:54 np0005466031 nova_compute[235803]: 2025-10-02 12:25:54.233 2 WARNING nova.compute.manager [req-bf7dcf87-b78d-4ff4-b61d-f14782fa3579 req-a1b2bacc-0c51-47a0-a292-597a0c545c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Received unexpected event network-vif-plugged-fedd61db-0139-4493-a34f-892b56e476fe for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:25:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:54.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:55.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:55 np0005466031 nova_compute[235803]: 2025-10-02 12:25:55.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:56.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:57 np0005466031 nova_compute[235803]: 2025-10-02 12:25:57.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:57.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:58 np0005466031 nova_compute[235803]: 2025-10-02 12:25:58.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:25:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:58.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:25:59 np0005466031 nova_compute[235803]: 2025-10-02 12:25:59.239 2 INFO nova.virt.libvirt.driver [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Deleting instance files /var/lib/nova/instances/b3d45275-f66f-4629-896b-8fe3fceb65a3_del#033[00m
Oct  2 08:25:59 np0005466031 nova_compute[235803]: 2025-10-02 12:25:59.241 2 INFO nova.virt.libvirt.driver [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Deletion of /var/lib/nova/instances/b3d45275-f66f-4629-896b-8fe3fceb65a3_del complete#033[00m
Oct  2 08:25:59 np0005466031 nova_compute[235803]: 2025-10-02 12:25:59.320 2 INFO nova.compute.manager [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Took 9.09 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:25:59 np0005466031 nova_compute[235803]: 2025-10-02 12:25:59.320 2 DEBUG oslo.service.loopingcall [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:25:59 np0005466031 nova_compute[235803]: 2025-10-02 12:25:59.321 2 DEBUG nova.compute.manager [-] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:25:59 np0005466031 nova_compute[235803]: 2025-10-02 12:25:59.322 2 DEBUG nova.network.neutron [-] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:25:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:25:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:25:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:59.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:25:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:00 np0005466031 nova_compute[235803]: 2025-10-02 12:26:00.304 2 DEBUG nova.network.neutron [-] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:26:00 np0005466031 nova_compute[235803]: 2025-10-02 12:26:00.331 2 INFO nova.compute.manager [-] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Took 1.01 seconds to deallocate network for instance.#033[00m
Oct  2 08:26:00 np0005466031 nova_compute[235803]: 2025-10-02 12:26:00.403 2 DEBUG oslo_concurrency.lockutils [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:00 np0005466031 nova_compute[235803]: 2025-10-02 12:26:00.403 2 DEBUG oslo_concurrency.lockutils [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:00.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:00 np0005466031 nova_compute[235803]: 2025-10-02 12:26:00.501 2 DEBUG oslo_concurrency.processutils [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:00 np0005466031 nova_compute[235803]: 2025-10-02 12:26:00.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:26:00 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1061664957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:26:00 np0005466031 nova_compute[235803]: 2025-10-02 12:26:00.953 2 DEBUG oslo_concurrency.processutils [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:00 np0005466031 nova_compute[235803]: 2025-10-02 12:26:00.961 2 DEBUG nova.compute.provider_tree [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:26:00 np0005466031 nova_compute[235803]: 2025-10-02 12:26:00.998 2 DEBUG nova.scheduler.client.report [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:26:01 np0005466031 nova_compute[235803]: 2025-10-02 12:26:01.033 2 DEBUG oslo_concurrency.lockutils [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:01 np0005466031 nova_compute[235803]: 2025-10-02 12:26:01.060 2 INFO nova.scheduler.client.report [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Deleted allocations for instance b3d45275-f66f-4629-896b-8fe3fceb65a3#033[00m
Oct  2 08:26:01 np0005466031 nova_compute[235803]: 2025-10-02 12:26:01.147 2 DEBUG oslo_concurrency.lockutils [None req-bc24b2da-0ff5-4fb3-b5a5-769906d0327e 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "b3d45275-f66f-4629-896b-8fe3fceb65a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:01.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:01 np0005466031 nova_compute[235803]: 2025-10-02 12:26:01.657 2 DEBUG nova.compute.manager [req-f816cb10-a0e4-48fe-967e-1d1c90970cf6 req-43197d25-a859-41d8-b3b6-3240f0e7acd6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Received event network-vif-deleted-fedd61db-0139-4493-a34f-892b56e476fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:02.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:03 np0005466031 nova_compute[235803]: 2025-10-02 12:26:03.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:03.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:26:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:04.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:26:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:26:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/425071959' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:26:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:26:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/425071959' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:26:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:05.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:05 np0005466031 nova_compute[235803]: 2025-10-02 12:26:05.664 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:05 np0005466031 nova_compute[235803]: 2025-10-02 12:26:05.664 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:05 np0005466031 nova_compute[235803]: 2025-10-02 12:26:05.681 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407950.679847, b3d45275-f66f-4629-896b-8fe3fceb65a3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:26:05 np0005466031 nova_compute[235803]: 2025-10-02 12:26:05.681 2 INFO nova.compute.manager [-] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:26:05 np0005466031 nova_compute[235803]: 2025-10-02 12:26:05.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:05 np0005466031 nova_compute[235803]: 2025-10-02 12:26:05.763 2 DEBUG nova.compute.manager [None req-dab18ba4-4461-4a94-9cd1-bd7a6b0fe08a - - - - - -] [instance: b3d45275-f66f-4629-896b-8fe3fceb65a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:26:05 np0005466031 nova_compute[235803]: 2025-10-02 12:26:05.773 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:26:05 np0005466031 nova_compute[235803]: 2025-10-02 12:26:05.939 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:05 np0005466031 nova_compute[235803]: 2025-10-02 12:26:05.939 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:05 np0005466031 nova_compute[235803]: 2025-10-02 12:26:05.946 2 DEBUG nova.virt.hardware [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:26:05 np0005466031 nova_compute[235803]: 2025-10-02 12:26:05.946 2 INFO nova.compute.claims [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:26:06 np0005466031 nova_compute[235803]: 2025-10-02 12:26:06.137 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:06.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:26:06 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2398778346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:26:06 np0005466031 nova_compute[235803]: 2025-10-02 12:26:06.575 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:06 np0005466031 nova_compute[235803]: 2025-10-02 12:26:06.580 2 DEBUG nova.compute.provider_tree [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:26:06 np0005466031 podman[260470]: 2025-10-02 12:26:06.622387861 +0000 UTC m=+0.053196532 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:26:06 np0005466031 nova_compute[235803]: 2025-10-02 12:26:06.643 2 DEBUG nova.scheduler.client.report [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:26:06 np0005466031 podman[260472]: 2025-10-02 12:26:06.664515929 +0000 UTC m=+0.095284629 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 08:26:06 np0005466031 nova_compute[235803]: 2025-10-02 12:26:06.817 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:06 np0005466031 nova_compute[235803]: 2025-10-02 12:26:06.817 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:26:06 np0005466031 nova_compute[235803]: 2025-10-02 12:26:06.967 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:26:06 np0005466031 nova_compute[235803]: 2025-10-02 12:26:06.967 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:26:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:26:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3384030533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:26:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:26:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3384030533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:26:07 np0005466031 nova_compute[235803]: 2025-10-02 12:26:07.060 2 INFO nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:26:07 np0005466031 nova_compute[235803]: 2025-10-02 12:26:07.268 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:26:07 np0005466031 nova_compute[235803]: 2025-10-02 12:26:07.611 2 INFO nova.virt.block_device [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Booting with volume d91ade79-bea0-4f14-93f8-80a864c83dfa at /dev/vda#033[00m
Oct  2 08:26:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:07.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.017 2 DEBUG os_brick.utils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.018 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.029 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.030 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[860e6bf1-093a-4e1e-848b-f8c56bd75c05]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.031 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.038 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.038 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[a427c385-1c03-4d69-b6f1-c37585265d0c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.040 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.048 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.049 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[3d026250-6b83-43a9-bdea-551f15a15d00]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.051 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[1dbe4f89-e81c-4d05-8b54-80a41654517a]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.052 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.078 2 DEBUG nova.policy [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '94e0e2f26a1648368032ab7e6732655c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d96bae071ef4595bd93c956dd20796c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.081 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.083 2 DEBUG os_brick.initiator.connectors.lightos [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.083 2 DEBUG os_brick.initiator.connectors.lightos [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.083 2 DEBUG os_brick.initiator.connectors.lightos [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.084 2 DEBUG os_brick.utils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.084 2 DEBUG nova.virt.block_device [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating existing volume attachment record: d13ebf65-b423-4dcf-8bca-485d4eec8ed2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:26:08 np0005466031 nova_compute[235803]: 2025-10-02 12:26:08.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:08.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:26:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6849 writes, 35K keys, 6849 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 6849 writes, 6849 syncs, 1.00 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1674 writes, 8336 keys, 1674 commit groups, 1.0 writes per commit group, ingest: 16.84 MB, 0.03 MB/s#012Interval WAL: 1674 writes, 1674 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    121.1      0.34              0.11        18    0.019       0      0       0.0       0.0#012  L6      1/0    9.99 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.6    191.7    158.5      0.94              0.45        17    0.056     86K   9961       0.0       0.0#012 Sum      1/0    9.99 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.6    140.8    148.5      1.29              0.56        35    0.037     86K   9961       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9    156.7    161.0      0.32              0.14         8    0.040     24K   3115       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    191.7    158.5      0.94              0.45        17    0.056     86K   9961       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    122.1      0.34              0.11        17    0.020       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.040, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.19 GB write, 0.08 MB/s write, 0.18 GB read, 0.08 MB/s read, 1.3 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5570d4fad1f0#2 capacity: 304.00 MB usage: 19.80 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000151 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1155,19.12 MB,6.28963%) FilterBlock(35,248.05 KB,0.079682%) IndexBlock(35,451.27 KB,0.144964%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 08:26:09 np0005466031 nova_compute[235803]: 2025-10-02 12:26:09.536 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Successfully created port: c31a45fc-37b9-4809-89b1-839d4e85765d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:26:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:09.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:09 np0005466031 nova_compute[235803]: 2025-10-02 12:26:09.984 2 INFO nova.virt.block_device [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Booting with volume d87ab1d9-322f-4ca3-8b9b-14a670a2e320 at /dev/vdb#033[00m
Oct  2 08:26:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:26:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2035805136' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:26:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:26:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2035805136' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.373 2 DEBUG os_brick.utils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.374 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.385 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.386 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[df0cd3ff-aa11-4eaa-aaef-51aee55bc402]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.387 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.395 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.395 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[c49ddbd4-0565-4a80-b0c6-412ea72b5020]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.396 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.407 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.408 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[f16b0e49-9392-4a5d-8070-812476bf93fb]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.409 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[fe698ec5-7cca-4668-ba4f-c5a5c72f3186]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.409 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:26:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:10.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.452 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.456 2 DEBUG os_brick.initiator.connectors.lightos [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.457 2 DEBUG os_brick.initiator.connectors.lightos [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.458 2 DEBUG os_brick.initiator.connectors.lightos [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.458 2 DEBUG os_brick.utils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] <== get_connector_properties: return (84ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.459 2 DEBUG nova.virt.block_device [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating existing volume attachment record: 723939dd-8ab6-441b-821d-ba26b6f0a41c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:26:10 np0005466031 nova_compute[235803]: 2025-10-02 12:26:10.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:11 np0005466031 nova_compute[235803]: 2025-10-02 12:26:11.060 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Successfully created port: 006d1393-a12a-44ea-9d1c-ba017fde9058 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:26:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:11.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.023 2 INFO nova.virt.block_device [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Booting with volume 3cae6300-aa08-440b-8899-aaa48fab86bd at /dev/vdc#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.254 2 DEBUG os_brick.utils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.255 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.273 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.274 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[33d1b6db-6f43-4f10-b97e-c9fe63766da6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.275 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.289 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.289 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[9c435058-f496-44a1-9949-0ab996d847f3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.291 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.302 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.302 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[1f04e5c4-a31c-476b-ad97-cfcbced166cc]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.303 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[39f3ae88-da32-4731-b3db-4c635756a600]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.304 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.345 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] CMD "nvme version" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.348 2 DEBUG os_brick.initiator.connectors.lightos [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.348 2 DEBUG os_brick.initiator.connectors.lightos [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.348 2 DEBUG os_brick.initiator.connectors.lightos [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.349 2 DEBUG os_brick.utils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] <== get_connector_properties: return (93ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.349 2 DEBUG nova.virt.block_device [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating existing volume attachment record: 5bc54f85-aaa5-4214-877f-7aa78ebc1f3e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:26:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:12.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:12 np0005466031 nova_compute[235803]: 2025-10-02 12:26:12.546 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Successfully created port: 22f1362c-d698-4f08-b8a3-4a4f609ef2b5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:26:13 np0005466031 nova_compute[235803]: 2025-10-02 12:26:13.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:26:13 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/695025001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:26:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:13.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:13 np0005466031 nova_compute[235803]: 2025-10-02 12:26:13.730 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:26:13 np0005466031 nova_compute[235803]: 2025-10-02 12:26:13.732 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:26:13 np0005466031 nova_compute[235803]: 2025-10-02 12:26:13.732 2 INFO nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Creating image(s)#033[00m
Oct  2 08:26:13 np0005466031 nova_compute[235803]: 2025-10-02 12:26:13.732 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:26:13 np0005466031 nova_compute[235803]: 2025-10-02 12:26:13.733 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Ensure instance console log exists: /var/lib/nova/instances/776370c1-1213-4676-b85e-ce1c0491afc6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:26:13 np0005466031 nova_compute[235803]: 2025-10-02 12:26:13.733 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:13 np0005466031 nova_compute[235803]: 2025-10-02 12:26:13.734 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:13 np0005466031 nova_compute[235803]: 2025-10-02 12:26:13.734 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:14 np0005466031 nova_compute[235803]: 2025-10-02 12:26:14.010 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Successfully created port: d1b1a282-3a38-454d-bc99-885b75bac9cc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:26:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:14.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:14 np0005466031 nova_compute[235803]: 2025-10-02 12:26:14.847 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Successfully created port: a619a50d-dbe2-4780-a273-9b1db89a98f7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:26:15 np0005466031 nova_compute[235803]: 2025-10-02 12:26:15.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:15.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:15 np0005466031 podman[260594]: 2025-10-02 12:26:15.675682291 +0000 UTC m=+0.061803067 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid)
Oct  2 08:26:15 np0005466031 nova_compute[235803]: 2025-10-02 12:26:15.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:15 np0005466031 podman[260593]: 2025-10-02 12:26:15.700652671 +0000 UTC m=+0.090975876 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:26:15 np0005466031 nova_compute[235803]: 2025-10-02 12:26:15.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:16.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:16 np0005466031 nova_compute[235803]: 2025-10-02 12:26:16.735 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Successfully updated port: c31a45fc-37b9-4809-89b1-839d4e85765d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:26:17 np0005466031 nova_compute[235803]: 2025-10-02 12:26:17.015 2 DEBUG nova.compute.manager [req-eab5208f-73a7-41bb-ba16-6e197b1e4dba req-51f44b51-3353-4575-abfe-eff99b498621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-changed-c31a45fc-37b9-4809-89b1-839d4e85765d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:17 np0005466031 nova_compute[235803]: 2025-10-02 12:26:17.015 2 DEBUG nova.compute.manager [req-eab5208f-73a7-41bb-ba16-6e197b1e4dba req-51f44b51-3353-4575-abfe-eff99b498621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing instance network info cache due to event network-changed-c31a45fc-37b9-4809-89b1-839d4e85765d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:26:17 np0005466031 nova_compute[235803]: 2025-10-02 12:26:17.015 2 DEBUG oslo_concurrency.lockutils [req-eab5208f-73a7-41bb-ba16-6e197b1e4dba req-51f44b51-3353-4575-abfe-eff99b498621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:26:17 np0005466031 nova_compute[235803]: 2025-10-02 12:26:17.015 2 DEBUG oslo_concurrency.lockutils [req-eab5208f-73a7-41bb-ba16-6e197b1e4dba req-51f44b51-3353-4575-abfe-eff99b498621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:26:17 np0005466031 nova_compute[235803]: 2025-10-02 12:26:17.016 2 DEBUG nova.network.neutron [req-eab5208f-73a7-41bb-ba16-6e197b1e4dba req-51f44b51-3353-4575-abfe-eff99b498621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing network info cache for port c31a45fc-37b9-4809-89b1-839d4e85765d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:26:17 np0005466031 nova_compute[235803]: 2025-10-02 12:26:17.381 2 DEBUG nova.network.neutron [req-eab5208f-73a7-41bb-ba16-6e197b1e4dba req-51f44b51-3353-4575-abfe-eff99b498621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:26:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:17.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:18 np0005466031 nova_compute[235803]: 2025-10-02 12:26:18.029 2 DEBUG nova.network.neutron [req-eab5208f-73a7-41bb-ba16-6e197b1e4dba req-51f44b51-3353-4575-abfe-eff99b498621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:26:18 np0005466031 nova_compute[235803]: 2025-10-02 12:26:18.043 2 DEBUG oslo_concurrency.lockutils [req-eab5208f-73a7-41bb-ba16-6e197b1e4dba req-51f44b51-3353-4575-abfe-eff99b498621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:26:18 np0005466031 nova_compute[235803]: 2025-10-02 12:26:18.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:18 np0005466031 nova_compute[235803]: 2025-10-02 12:26:18.257 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Successfully updated port: dd45e845-2479-49a6-a571-33984e911f3c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:26:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:18.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:19 np0005466031 nova_compute[235803]: 2025-10-02 12:26:19.188 2 DEBUG nova.compute.manager [req-d4825c0c-c52c-46c1-b435-7cca1c28530b req-6be95c7c-257e-4410-b547-c8b2a47b9cff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-changed-dd45e845-2479-49a6-a571-33984e911f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:19 np0005466031 nova_compute[235803]: 2025-10-02 12:26:19.188 2 DEBUG nova.compute.manager [req-d4825c0c-c52c-46c1-b435-7cca1c28530b req-6be95c7c-257e-4410-b547-c8b2a47b9cff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing instance network info cache due to event network-changed-dd45e845-2479-49a6-a571-33984e911f3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:26:19 np0005466031 nova_compute[235803]: 2025-10-02 12:26:19.189 2 DEBUG oslo_concurrency.lockutils [req-d4825c0c-c52c-46c1-b435-7cca1c28530b req-6be95c7c-257e-4410-b547-c8b2a47b9cff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:26:19 np0005466031 nova_compute[235803]: 2025-10-02 12:26:19.189 2 DEBUG oslo_concurrency.lockutils [req-d4825c0c-c52c-46c1-b435-7cca1c28530b req-6be95c7c-257e-4410-b547-c8b2a47b9cff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:26:19 np0005466031 nova_compute[235803]: 2025-10-02 12:26:19.189 2 DEBUG nova.network.neutron [req-d4825c0c-c52c-46c1-b435-7cca1c28530b req-6be95c7c-257e-4410-b547-c8b2a47b9cff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing network info cache for port dd45e845-2479-49a6-a571-33984e911f3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:26:19 np0005466031 nova_compute[235803]: 2025-10-02 12:26:19.466 2 DEBUG nova.network.neutron [req-d4825c0c-c52c-46c1-b435-7cca1c28530b req-6be95c7c-257e-4410-b547-c8b2a47b9cff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:26:19 np0005466031 nova_compute[235803]: 2025-10-02 12:26:19.493 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Successfully updated port: 705ea63d-4c9b-450a-ac81-c5bf6ef0c274 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:26:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:20.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:20 np0005466031 nova_compute[235803]: 2025-10-02 12:26:20.535 2 DEBUG nova.network.neutron [req-d4825c0c-c52c-46c1-b435-7cca1c28530b req-6be95c7c-257e-4410-b547-c8b2a47b9cff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:26:20 np0005466031 nova_compute[235803]: 2025-10-02 12:26:20.559 2 DEBUG oslo_concurrency.lockutils [req-d4825c0c-c52c-46c1-b435-7cca1c28530b req-6be95c7c-257e-4410-b547-c8b2a47b9cff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:26:20 np0005466031 nova_compute[235803]: 2025-10-02 12:26:20.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:21 np0005466031 nova_compute[235803]: 2025-10-02 12:26:21.134 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:21 np0005466031 nova_compute[235803]: 2025-10-02 12:26:21.185 2 WARNING nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Oct  2 08:26:21 np0005466031 nova_compute[235803]: 2025-10-02 12:26:21.186 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Triggering sync for uuid 776370c1-1213-4676-b85e-ce1c0491afc6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:26:21 np0005466031 nova_compute[235803]: 2025-10-02 12:26:21.186 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:21 np0005466031 nova_compute[235803]: 2025-10-02 12:26:21.348 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Successfully updated port: 006d1393-a12a-44ea-9d1c-ba017fde9058 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:26:21 np0005466031 nova_compute[235803]: 2025-10-02 12:26:21.445 2 DEBUG nova.compute.manager [req-6e5c3490-73a0-43c5-8f0c-bc14187cf6e1 req-c06dc67b-71da-42f0-a65b-b47efdae13b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-changed-705ea63d-4c9b-450a-ac81-c5bf6ef0c274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:21 np0005466031 nova_compute[235803]: 2025-10-02 12:26:21.445 2 DEBUG nova.compute.manager [req-6e5c3490-73a0-43c5-8f0c-bc14187cf6e1 req-c06dc67b-71da-42f0-a65b-b47efdae13b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing instance network info cache due to event network-changed-705ea63d-4c9b-450a-ac81-c5bf6ef0c274. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:26:21 np0005466031 nova_compute[235803]: 2025-10-02 12:26:21.446 2 DEBUG oslo_concurrency.lockutils [req-6e5c3490-73a0-43c5-8f0c-bc14187cf6e1 req-c06dc67b-71da-42f0-a65b-b47efdae13b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:26:21 np0005466031 nova_compute[235803]: 2025-10-02 12:26:21.446 2 DEBUG oslo_concurrency.lockutils [req-6e5c3490-73a0-43c5-8f0c-bc14187cf6e1 req-c06dc67b-71da-42f0-a65b-b47efdae13b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:26:21 np0005466031 nova_compute[235803]: 2025-10-02 12:26:21.446 2 DEBUG nova.network.neutron [req-6e5c3490-73a0-43c5-8f0c-bc14187cf6e1 req-c06dc67b-71da-42f0-a65b-b47efdae13b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing network info cache for port 705ea63d-4c9b-450a-ac81-c5bf6ef0c274 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:26:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:21.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:22.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:22 np0005466031 nova_compute[235803]: 2025-10-02 12:26:22.625 2 DEBUG nova.network.neutron [req-6e5c3490-73a0-43c5-8f0c-bc14187cf6e1 req-c06dc67b-71da-42f0-a65b-b47efdae13b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:26:23 np0005466031 nova_compute[235803]: 2025-10-02 12:26:23.158 2 DEBUG nova.network.neutron [req-6e5c3490-73a0-43c5-8f0c-bc14187cf6e1 req-c06dc67b-71da-42f0-a65b-b47efdae13b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:26:23 np0005466031 nova_compute[235803]: 2025-10-02 12:26:23.195 2 DEBUG oslo_concurrency.lockutils [req-6e5c3490-73a0-43c5-8f0c-bc14187cf6e1 req-c06dc67b-71da-42f0-a65b-b47efdae13b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:26:23 np0005466031 nova_compute[235803]: 2025-10-02 12:26:23.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:23 np0005466031 nova_compute[235803]: 2025-10-02 12:26:23.537 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Successfully updated port: 22f1362c-d698-4f08-b8a3-4a4f609ef2b5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:26:23 np0005466031 nova_compute[235803]: 2025-10-02 12:26:23.574 2 DEBUG nova.compute.manager [req-bdb9d25f-5888-4cff-ac30-bfae02e227a5 req-dcc14e81-5c9e-46df-be5e-323876d8a0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-changed-006d1393-a12a-44ea-9d1c-ba017fde9058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:23 np0005466031 nova_compute[235803]: 2025-10-02 12:26:23.574 2 DEBUG nova.compute.manager [req-bdb9d25f-5888-4cff-ac30-bfae02e227a5 req-dcc14e81-5c9e-46df-be5e-323876d8a0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing instance network info cache due to event network-changed-006d1393-a12a-44ea-9d1c-ba017fde9058. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:26:23 np0005466031 nova_compute[235803]: 2025-10-02 12:26:23.574 2 DEBUG oslo_concurrency.lockutils [req-bdb9d25f-5888-4cff-ac30-bfae02e227a5 req-dcc14e81-5c9e-46df-be5e-323876d8a0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:26:23 np0005466031 nova_compute[235803]: 2025-10-02 12:26:23.574 2 DEBUG oslo_concurrency.lockutils [req-bdb9d25f-5888-4cff-ac30-bfae02e227a5 req-dcc14e81-5c9e-46df-be5e-323876d8a0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:26:23 np0005466031 nova_compute[235803]: 2025-10-02 12:26:23.575 2 DEBUG nova.network.neutron [req-bdb9d25f-5888-4cff-ac30-bfae02e227a5 req-dcc14e81-5c9e-46df-be5e-323876d8a0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing network info cache for port 006d1393-a12a-44ea-9d1c-ba017fde9058 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:26:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:23.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:24 np0005466031 nova_compute[235803]: 2025-10-02 12:26:24.149 2 DEBUG nova.network.neutron [req-bdb9d25f-5888-4cff-ac30-bfae02e227a5 req-dcc14e81-5c9e-46df-be5e-323876d8a0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:26:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:26:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:24.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:26:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:25.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:25 np0005466031 nova_compute[235803]: 2025-10-02 12:26:25.648 2 DEBUG nova.network.neutron [req-bdb9d25f-5888-4cff-ac30-bfae02e227a5 req-dcc14e81-5c9e-46df-be5e-323876d8a0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:26:25 np0005466031 nova_compute[235803]: 2025-10-02 12:26:25.681 2 DEBUG oslo_concurrency.lockutils [req-bdb9d25f-5888-4cff-ac30-bfae02e227a5 req-dcc14e81-5c9e-46df-be5e-323876d8a0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:26:25 np0005466031 nova_compute[235803]: 2025-10-02 12:26:25.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:25.831 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:25.832 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:25.832 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:26 np0005466031 nova_compute[235803]: 2025-10-02 12:26:26.047 2 DEBUG nova.compute.manager [req-bf09ea57-dbb7-41e8-bda7-ac53edc28682 req-79abed41-27ed-4ff6-b811-7a34b7cd9de4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-changed-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:26 np0005466031 nova_compute[235803]: 2025-10-02 12:26:26.048 2 DEBUG nova.compute.manager [req-bf09ea57-dbb7-41e8-bda7-ac53edc28682 req-79abed41-27ed-4ff6-b811-7a34b7cd9de4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing instance network info cache due to event network-changed-22f1362c-d698-4f08-b8a3-4a4f609ef2b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:26:26 np0005466031 nova_compute[235803]: 2025-10-02 12:26:26.048 2 DEBUG oslo_concurrency.lockutils [req-bf09ea57-dbb7-41e8-bda7-ac53edc28682 req-79abed41-27ed-4ff6-b811-7a34b7cd9de4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:26:26 np0005466031 nova_compute[235803]: 2025-10-02 12:26:26.048 2 DEBUG oslo_concurrency.lockutils [req-bf09ea57-dbb7-41e8-bda7-ac53edc28682 req-79abed41-27ed-4ff6-b811-7a34b7cd9de4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:26:26 np0005466031 nova_compute[235803]: 2025-10-02 12:26:26.048 2 DEBUG nova.network.neutron [req-bf09ea57-dbb7-41e8-bda7-ac53edc28682 req-79abed41-27ed-4ff6-b811-7a34b7cd9de4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing network info cache for port 22f1362c-d698-4f08-b8a3-4a4f609ef2b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:26:26 np0005466031 nova_compute[235803]: 2025-10-02 12:26:26.109 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Successfully updated port: d1b1a282-3a38-454d-bc99-885b75bac9cc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:26:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:26.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:27 np0005466031 nova_compute[235803]: 2025-10-02 12:26:27.328 2 DEBUG nova.network.neutron [req-bf09ea57-dbb7-41e8-bda7-ac53edc28682 req-79abed41-27ed-4ff6-b811-7a34b7cd9de4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:26:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:27.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:28 np0005466031 nova_compute[235803]: 2025-10-02 12:26:28.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:28 np0005466031 nova_compute[235803]: 2025-10-02 12:26:28.386 2 DEBUG nova.compute.manager [req-a5b3acd6-84e4-42a1-96b0-96b1cb21551c req-308e6bea-efc2-4211-8e1b-feb40dbf3d2d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-changed-d1b1a282-3a38-454d-bc99-885b75bac9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:28 np0005466031 nova_compute[235803]: 2025-10-02 12:26:28.386 2 DEBUG nova.compute.manager [req-a5b3acd6-84e4-42a1-96b0-96b1cb21551c req-308e6bea-efc2-4211-8e1b-feb40dbf3d2d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing instance network info cache due to event network-changed-d1b1a282-3a38-454d-bc99-885b75bac9cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:26:28 np0005466031 nova_compute[235803]: 2025-10-02 12:26:28.387 2 DEBUG oslo_concurrency.lockutils [req-a5b3acd6-84e4-42a1-96b0-96b1cb21551c req-308e6bea-efc2-4211-8e1b-feb40dbf3d2d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:26:28 np0005466031 nova_compute[235803]: 2025-10-02 12:26:28.388 2 DEBUG nova.network.neutron [req-bf09ea57-dbb7-41e8-bda7-ac53edc28682 req-79abed41-27ed-4ff6-b811-7a34b7cd9de4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:26:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:28.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:28 np0005466031 nova_compute[235803]: 2025-10-02 12:26:28.630 2 DEBUG oslo_concurrency.lockutils [req-bf09ea57-dbb7-41e8-bda7-ac53edc28682 req-79abed41-27ed-4ff6-b811-7a34b7cd9de4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:26:28 np0005466031 nova_compute[235803]: 2025-10-02 12:26:28.631 2 DEBUG oslo_concurrency.lockutils [req-a5b3acd6-84e4-42a1-96b0-96b1cb21551c req-308e6bea-efc2-4211-8e1b-feb40dbf3d2d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:26:28 np0005466031 nova_compute[235803]: 2025-10-02 12:26:28.632 2 DEBUG nova.network.neutron [req-a5b3acd6-84e4-42a1-96b0-96b1cb21551c req-308e6bea-efc2-4211-8e1b-feb40dbf3d2d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing network info cache for port d1b1a282-3a38-454d-bc99-885b75bac9cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:26:29 np0005466031 nova_compute[235803]: 2025-10-02 12:26:29.373 2 DEBUG nova.network.neutron [req-a5b3acd6-84e4-42a1-96b0-96b1cb21551c req-308e6bea-efc2-4211-8e1b-feb40dbf3d2d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:26:29 np0005466031 nova_compute[235803]: 2025-10-02 12:26:29.454 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Successfully updated port: a619a50d-dbe2-4780-a273-9b1db89a98f7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:26:29 np0005466031 nova_compute[235803]: 2025-10-02 12:26:29.492 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:26:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:29.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:30 np0005466031 nova_compute[235803]: 2025-10-02 12:26:30.253 2 DEBUG nova.network.neutron [req-a5b3acd6-84e4-42a1-96b0-96b1cb21551c req-308e6bea-efc2-4211-8e1b-feb40dbf3d2d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:26:30 np0005466031 nova_compute[235803]: 2025-10-02 12:26:30.279 2 DEBUG oslo_concurrency.lockutils [req-a5b3acd6-84e4-42a1-96b0-96b1cb21551c req-308e6bea-efc2-4211-8e1b-feb40dbf3d2d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:26:30 np0005466031 nova_compute[235803]: 2025-10-02 12:26:30.280 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquired lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:26:30 np0005466031 nova_compute[235803]: 2025-10-02 12:26:30.280 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:26:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:30.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:30 np0005466031 nova_compute[235803]: 2025-10-02 12:26:30.566 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:26:30 np0005466031 nova_compute[235803]: 2025-10-02 12:26:30.591 2 DEBUG nova.compute.manager [req-be4ef26a-ecb6-41ba-ab62-e178337685ea req-97fcef74-dfe7-4a6e-90db-f4eeda0d5fd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-changed-a619a50d-dbe2-4780-a273-9b1db89a98f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:30 np0005466031 nova_compute[235803]: 2025-10-02 12:26:30.591 2 DEBUG nova.compute.manager [req-be4ef26a-ecb6-41ba-ab62-e178337685ea req-97fcef74-dfe7-4a6e-90db-f4eeda0d5fd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing instance network info cache due to event network-changed-a619a50d-dbe2-4780-a273-9b1db89a98f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:26:30 np0005466031 nova_compute[235803]: 2025-10-02 12:26:30.592 2 DEBUG oslo_concurrency.lockutils [req-be4ef26a-ecb6-41ba-ab62-e178337685ea req-97fcef74-dfe7-4a6e-90db-f4eeda0d5fd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:26:30 np0005466031 nova_compute[235803]: 2025-10-02 12:26:30.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:26:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:31.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:26:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:32.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:33 np0005466031 nova_compute[235803]: 2025-10-02 12:26:33.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:33.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:26:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:34.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:26:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:35.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:35 np0005466031 nova_compute[235803]: 2025-10-02 12:26:35.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:36.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:37 np0005466031 podman[260693]: 2025-10-02 12:26:37.617236552 +0000 UTC m=+0.051316430 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:26:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:37.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:37 np0005466031 podman[260694]: 2025-10-02 12:26:37.668395435 +0000 UTC m=+0.100281190 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 08:26:38 np0005466031 nova_compute[235803]: 2025-10-02 12:26:38.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:38.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:39.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:39 np0005466031 nova_compute[235803]: 2025-10-02 12:26:39.689 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:40.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:40 np0005466031 nova_compute[235803]: 2025-10-02 12:26:40.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:40 np0005466031 nova_compute[235803]: 2025-10-02 12:26:40.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:41 np0005466031 nova_compute[235803]: 2025-10-02 12:26:41.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:41.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:41 np0005466031 nova_compute[235803]: 2025-10-02 12:26:41.710 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:41 np0005466031 nova_compute[235803]: 2025-10-02 12:26:41.711 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:41 np0005466031 nova_compute[235803]: 2025-10-02 12:26:41.711 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:41 np0005466031 nova_compute[235803]: 2025-10-02 12:26:41.711 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:26:41 np0005466031 nova_compute[235803]: 2025-10-02 12:26:41.711 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:26:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:26:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:26:42 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1937974989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:26:42 np0005466031 nova_compute[235803]: 2025-10-02 12:26:42.195 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:42 np0005466031 nova_compute[235803]: 2025-10-02 12:26:42.390 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:26:42 np0005466031 nova_compute[235803]: 2025-10-02 12:26:42.391 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4679MB free_disk=20.942699432373047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:26:42 np0005466031 nova_compute[235803]: 2025-10-02 12:26:42.392 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:42 np0005466031 nova_compute[235803]: 2025-10-02 12:26:42.392 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:26:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 24K writes, 96K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s#012Cumulative WAL: 24K writes, 8501 syncs, 2.86 writes per sync, written: 0.09 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9713 writes, 37K keys, 9713 commit groups, 1.0 writes per commit group, ingest: 38.51 MB, 0.06 MB/s#012Interval WAL: 9713 writes, 3884 syncs, 2.50 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:26:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:42.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:42 np0005466031 nova_compute[235803]: 2025-10-02 12:26:42.497 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 776370c1-1213-4676-b85e-ce1c0491afc6 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:26:42 np0005466031 nova_compute[235803]: 2025-10-02 12:26:42.497 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:26:42 np0005466031 nova_compute[235803]: 2025-10-02 12:26:42.497 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:26:42 np0005466031 nova_compute[235803]: 2025-10-02 12:26:42.656 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:26:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1347811142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:26:43 np0005466031 nova_compute[235803]: 2025-10-02 12:26:43.076 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:43 np0005466031 nova_compute[235803]: 2025-10-02 12:26:43.081 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:26:43 np0005466031 nova_compute[235803]: 2025-10-02 12:26:43.100 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:26:43 np0005466031 nova_compute[235803]: 2025-10-02 12:26:43.124 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:26:43 np0005466031 nova_compute[235803]: 2025-10-02 12:26:43.125 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:43 np0005466031 nova_compute[235803]: 2025-10-02 12:26:43.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:43.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:44 np0005466031 nova_compute[235803]: 2025-10-02 12:26:44.125 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:44.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:44 np0005466031 nova_compute[235803]: 2025-10-02 12:26:44.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:44 np0005466031 nova_compute[235803]: 2025-10-02 12:26:44.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:26:44 np0005466031 nova_compute[235803]: 2025-10-02 12:26:44.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:26:44 np0005466031 nova_compute[235803]: 2025-10-02 12:26:44.661 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:26:44 np0005466031 nova_compute[235803]: 2025-10-02 12:26:44.662 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:26:44 np0005466031 nova_compute[235803]: 2025-10-02 12:26:44.662 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:44 np0005466031 nova_compute[235803]: 2025-10-02 12:26:44.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:44.906 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:26:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:44.909 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:26:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:45.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:45 np0005466031 nova_compute[235803]: 2025-10-02 12:26:45.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:46.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:46 np0005466031 podman[260919]: 2025-10-02 12:26:46.622818805 +0000 UTC m=+0.053598745 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct  2 08:26:46 np0005466031 podman[260920]: 2025-10-02 12:26:46.648474304 +0000 UTC m=+0.074497608 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:26:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:47.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.005 2 DEBUG nova.network.neutron [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [{"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "address": "fa:16:3e:26:0b:3e", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f1362c-d6", "ovs_interfaceid": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "address": "fa:16:3e:d7:7d:47", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1b1a282-3a", "ovs_interfaceid": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "address": "fa:16:3e:fa:c6:b6", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa619a50d-db", "ovs_interfaceid": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.060 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Releasing lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.061 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Instance network_info: |[{"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "address": "fa:16:3e:26:0b:3e", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f1362c-d6", "ovs_interfaceid": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "address": "fa:16:3e:d7:7d:47", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1b1a282-3a", "ovs_interfaceid": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "address": "fa:16:3e:fa:c6:b6", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa619a50d-db", "ovs_interfaceid": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.061 2 DEBUG oslo_concurrency.lockutils [req-be4ef26a-ecb6-41ba-ab62-e178337685ea req-97fcef74-dfe7-4a6e-90db-f4eeda0d5fd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.061 2 DEBUG nova.network.neutron [req-be4ef26a-ecb6-41ba-ab62-e178337685ea req-97fcef74-dfe7-4a6e-90db-f4eeda0d5fd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing network info cache for port a619a50d-dbe2-4780-a273-9b1db89a98f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.068 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Start _get_guest_xml network_info=[{"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "address": "fa:16:3e:26:0b:3e", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f1362c-d6", "ovs_interfaceid": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "address": "fa:16:3e:d7:7d:47", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1b1a282-3a", "ovs_interfaceid": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "address": "fa:16:3e:fa:c6:b6", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa619a50d-db", "ovs_interfaceid": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk', 'boot_index': '2'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk', 'boot_index': '3'}, 
'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T1
Oct  2 08:26:48 np0005466031 nova_compute[235803]: in_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d91ade79-bea0-4f14-93f8-80a864c83dfa', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd91ade79-bea0-4f14-93f8-80a864c83dfa', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '776370c1-1213-4676-b85e-ce1c0491afc6', 'attached_at': '', 'detached_at': '', 'volume_id': 'd91ade79-bea0-4f14-93f8-80a864c83dfa', 'serial': 'd91ade79-bea0-4f14-93f8-80a864c83dfa'}, 'attachment_id': 'd13ebf65-b423-4dcf-8bca-485d4eec8ed2', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}, {'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d87ab1d9-322f-4ca3-8b9b-14a670a2e320', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd87ab1d9-322f-4ca3-8b9b-14a670a2e320', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '776370c1-1213-4676-b85e-ce1c0491afc6', 'attached_at': '', 'detached_at': '', 'volume_id': 'd87ab1d9-322f-4ca3-8b9b-14a670a2e320', 
'serial': 'd87ab1d9-322f-4ca3-8b9b-14a670a2e320'}, 'attachment_id': '723939dd-8ab6-441b-821d-ba26b6f0a41c', 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'guest_format': None, 'boot_index': 1, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}, {'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3cae6300-aa08-440b-8899-aaa48fab86bd', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3cae6300-aa08-440b-8899-aaa48fab86bd', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '776370c1-1213-4676-b85e-ce1c0491afc6', 'attached_at': '', 'detached_at': '', 'volume_id': '3cae6300-aa08-440b-8899-aaa48fab86bd', 'serial': '3cae6300-aa08-440b-8899-aaa48fab86bd'}, 'attachment_id': '5bc54f85-aaa5-4214-877f-7aa78ebc1f3e', 'delete_on_termination': False, 'mount_device': '/dev/vdc', 'guest_format': None, 'boot_index': 2, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.072 2 WARNING nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.076 2 DEBUG nova.virt.libvirt.host [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.077 2 DEBUG nova.virt.libvirt.host [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.082 2 DEBUG nova.virt.libvirt.host [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.082 2 DEBUG nova.virt.libvirt.host [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.083 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.083 2 DEBUG nova.virt.hardware [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.084 2 DEBUG nova.virt.hardware [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.084 2 DEBUG nova.virt.hardware [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.084 2 DEBUG nova.virt.hardware [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.084 2 DEBUG nova.virt.hardware [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.084 2 DEBUG nova.virt.hardware [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.085 2 DEBUG nova.virt.hardware [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.085 2 DEBUG nova.virt.hardware [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.085 2 DEBUG nova.virt.hardware [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.085 2 DEBUG nova.virt.hardware [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.085 2 DEBUG nova.virt.hardware [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.109 2 DEBUG nova.storage.rbd_utils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] rbd image 776370c1-1213-4676-b85e-ce1c0491afc6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.113 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:48 np0005466031 rsyslogd[1006]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 12:26:48.068 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:48.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:26:48 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2080025941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.519 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.612 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image
_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.613 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.613 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:a5:91,bridge_name='br-int',has_traffic_filtering=True,id=c31a45fc-37b9-4809-89b1-839d4e85765d,network=Network(6d1afc59-3ec5-4518-a68b-f8ab041976c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc31a45fc-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.614 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image
_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.614 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.615 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:4a:42,bridge_name='br-int',has_traffic_filtering=True,id=dd45e845-2479-49a6-a571-33984e911f3c,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdd45e845-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.616 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image
_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.616 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.616 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c5:38,bridge_name='br-int',has_traffic_filtering=True,id=705ea63d-4c9b-450a-ac81-c5bf6ef0c274,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap705ea63d-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.617 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image
_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.617 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.618 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:bc:17,bridge_name='br-int',has_traffic_filtering=True,id=006d1393-a12a-44ea-9d1c-ba017fde9058,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap006d1393-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.618 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image
_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "address": "fa:16:3e:26:0b:3e", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f1362c-d6", "ovs_interfaceid": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.618 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "address": "fa:16:3e:26:0b:3e", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f1362c-d6", "ovs_interfaceid": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.619 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:0b:3e,bridge_name='br-int',has_traffic_filtering=True,id=22f1362c-d698-4f08-b8a3-4a4f609ef2b5,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f1362c-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.619 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image
_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "address": "fa:16:3e:d7:7d:47", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1b1a282-3a", "ovs_interfaceid": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.620 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "address": "fa:16:3e:d7:7d:47", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1b1a282-3a", "ovs_interfaceid": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.620 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:7d:47,bridge_name='br-int',has_traffic_filtering=True,id=d1b1a282-3a38-454d-bc99-885b75bac9cc,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1b1a282-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.621 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image
_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "address": "fa:16:3e:fa:c6:b6", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa619a50d-db", "ovs_interfaceid": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.621 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "address": "fa:16:3e:fa:c6:b6", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa619a50d-db", "ovs_interfaceid": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.622 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=a619a50d-dbe2-4780-a273-9b1db89a98f7,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa619a50d-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.622 2 DEBUG nova.objects.instance [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lazy-loading 'pci_devices' on Instance uuid 776370c1-1213-4676-b85e-ce1c0491afc6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.643 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  <uuid>776370c1-1213-4676-b85e-ce1c0491afc6</uuid>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  <name>instance-00000039</name>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <nova:name>tempest-device-tagging-server-667401928</nova:name>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:26:48</nova:creationTime>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:user uuid="94e0e2f26a1648368032ab7e6732655c">tempest-TaggedBootDevicesTest_v242-1211830922-project-member</nova:user>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:project uuid="6d96bae071ef4595bd93c956dd20796c">tempest-TaggedBootDevicesTest_v242-1211830922</nova:project>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:port uuid="c31a45fc-37b9-4809-89b1-839d4e85765d">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:port uuid="dd45e845-2479-49a6-a571-33984e911f3c">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.1.1.166" ipVersion="4"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:port uuid="705ea63d-4c9b-450a-ac81-c5bf6ef0c274">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.1.1.81" ipVersion="4"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:port uuid="006d1393-a12a-44ea-9d1c-ba017fde9058">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.1.1.82" ipVersion="4"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:port uuid="22f1362c-d698-4f08-b8a3-4a4f609ef2b5">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.1.1.157" ipVersion="4"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:port uuid="d1b1a282-3a38-454d-bc99-885b75bac9cc">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.2.2.100" ipVersion="4"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <nova:port uuid="a619a50d-dbe2-4780-a273-9b1db89a98f7">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.2.2.200" ipVersion="4"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <entry name="serial">776370c1-1213-4676-b85e-ce1c0491afc6</entry>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <entry name="uuid">776370c1-1213-4676-b85e-ce1c0491afc6</entry>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/776370c1-1213-4676-b85e-ce1c0491afc6_disk.config">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:26:48 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-d91ade79-bea0-4f14-93f8-80a864c83dfa">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <serial>d91ade79-bea0-4f14-93f8-80a864c83dfa</serial>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-d87ab1d9-322f-4ca3-8b9b-14a670a2e320">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <serial>d87ab1d9-322f-4ca3-8b9b-14a670a2e320</serial>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-3cae6300-aa08-440b-8899-aaa48fab86bd">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <target dev="vdc" bus="virtio"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <serial>3cae6300-aa08-440b-8899-aaa48fab86bd</serial>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:94:a5:91"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <target dev="tapc31a45fc-37"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:dc:4a:42"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <target dev="tapdd45e845-24"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:7a:c5:38"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <target dev="tap705ea63d-4c"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:e6:bc:17"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <target dev="tap006d1393-a1"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:26:0b:3e"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <target dev="tap22f1362c-d6"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:d7:7d:47"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <target dev="tapd1b1a282-3a"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:fa:c6:b6"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <target dev="tapa619a50d-db"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/776370c1-1213-4676-b85e-ce1c0491afc6/console.log" append="off"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:26:48 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:26:48 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:26:48 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:26:48 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.644 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Preparing to wait for external event network-vif-plugged-c31a45fc-37b9-4809-89b1-839d4e85765d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.645 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.645 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.645 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.645 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Preparing to wait for external event network-vif-plugged-dd45e845-2479-49a6-a571-33984e911f3c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.645 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.646 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.646 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.646 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Preparing to wait for external event network-vif-plugged-705ea63d-4c9b-450a-ac81-c5bf6ef0c274 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.646 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.646 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.647 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.647 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Preparing to wait for external event network-vif-plugged-006d1393-a12a-44ea-9d1c-ba017fde9058 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.647 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.647 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.647 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.648 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Preparing to wait for external event network-vif-plugged-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.648 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.648 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.648 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.648 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Preparing to wait for external event network-vif-plugged-d1b1a282-3a38-454d-bc99-885b75bac9cc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.649 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.649 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.649 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.649 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Preparing to wait for external event network-vif-plugged-a619a50d-dbe2-4780-a273-9b1db89a98f7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.649 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.650 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.650 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.651 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='vir
tio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.651 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.652 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:a5:91,bridge_name='br-int',has_traffic_filtering=True,id=c31a45fc-37b9-4809-89b1-839d4e85765d,network=Network(6d1afc59-3ec5-4518-a68b-f8ab041976c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc31a45fc-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.652 2 DEBUG os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:a5:91,bridge_name='br-int',has_traffic_filtering=True,id=c31a45fc-37b9-4809-89b1-839d4e85765d,network=Network(6d1afc59-3ec5-4518-a68b-f8ab041976c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc31a45fc-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.653 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.653 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.655 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc31a45fc-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc31a45fc-37, col_values=(('external_ids', {'iface-id': 'c31a45fc-37b9-4809-89b1-839d4e85765d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:a5:91', 'vm-uuid': '776370c1-1213-4676-b85e-ce1c0491afc6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 NetworkManager[44907]: <info>  [1759408008.6590] manager: (tapc31a45fc-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.666 2 INFO os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:a5:91,bridge_name='br-int',has_traffic_filtering=True,id=c31a45fc-37b9-4809-89b1-839d4e85765d,network=Network(6d1afc59-3ec5-4518-a68b-f8ab041976c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc31a45fc-37')#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.667 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.667 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.667 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:4a:42,bridge_name='br-int',has_traffic_filtering=True,id=dd45e845-2479-49a6-a571-33984e911f3c,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdd45e845-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.668 2 DEBUG os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:4a:42,bridge_name='br-int',has_traffic_filtering=True,id=dd45e845-2479-49a6-a571-33984e911f3c,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdd45e845-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.668 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.669 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.670 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd45e845-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.670 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdd45e845-24, col_values=(('external_ids', {'iface-id': 'dd45e845-2479-49a6-a571-33984e911f3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:4a:42', 'vm-uuid': '776370c1-1213-4676-b85e-ce1c0491afc6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 NetworkManager[44907]: <info>  [1759408008.6736] manager: (tapdd45e845-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.680 2 INFO os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:4a:42,bridge_name='br-int',has_traffic_filtering=True,id=dd45e845-2479-49a6-a571-33984e911f3c,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdd45e845-24')#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.680 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.681 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.681 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c5:38,bridge_name='br-int',has_traffic_filtering=True,id=705ea63d-4c9b-450a-ac81-c5bf6ef0c274,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap705ea63d-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.682 2 DEBUG os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c5:38,bridge_name='br-int',has_traffic_filtering=True,id=705ea63d-4c9b-450a-ac81-c5bf6ef0c274,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap705ea63d-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.682 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.682 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap705ea63d-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.685 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap705ea63d-4c, col_values=(('external_ids', {'iface-id': '705ea63d-4c9b-450a-ac81-c5bf6ef0c274', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:c5:38', 'vm-uuid': '776370c1-1213-4676-b85e-ce1c0491afc6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 NetworkManager[44907]: <info>  [1759408008.6872] manager: (tap705ea63d-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.697 2 INFO os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c5:38,bridge_name='br-int',has_traffic_filtering=True,id=705ea63d-4c9b-450a-ac81-c5bf6ef0c274,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap705ea63d-4c')#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.698 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.698 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.699 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:bc:17,bridge_name='br-int',has_traffic_filtering=True,id=006d1393-a12a-44ea-9d1c-ba017fde9058,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap006d1393-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.699 2 DEBUG os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:bc:17,bridge_name='br-int',has_traffic_filtering=True,id=006d1393-a12a-44ea-9d1c-ba017fde9058,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap006d1393-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.700 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.700 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap006d1393-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.703 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap006d1393-a1, col_values=(('external_ids', {'iface-id': '006d1393-a12a-44ea-9d1c-ba017fde9058', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:bc:17', 'vm-uuid': '776370c1-1213-4676-b85e-ce1c0491afc6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 NetworkManager[44907]: <info>  [1759408008.7049] manager: (tap006d1393-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.717 2 INFO os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:bc:17,bridge_name='br-int',has_traffic_filtering=True,id=006d1393-a12a-44ea-9d1c-ba017fde9058,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap006d1393-a1')#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.717 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "address": "fa:16:3e:26:0b:3e", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f1362c-d6", "ovs_interfaceid": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.718 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "address": "fa:16:3e:26:0b:3e", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f1362c-d6", "ovs_interfaceid": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.718 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:0b:3e,bridge_name='br-int',has_traffic_filtering=True,id=22f1362c-d698-4f08-b8a3-4a4f609ef2b5,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f1362c-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.718 2 DEBUG os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:0b:3e,bridge_name='br-int',has_traffic_filtering=True,id=22f1362c-d698-4f08-b8a3-4a4f609ef2b5,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f1362c-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.719 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.719 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.722 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22f1362c-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.722 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap22f1362c-d6, col_values=(('external_ids', {'iface-id': '22f1362c-d698-4f08-b8a3-4a4f609ef2b5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:0b:3e', 'vm-uuid': '776370c1-1213-4676-b85e-ce1c0491afc6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 NetworkManager[44907]: <info>  [1759408008.7241] manager: (tap22f1362c-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.739 2 INFO os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:0b:3e,bridge_name='br-int',has_traffic_filtering=True,id=22f1362c-d698-4f08-b8a3-4a4f609ef2b5,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f1362c-d6')#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.740 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='vir
tio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "address": "fa:16:3e:d7:7d:47", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1b1a282-3a", "ovs_interfaceid": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.740 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "address": "fa:16:3e:d7:7d:47", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1b1a282-3a", "ovs_interfaceid": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.741 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:7d:47,bridge_name='br-int',has_traffic_filtering=True,id=d1b1a282-3a38-454d-bc99-885b75bac9cc,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1b1a282-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.741 2 DEBUG os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:7d:47,bridge_name='br-int',has_traffic_filtering=True,id=d1b1a282-3a38-454d-bc99-885b75bac9cc,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1b1a282-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.742 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1b1a282-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.746 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1b1a282-3a, col_values=(('external_ids', {'iface-id': 'd1b1a282-3a38-454d-bc99-885b75bac9cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:7d:47', 'vm-uuid': '776370c1-1213-4676-b85e-ce1c0491afc6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 NetworkManager[44907]: <info>  [1759408008.7483] manager: (tapd1b1a282-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.764 2 INFO os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:7d:47,bridge_name='br-int',has_traffic_filtering=True,id=d1b1a282-3a38-454d-bc99-885b75bac9cc,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1b1a282-3a')#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.765 2 DEBUG nova.virt.libvirt.vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='vir
tio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:26:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "address": "fa:16:3e:fa:c6:b6", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa619a50d-db", "ovs_interfaceid": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.765 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "address": "fa:16:3e:fa:c6:b6", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa619a50d-db", "ovs_interfaceid": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.766 2 DEBUG nova.network.os_vif_util [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=a619a50d-dbe2-4780-a273-9b1db89a98f7,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa619a50d-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.766 2 DEBUG os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=a619a50d-dbe2-4780-a273-9b1db89a98f7,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa619a50d-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.767 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.767 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.769 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa619a50d-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.769 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa619a50d-db, col_values=(('external_ids', {'iface-id': 'a619a50d-dbe2-4780-a273-9b1db89a98f7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:c6:b6', 'vm-uuid': '776370c1-1213-4676-b85e-ce1c0491afc6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 NetworkManager[44907]: <info>  [1759408008.7709] manager: (tapa619a50d-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.791 2 INFO os_vif [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=a619a50d-dbe2-4780-a273-9b1db89a98f7,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa619a50d-db')#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.870 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.871 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.871 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] No VIF found with MAC fa:16:3e:94:a5:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.871 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] No VIF found with MAC fa:16:3e:26:0b:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.873 2 INFO nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Using config drive#033[00m
Oct  2 08:26:48 np0005466031 nova_compute[235803]: 2025-10-02 12:26:48.901 2 DEBUG nova.storage.rbd_utils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] rbd image 776370c1-1213-4676-b85e-ce1c0491afc6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:26:49 np0005466031 nova_compute[235803]: 2025-10-02 12:26:49.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:49 np0005466031 nova_compute[235803]: 2025-10-02 12:26:49.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:26:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:49.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:49 np0005466031 nova_compute[235803]: 2025-10-02 12:26:49.761 2 INFO nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Creating config drive at /var/lib/nova/instances/776370c1-1213-4676-b85e-ce1c0491afc6/disk.config#033[00m
Oct  2 08:26:49 np0005466031 nova_compute[235803]: 2025-10-02 12:26:49.766 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/776370c1-1213-4676-b85e-ce1c0491afc6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7czbqhbh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:49 np0005466031 nova_compute[235803]: 2025-10-02 12:26:49.897 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/776370c1-1213-4676-b85e-ce1c0491afc6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7czbqhbh" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:49 np0005466031 nova_compute[235803]: 2025-10-02 12:26:49.921 2 DEBUG nova.storage.rbd_utils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] rbd image 776370c1-1213-4676-b85e-ce1c0491afc6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:26:49 np0005466031 nova_compute[235803]: 2025-10-02 12:26:49.923 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/776370c1-1213-4676-b85e-ce1c0491afc6/disk.config 776370c1-1213-4676-b85e-ce1c0491afc6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:50 np0005466031 nova_compute[235803]: 2025-10-02 12:26:50.409 2 DEBUG oslo_concurrency.processutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/776370c1-1213-4676-b85e-ce1c0491afc6/disk.config 776370c1-1213-4676-b85e-ce1c0491afc6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:50 np0005466031 nova_compute[235803]: 2025-10-02 12:26:50.410 2 INFO nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Deleting local config drive /var/lib/nova/instances/776370c1-1213-4676-b85e-ce1c0491afc6/disk.config because it was imported into RBD.#033[00m
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.4612] manager: (tapc31a45fc-37): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.4737] manager: (tapdd45e845-24): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Oct  2 08:26:50 np0005466031 kernel: tapc31a45fc-37: entered promiscuous mode
Oct  2 08:26:50 np0005466031 nova_compute[235803]: 2025-10-02 12:26:50.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00146|binding|INFO|Claiming lport c31a45fc-37b9-4809-89b1-839d4e85765d for this chassis.
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00147|binding|INFO|c31a45fc-37b9-4809-89b1-839d4e85765d: Claiming fa:16:3e:94:a5:91 10.100.0.3
Oct  2 08:26:50 np0005466031 kernel: tap705ea63d-4c: entered promiscuous mode
Oct  2 08:26:50 np0005466031 kernel: tapdd45e845-24: entered promiscuous mode
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.493 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:a5:91 10.100.0.3'], port_security=['fa:16:3e:94:a5:91 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1afc59-3ec5-4518-a68b-f8ab041976c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63f924af-92b6-418b-91c6-d81a9ac979c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a53c3168-1ef0-4852-abb6-568d97f42365, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=c31a45fc-37b9-4809-89b1-839d4e85765d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.494 141898 INFO neutron.agent.ovn.metadata.agent [-] Port c31a45fc-37b9-4809-89b1-839d4e85765d in datapath 6d1afc59-3ec5-4518-a68b-f8ab041976c5 bound to our chassis#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.496 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1afc59-3ec5-4518-a68b-f8ab041976c5#033[00m
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.4982] manager: (tap705ea63d-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Oct  2 08:26:50 np0005466031 systemd-udevd[261111]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:26:50 np0005466031 systemd-udevd[261113]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:26:50 np0005466031 systemd-udevd[261114]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:26:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5062] manager: (tap006d1393-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Oct  2 08:26:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:50.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:50 np0005466031 systemd-udevd[261121]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.512 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a775b434-00b6-4798-9c5a-64bb354bf6cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.513 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d1afc59-31 in ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5150] device (tapc31a45fc-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.515 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d1afc59-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.515 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[810f5765-8a19-4d5d-9a48-e84628d4d085]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5165] device (tapdd45e845-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.517 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ab5a8976-0fef-41c5-893e-b6c1d27d2d6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5183] device (tapc31a45fc-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5191] device (tapdd45e845-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5198] device (tap705ea63d-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5212] device (tap705ea63d-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.531 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec60c0a-2a07-427e-919d-482299293142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5371] manager: (tap22f1362c-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5567] manager: (tapd1b1a282-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.561 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[18d0d805-8771-42e0-a37e-ce909d3e4177]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 nova_compute[235803]: 2025-10-02 12:26:50.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00148|binding|INFO|Claiming lport 705ea63d-4c9b-450a-ac81-c5bf6ef0c274 for this chassis.
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00149|binding|INFO|705ea63d-4c9b-450a-ac81-c5bf6ef0c274: Claiming fa:16:3e:7a:c5:38 10.1.1.81
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00150|binding|INFO|Claiming lport dd45e845-2479-49a6-a571-33984e911f3c for this chassis.
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00151|binding|INFO|dd45e845-2479-49a6-a571-33984e911f3c: Claiming fa:16:3e:dc:4a:42 10.1.1.166
Oct  2 08:26:50 np0005466031 kernel: tap22f1362c-d6: entered promiscuous mode
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5701] device (tap22f1362c-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:26:50 np0005466031 kernel: tapd1b1a282-3a: entered promiscuous mode
Oct  2 08:26:50 np0005466031 kernel: tap006d1393-a1: entered promiscuous mode
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.570 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:c5:38 10.1.1.81'], port_security=['fa:16:3e:7a:c5:38 10.1.1.81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-511179794', 'neutron:cidrs': '10.1.1.81/24', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-511179794', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '69e120d5-0955-4ba9-b571-b55e164419d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a508ac-640c-4078-bef8-1202d246fb47, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=705ea63d-4c9b-450a-ac81-c5bf6ef0c274) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.572 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:4a:42 10.1.1.166'], port_security=['fa:16:3e:dc:4a:42 10.1.1.166'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-942090899', 'neutron:cidrs': '10.1.1.166/24', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-942090899', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '69e120d5-0955-4ba9-b571-b55e164419d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a508ac-640c-4078-bef8-1202d246fb47, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=dd45e845-2479-49a6-a571-33984e911f3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5747] device (tap006d1393-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5759] manager: (tapa619a50d-db): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Oct  2 08:26:50 np0005466031 nova_compute[235803]: 2025-10-02 12:26:50.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5762] device (tap22f1362c-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5766] device (tapd1b1a282-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5771] device (tap006d1393-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5772] device (tapd1b1a282-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:26:50 np0005466031 kernel: tapa619a50d-db: entered promiscuous mode
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5849] device (tapa619a50d-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.5858] device (tapa619a50d-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00152|binding|INFO|Claiming lport a619a50d-dbe2-4780-a273-9b1db89a98f7 for this chassis.
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00153|binding|INFO|a619a50d-dbe2-4780-a273-9b1db89a98f7: Claiming fa:16:3e:fa:c6:b6 10.2.2.200
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00154|binding|INFO|Claiming lport 006d1393-a12a-44ea-9d1c-ba017fde9058 for this chassis.
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00155|binding|INFO|006d1393-a12a-44ea-9d1c-ba017fde9058: Claiming fa:16:3e:e6:bc:17 10.1.1.82
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00156|binding|INFO|Claiming lport 22f1362c-d698-4f08-b8a3-4a4f609ef2b5 for this chassis.
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00157|binding|INFO|22f1362c-d698-4f08-b8a3-4a4f609ef2b5: Claiming fa:16:3e:26:0b:3e 10.1.1.157
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00158|binding|INFO|Claiming lport d1b1a282-3a38-454d-bc99-885b75bac9cc for this chassis.
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00159|binding|INFO|d1b1a282-3a38-454d-bc99-885b75bac9cc: Claiming fa:16:3e:d7:7d:47 10.2.2.100
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00160|binding|INFO|Setting lport c31a45fc-37b9-4809-89b1-839d4e85765d ovn-installed in OVS
Oct  2 08:26:50 np0005466031 nova_compute[235803]: 2025-10-02 12:26:50.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:50 np0005466031 nova_compute[235803]: 2025-10-02 12:26:50.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:50 np0005466031 nova_compute[235803]: 2025-10-02 12:26:50.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.597 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[1951af70-d3c2-44f6-b0e1-894bb3427f45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00161|binding|INFO|Setting lport c31a45fc-37b9-4809-89b1-839d4e85765d up in Southbound
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.601 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:c6:b6 10.2.2.200'], port_security=['fa:16:3e:fa:c6:b6 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63f924af-92b6-418b-91c6-d81a9ac979c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbdab1e7-04c3-4962-ba1f-23312665b37c, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=a619a50d-dbe2-4780-a273-9b1db89a98f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.602 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:bc:17 10.1.1.82'], port_security=['fa:16:3e:e6:bc:17 10.1.1.82'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.82/24', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63f924af-92b6-418b-91c6-d81a9ac979c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a508ac-640c-4078-bef8-1202d246fb47, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=006d1393-a12a-44ea-9d1c-ba017fde9058) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.604 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:7d:47 10.2.2.100'], port_security=['fa:16:3e:d7:7d:47 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63f924af-92b6-418b-91c6-d81a9ac979c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbdab1e7-04c3-4962-ba1f-23312665b37c, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=d1b1a282-3a38-454d-bc99-885b75bac9cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.605 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:0b:3e 10.1.1.157'], port_security=['fa:16:3e:26:0b:3e 10.1.1.157'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.157/24', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63f924af-92b6-418b-91c6-d81a9ac979c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a508ac-640c-4078-bef8-1202d246fb47, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=22f1362c-d698-4f08-b8a3-4a4f609ef2b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.6072] manager: (tap6d1afc59-30): new Veth device (/org/freedesktop/NetworkManager/Devices/93)
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.606 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ec23a528-e568-4fe3-a46d-821ae9d67b3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 systemd-machined[192227]: New machine qemu-22-instance-00000039.
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.636 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6073cd-0c76-4abc-8bd4-6676bb16f97e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.639 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e1157ffa-2fbe-46f6-bf0e-afce7e4961fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 systemd[1]: Started Virtual Machine qemu-22-instance-00000039.
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.6625] device (tap6d1afc59-30): carrier: link connected
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.666 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[3867689b-cbd3-4938-bb4e-ff318ac97392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.683 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[270d3ac4-04bd-416c-9521-201e2a64703d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1afc59-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:05:3d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586625, 'reachable_time': 39125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261160, 'error': None, 'target': 'ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.700 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d5eeb486-4792-4ea1-8820-1a0b815b915f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:53d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586625, 'tstamp': 586625}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261165, 'error': None, 'target': 'ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 nova_compute[235803]: 2025-10-02 12:26:50.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.715 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[42ba0645-3391-42fd-b25e-578132271c15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1afc59-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:05:3d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586625, 'reachable_time': 39125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261173, 'error': None, 'target': 'ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00162|binding|INFO|Setting lport d1b1a282-3a38-454d-bc99-885b75bac9cc ovn-installed in OVS
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00163|binding|INFO|Setting lport d1b1a282-3a38-454d-bc99-885b75bac9cc up in Southbound
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00164|binding|INFO|Setting lport a619a50d-dbe2-4780-a273-9b1db89a98f7 ovn-installed in OVS
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00165|binding|INFO|Setting lport a619a50d-dbe2-4780-a273-9b1db89a98f7 up in Southbound
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00166|binding|INFO|Setting lport dd45e845-2479-49a6-a571-33984e911f3c ovn-installed in OVS
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00167|binding|INFO|Setting lport dd45e845-2479-49a6-a571-33984e911f3c up in Southbound
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00168|binding|INFO|Setting lport 006d1393-a12a-44ea-9d1c-ba017fde9058 ovn-installed in OVS
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00169|binding|INFO|Setting lport 006d1393-a12a-44ea-9d1c-ba017fde9058 up in Southbound
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00170|binding|INFO|Setting lport 705ea63d-4c9b-450a-ac81-c5bf6ef0c274 ovn-installed in OVS
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00171|binding|INFO|Setting lport 705ea63d-4c9b-450a-ac81-c5bf6ef0c274 up in Southbound
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00172|binding|INFO|Setting lport 22f1362c-d698-4f08-b8a3-4a4f609ef2b5 ovn-installed in OVS
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00173|binding|INFO|Setting lport 22f1362c-d698-4f08-b8a3-4a4f609ef2b5 up in Southbound
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.746 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a67ce196-24a6-468b-b85b-f46c96d0c676]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.802 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf1f8a6-1159-4503-807e-d78da70bf976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.804 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1afc59-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.804 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.805 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1afc59-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:50 np0005466031 nova_compute[235803]: 2025-10-02 12:26:50.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:50 np0005466031 kernel: tap6d1afc59-30: entered promiscuous mode
Oct  2 08:26:50 np0005466031 NetworkManager[44907]: <info>  [1759408010.8074] manager: (tap6d1afc59-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.809 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1afc59-30, col_values=(('external_ids', {'iface-id': '94f1acad-3082-4bf0-97ec-ca1be0eff0d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:50Z|00174|binding|INFO|Releasing lport 94f1acad-3082-4bf0-97ec-ca1be0eff0d8 from this chassis (sb_readonly=0)
Oct  2 08:26:50 np0005466031 nova_compute[235803]: 2025-10-02 12:26:50.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.823 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d1afc59-3ec5-4518-a68b-f8ab041976c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d1afc59-3ec5-4518-a68b-f8ab041976c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.824 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[72df20cb-6953-43d6-8305-1e629cf6f2a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.825 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-6d1afc59-3ec5-4518-a68b-f8ab041976c5
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/6d1afc59-3ec5-4518-a68b-f8ab041976c5.pid.haproxy
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 6d1afc59-3ec5-4518-a68b-f8ab041976c5
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:26:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:50.826 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5', 'env', 'PROCESS_TAG=haproxy-6d1afc59-3ec5-4518-a68b-f8ab041976c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d1afc59-3ec5-4518-a68b-f8ab041976c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:26:51 np0005466031 podman[261304]: 2025-10-02 12:26:51.228261744 +0000 UTC m=+0.115428542 container create 5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:26:51 np0005466031 podman[261304]: 2025-10-02 12:26:51.136255409 +0000 UTC m=+0.023422227 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.297 2 DEBUG nova.compute.manager [req-3118e18b-e726-4df6-8d31-91bbbaa5d2ce req-6733dd44-f006-4ec2-845d-e6f8233d51a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-c31a45fc-37b9-4809-89b1-839d4e85765d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.298 2 DEBUG oslo_concurrency.lockutils [req-3118e18b-e726-4df6-8d31-91bbbaa5d2ce req-6733dd44-f006-4ec2-845d-e6f8233d51a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.298 2 DEBUG oslo_concurrency.lockutils [req-3118e18b-e726-4df6-8d31-91bbbaa5d2ce req-6733dd44-f006-4ec2-845d-e6f8233d51a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.298 2 DEBUG oslo_concurrency.lockutils [req-3118e18b-e726-4df6-8d31-91bbbaa5d2ce req-6733dd44-f006-4ec2-845d-e6f8233d51a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.299 2 DEBUG nova.compute.manager [req-3118e18b-e726-4df6-8d31-91bbbaa5d2ce req-6733dd44-f006-4ec2-845d-e6f8233d51a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Processing event network-vif-plugged-c31a45fc-37b9-4809-89b1-839d4e85765d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:26:51 np0005466031 systemd[1]: Started libpod-conmon-5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109.scope.
Oct  2 08:26:51 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:26:51 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff5417e5d553644566196df5420c1804747ad6fd417687276c37914c0258b09/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:26:51 np0005466031 podman[261304]: 2025-10-02 12:26:51.368387766 +0000 UTC m=+0.255554574 container init 5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:26:51 np0005466031 podman[261304]: 2025-10-02 12:26:51.374423117 +0000 UTC m=+0.261589915 container start 5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:26:51 np0005466031 neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5[261349]: [NOTICE]   (261353) : New worker (261355) forked
Oct  2 08:26:51 np0005466031 neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5[261349]: [NOTICE]   (261353) : Loading success.
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.434 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 705ea63d-4c9b-450a-ac81-c5bf6ef0c274 in datapath fdb7aec6-8fa5-4966-aee1-bf0ccd52182b unbound from our chassis#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.436 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fdb7aec6-8fa5-4966-aee1-bf0ccd52182b#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.448 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2d80be-8d81-4954-8337-9caf9e41fb35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.449 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfdb7aec6-81 in ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.451 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfdb7aec6-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.451 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d213f108-79f3-468c-bdfb-73e75cd81361]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.452 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e81f4142-8050-4cc2-a890-bf6130a91c05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.466 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[616b8c9d-de97-4413-8ff0-6d5aeed60824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.489 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3314f3-2465-4c33-9923-c0367b988959]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.519 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[42d356ea-a106-4e4b-84c8-f12d8a60fa34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 NetworkManager[44907]: <info>  [1759408011.5254] manager: (tapfdb7aec6-80): new Veth device (/org/freedesktop/NetworkManager/Devices/95)
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.524 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb24332-fc1a-47cf-9a8b-6d37ab7fde3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.557 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[2567190c-a7e7-4e02-a62e-7706f863e0d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.560 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[82c92b5f-c0f7-45df-81e5-cf4a495f39fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 NetworkManager[44907]: <info>  [1759408011.5837] device (tapfdb7aec6-80): carrier: link connected
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.588 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e49b78-80b5-487f-b529-5768aa9de61f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.606 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5a201428-2882-4975-93bc-8a2f90c3f469]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfdb7aec6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:9e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586717, 'reachable_time': 20852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261374, 'error': None, 'target': 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.621 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[13271d1f-796e-4c1c-aa14-e439636690e3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:9ed9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586717, 'tstamp': 586717}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261375, 'error': None, 'target': 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.642 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9921a8-6ba3-4bfe-986b-6189426df493]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfdb7aec6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:9e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586717, 'reachable_time': 20852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261376, 'error': None, 'target': 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:51.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.674 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd05a1d-a6d3-4d03-b09a-29ff96f9b9aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.730 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c76e6b6d-3725-4daa-8a7d-e24ff9d3c1f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.732 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdb7aec6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.732 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.733 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdb7aec6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:51 np0005466031 NetworkManager[44907]: <info>  [1759408011.7359] manager: (tapfdb7aec6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Oct  2 08:26:51 np0005466031 kernel: tapfdb7aec6-80: entered promiscuous mode
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.739 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfdb7aec6-80, col_values=(('external_ids', {'iface-id': 'b09e2acf-3bb2-4302-9b2a-c5cd2fbf3dd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:51Z|00175|binding|INFO|Releasing lport b09e2acf-3bb2-4302-9b2a-c5cd2fbf3dd7 from this chassis (sb_readonly=0)
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.750 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408011.7504597, 776370c1-1213-4676-b85e-ce1c0491afc6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.751 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] VM Started (Lifecycle Event)#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.756 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fdb7aec6-8fa5-4966-aee1-bf0ccd52182b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fdb7aec6-8fa5-4966-aee1-bf0ccd52182b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.757 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[319342a9-ac35-4d8b-9dfc-6e7f2161e925]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.758 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/fdb7aec6-8fa5-4966-aee1-bf0ccd52182b.pid.haproxy
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID fdb7aec6-8fa5-4966-aee1-bf0ccd52182b
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:26:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:51.758 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'env', 'PROCESS_TAG=haproxy-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fdb7aec6-8fa5-4966-aee1-bf0ccd52182b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.779 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.783 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408011.7512946, 776370c1-1213-4676-b85e-ce1c0491afc6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.783 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.830 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.833 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:26:51 np0005466031 nova_compute[235803]: 2025-10-02 12:26:51.862 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:26:52 np0005466031 podman[261409]: 2025-10-02 12:26:52.109749794 +0000 UTC m=+0.058301868 container create c6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:26:52 np0005466031 systemd[1]: Started libpod-conmon-c6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537.scope.
Oct  2 08:26:52 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:26:52 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c593c5188fca2c53c727a8497835ca525b617d4ec51d82875bc1e4acc0ff54fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:26:52 np0005466031 podman[261409]: 2025-10-02 12:26:52.075011627 +0000 UTC m=+0.023563791 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:26:52 np0005466031 podman[261409]: 2025-10-02 12:26:52.176755368 +0000 UTC m=+0.125307472 container init c6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:26:52 np0005466031 podman[261409]: 2025-10-02 12:26:52.181846613 +0000 UTC m=+0.130398687 container start c6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:26:52 np0005466031 neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b[261424]: [NOTICE]   (261428) : New worker (261430) forked
Oct  2 08:26:52 np0005466031 neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b[261424]: [NOTICE]   (261428) : Loading success.
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.233 141898 INFO neutron.agent.ovn.metadata.agent [-] Port dd45e845-2479-49a6-a571-33984e911f3c in datapath fdb7aec6-8fa5-4966-aee1-bf0ccd52182b unbound from our chassis#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.235 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fdb7aec6-8fa5-4966-aee1-bf0ccd52182b#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.247 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a53ff0-fdff-4552-81fc-43959617977f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.270 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f94627fa-55c5-409a-a44f-e4331638d751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.272 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[514f51f5-0cc9-453e-b9dd-1ac9ce3f259f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.292 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e8826b27-ff26-4f5d-bffc-551ca1811bf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.306 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bdaab00b-6669-4317-b9a0-62bb37748a3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfdb7aec6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:9e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 196, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 196, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586717, 'reachable_time': 20852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261444, 'error': None, 'target': 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.318 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[51897040-61e3-4e8e-93cb-15223af815a1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfdb7aec6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586728, 'tstamp': 586728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261445, 'error': None, 'target': 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tapfdb7aec6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586731, 'tstamp': 586731}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261445, 'error': None, 'target': 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.320 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdb7aec6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.324 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdb7aec6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.324 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.324 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfdb7aec6-80, col_values=(('external_ids', {'iface-id': 'b09e2acf-3bb2-4302-9b2a-c5cd2fbf3dd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.324 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.325 141898 INFO neutron.agent.ovn.metadata.agent [-] Port a619a50d-dbe2-4780-a273-9b1db89a98f7 in datapath b6de4fd3-3bc2-47d6-8842-1ef0515c43e0 unbound from our chassis#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.327 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6de4fd3-3bc2-47d6-8842-1ef0515c43e0#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.337 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f92629d3-95d4-4e6f-889b-3aaf80dc0221]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.338 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb6de4fd3-31 in ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.340 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb6de4fd3-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.340 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[adc95976-b604-4dc0-b7a5-a3582bc2f2d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.342 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[242eaccf-7297-403e-b3f0-834fbcc37f59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.351 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[818c09ca-76df-4bad-a11d-8cd0bff8694f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.359 2 DEBUG nova.network.neutron [req-be4ef26a-ecb6-41ba-ab62-e178337685ea req-97fcef74-dfe7-4a6e-90db-f4eeda0d5fd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updated VIF entry in instance network info cache for port a619a50d-dbe2-4780-a273-9b1db89a98f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.360 2 DEBUG nova.network.neutron [req-be4ef26a-ecb6-41ba-ab62-e178337685ea req-97fcef74-dfe7-4a6e-90db-f4eeda0d5fd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [{"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "address": "fa:16:3e:26:0b:3e", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f1362c-d6", "ovs_interfaceid": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "address": "fa:16:3e:d7:7d:47", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1b1a282-3a", "ovs_interfaceid": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "address": "fa:16:3e:fa:c6:b6", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa619a50d-db", "ovs_interfaceid": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.363 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[904d6a0d-1689-4305-9dec-cf00b9e440ee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.384 2 DEBUG oslo_concurrency.lockutils [req-be4ef26a-ecb6-41ba-ab62-e178337685ea req-97fcef74-dfe7-4a6e-90db-f4eeda0d5fd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.388 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b729a325-8a6e-4bcf-8d8b-7c740dd9c8d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 NetworkManager[44907]: <info>  [1759408012.3967] manager: (tapb6de4fd3-30): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.395 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4be0be2b-c4a6-401e-8644-f23087e93a69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.431 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[000325b9-7efc-4725-ad9c-80c811f1911f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.434 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[fab077fc-e525-4da8-8ba0-44f37a21d0fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.450 2 DEBUG nova.compute.manager [req-0882dc76-c3ed-48e6-ba4b-cd13b0009f0a req-fa333476-718f-4643-bfc2-df36332cec01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-705ea63d-4c9b-450a-ac81-c5bf6ef0c274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.451 2 DEBUG oslo_concurrency.lockutils [req-0882dc76-c3ed-48e6-ba4b-cd13b0009f0a req-fa333476-718f-4643-bfc2-df36332cec01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.451 2 DEBUG oslo_concurrency.lockutils [req-0882dc76-c3ed-48e6-ba4b-cd13b0009f0a req-fa333476-718f-4643-bfc2-df36332cec01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.451 2 DEBUG oslo_concurrency.lockutils [req-0882dc76-c3ed-48e6-ba4b-cd13b0009f0a req-fa333476-718f-4643-bfc2-df36332cec01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.451 2 DEBUG nova.compute.manager [req-0882dc76-c3ed-48e6-ba4b-cd13b0009f0a req-fa333476-718f-4643-bfc2-df36332cec01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Processing event network-vif-plugged-705ea63d-4c9b-450a-ac81-c5bf6ef0c274 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:26:52 np0005466031 NetworkManager[44907]: <info>  [1759408012.4570] device (tapb6de4fd3-30): carrier: link connected
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.463 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a82b4227-88cf-4888-bfa3-83f85c2d7685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.479 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1177fc0d-2f78-4082-bdc8-cc0a577d7c87]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6de4fd3-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:88:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586804, 'reachable_time': 25801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261456, 'error': None, 'target': 'ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.494 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf70502-fc22-4027-8d50-098873c4fbf7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:887c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586804, 'tstamp': 586804}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261457, 'error': None, 'target': 'ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:52.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.510 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d341cbef-a462-4f03-bce9-b2b338443d64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6de4fd3-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:88:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586804, 'reachable_time': 25801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261458, 'error': None, 'target': 'ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.547 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[017c28d4-7f65-4cd7-a5e8-51a06f4d94ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.607 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[89b5475f-3b8b-44ce-8847-9fdaf6625ce1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.609 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6de4fd3-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.609 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.610 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6de4fd3-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:52 np0005466031 kernel: tapb6de4fd3-30: entered promiscuous mode
Oct  2 08:26:52 np0005466031 NetworkManager[44907]: <info>  [1759408012.6126] manager: (tapb6de4fd3-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.617 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6de4fd3-30, col_values=(('external_ids', {'iface-id': 'fcdeb859-fcb6-4b02-954c-53ff74d10f3d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:52 np0005466031 ovn_controller[132413]: 2025-10-02T12:26:52Z|00176|binding|INFO|Releasing lport fcdeb859-fcb6-4b02-954c-53ff74d10f3d from this chassis (sb_readonly=0)
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.633 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b6de4fd3-3bc2-47d6-8842-1ef0515c43e0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b6de4fd3-3bc2-47d6-8842-1ef0515c43e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.633 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ff79b9a7-987f-485d-a5ef-f028145d6147]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.634 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/b6de4fd3-3bc2-47d6-8842-1ef0515c43e0.pid.haproxy
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID b6de4fd3-3bc2-47d6-8842-1ef0515c43e0
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:26:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:52.635 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0', 'env', 'PROCESS_TAG=haproxy-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b6de4fd3-3bc2-47d6-8842-1ef0515c43e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.674 2 DEBUG nova.compute.manager [req-6d705928-ed7e-4773-b3bb-7710b535453c req-0ee0db59-8d9a-4bca-80a4-8d08c248412c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-006d1393-a12a-44ea-9d1c-ba017fde9058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.675 2 DEBUG oslo_concurrency.lockutils [req-6d705928-ed7e-4773-b3bb-7710b535453c req-0ee0db59-8d9a-4bca-80a4-8d08c248412c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.675 2 DEBUG oslo_concurrency.lockutils [req-6d705928-ed7e-4773-b3bb-7710b535453c req-0ee0db59-8d9a-4bca-80a4-8d08c248412c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.675 2 DEBUG oslo_concurrency.lockutils [req-6d705928-ed7e-4773-b3bb-7710b535453c req-0ee0db59-8d9a-4bca-80a4-8d08c248412c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:52 np0005466031 nova_compute[235803]: 2025-10-02 12:26:52.675 2 DEBUG nova.compute.manager [req-6d705928-ed7e-4773-b3bb-7710b535453c req-0ee0db59-8d9a-4bca-80a4-8d08c248412c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Processing event network-vif-plugged-006d1393-a12a-44ea-9d1c-ba017fde9058 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:26:53 np0005466031 podman[261490]: 2025-10-02 12:26:53.003748539 +0000 UTC m=+0.056865777 container create e850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:26:53 np0005466031 systemd[1]: Started libpod-conmon-e850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6.scope.
Oct  2 08:26:53 np0005466031 podman[261490]: 2025-10-02 12:26:52.971151723 +0000 UTC m=+0.024268971 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:26:53 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:26:53 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca09e7ee1ffb70dba80d53fadd1c1ef28f77c79485791e247559df6a6cc100d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:26:53 np0005466031 podman[261490]: 2025-10-02 12:26:53.105570883 +0000 UTC m=+0.158688161 container init e850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:26:53 np0005466031 podman[261490]: 2025-10-02 12:26:53.115203097 +0000 UTC m=+0.168320335 container start e850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:26:53 np0005466031 neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0[261505]: [NOTICE]   (261509) : New worker (261511) forked
Oct  2 08:26:53 np0005466031 neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0[261505]: [NOTICE]   (261509) : Loading success.
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.195 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 006d1393-a12a-44ea-9d1c-ba017fde9058 in datapath fdb7aec6-8fa5-4966-aee1-bf0ccd52182b unbound from our chassis#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.197 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fdb7aec6-8fa5-4966-aee1-bf0ccd52182b#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.215 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3c33fb9c-96f9-4a35-8732-e35090713d5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.251 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[474439bd-dc7a-4dd2-b046-5d92fb2b6e62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.258 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d86a296c-7fde-406a-9939-703fe9496e35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.293 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0be7386c-00cd-4c01-9585-b48200ad1ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.313 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0ccf627b-380f-4f4f-ad6a-1b6b4b04c745]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfdb7aec6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:9e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 612, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 612, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586717, 'reachable_time': 20852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 528, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 528, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261548, 'error': None, 'target': 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.329 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8ad71a-079a-46ab-8c77-98605fcc6c00]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfdb7aec6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586728, 'tstamp': 586728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261550, 'error': None, 'target': 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tapfdb7aec6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586731, 'tstamp': 586731}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261550, 'error': None, 'target': 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.331 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdb7aec6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.334 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdb7aec6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.334 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.335 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfdb7aec6-80, col_values=(('external_ids', {'iface-id': 'b09e2acf-3bb2-4302-9b2a-c5cd2fbf3dd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.335 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.336 141898 INFO neutron.agent.ovn.metadata.agent [-] Port d1b1a282-3a38-454d-bc99-885b75bac9cc in datapath b6de4fd3-3bc2-47d6-8842-1ef0515c43e0 unbound from our chassis#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.338 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6de4fd3-3bc2-47d6-8842-1ef0515c43e0#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.354 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec7cbc3-1d6e-4fb2-9259-a696e2a0f805]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.386 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e51a3d4e-31ea-4b3e-b2fb-2b88887ebf26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.389 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0488fb79-807c-4337-be31-1c84b0839f0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.413 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[af886d26-7bc6-4364-a218-2466d9e01419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.420 2 DEBUG nova.compute.manager [req-7f2f3996-1329-421b-826a-8fcd7e3d3c14 req-8fbd2ea1-488e-46e2-9463-22adab76fd94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-c31a45fc-37b9-4809-89b1-839d4e85765d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.420 2 DEBUG oslo_concurrency.lockutils [req-7f2f3996-1329-421b-826a-8fcd7e3d3c14 req-8fbd2ea1-488e-46e2-9463-22adab76fd94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.420 2 DEBUG oslo_concurrency.lockutils [req-7f2f3996-1329-421b-826a-8fcd7e3d3c14 req-8fbd2ea1-488e-46e2-9463-22adab76fd94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.421 2 DEBUG oslo_concurrency.lockutils [req-7f2f3996-1329-421b-826a-8fcd7e3d3c14 req-8fbd2ea1-488e-46e2-9463-22adab76fd94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.421 2 DEBUG nova.compute.manager [req-7f2f3996-1329-421b-826a-8fcd7e3d3c14 req-8fbd2ea1-488e-46e2-9463-22adab76fd94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No event matching network-vif-plugged-c31a45fc-37b9-4809-89b1-839d4e85765d in dict_keys([('network-vif-plugged', 'dd45e845-2479-49a6-a571-33984e911f3c'), ('network-vif-plugged', '22f1362c-d698-4f08-b8a3-4a4f609ef2b5'), ('network-vif-plugged', 'd1b1a282-3a38-454d-bc99-885b75bac9cc'), ('network-vif-plugged', 'a619a50d-dbe2-4780-a273-9b1db89a98f7')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.421 2 WARNING nova.compute.manager [req-7f2f3996-1329-421b-826a-8fcd7e3d3c14 req-8fbd2ea1-488e-46e2-9463-22adab76fd94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-c31a45fc-37b9-4809-89b1-839d4e85765d for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.428 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c225cb24-2adf-46f4-a5fe-337a1919d536]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6de4fd3-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:88:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586804, 'reachable_time': 25801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261582, 'error': None, 'target': 'ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.442 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[79b63200-2ba2-4075-99c7-758f39c6de40]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.2.2.2'], ['IFA_LOCAL', '10.2.2.2'], ['IFA_BROADCAST', '10.2.2.255'], ['IFA_LABEL', 'tapb6de4fd3-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586816, 'tstamp': 586816}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261583, 'error': None, 'target': 'ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb6de4fd3-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586819, 'tstamp': 586819}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261583, 'error': None, 'target': 'ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.443 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6de4fd3-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.448 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6de4fd3-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.449 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.449 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6de4fd3-30, col_values=(('external_ids', {'iface-id': 'fcdeb859-fcb6-4b02-954c-53ff74d10f3d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.449 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.450 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 22f1362c-d698-4f08-b8a3-4a4f609ef2b5 in datapath fdb7aec6-8fa5-4966-aee1-bf0ccd52182b unbound from our chassis#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.452 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fdb7aec6-8fa5-4966-aee1-bf0ccd52182b#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.466 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9936eacc-bcbe-4146-b550-d64b0e617f28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.495 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[5e55a69f-0e40-4e76-8734-8d839a439d97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.497 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd66502-0571-420e-9ffc-30e270993144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.520 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[2e768f5a-8afc-4606-aa37-8ce5417c4d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.537 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[abf1733c-e8ba-499d-9125-63206d089f79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfdb7aec6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:9e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 742, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 742, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586717, 'reachable_time': 20852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 644, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 644, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261589, 'error': None, 'target': 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.552 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d7400fe9-6597-4a1f-a82e-e09f24d58f64]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfdb7aec6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586728, 'tstamp': 586728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261590, 'error': None, 'target': 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tapfdb7aec6-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586731, 'tstamp': 586731}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261590, 'error': None, 'target': 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.554 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdb7aec6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.557 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdb7aec6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.557 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.557 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfdb7aec6-80, col_values=(('external_ids', {'iface-id': 'b09e2acf-3bb2-4302-9b2a-c5cd2fbf3dd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:53.558 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:26:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:53.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:53 np0005466031 nova_compute[235803]: 2025-10-02 12:26:53.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:26:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:54.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:26:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:26:54.912 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.497 2 DEBUG nova.compute.manager [req-ebeda758-83b6-400f-950b-4422fd869e42 req-7246613b-175f-470a-abe0-5f96f0d541d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-705ea63d-4c9b-450a-ac81-c5bf6ef0c274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.498 2 DEBUG oslo_concurrency.lockutils [req-ebeda758-83b6-400f-950b-4422fd869e42 req-7246613b-175f-470a-abe0-5f96f0d541d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.498 2 DEBUG oslo_concurrency.lockutils [req-ebeda758-83b6-400f-950b-4422fd869e42 req-7246613b-175f-470a-abe0-5f96f0d541d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.498 2 DEBUG oslo_concurrency.lockutils [req-ebeda758-83b6-400f-950b-4422fd869e42 req-7246613b-175f-470a-abe0-5f96f0d541d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.498 2 DEBUG nova.compute.manager [req-ebeda758-83b6-400f-950b-4422fd869e42 req-7246613b-175f-470a-abe0-5f96f0d541d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No event matching network-vif-plugged-705ea63d-4c9b-450a-ac81-c5bf6ef0c274 in dict_keys([('network-vif-plugged', 'dd45e845-2479-49a6-a571-33984e911f3c'), ('network-vif-plugged', '22f1362c-d698-4f08-b8a3-4a4f609ef2b5'), ('network-vif-plugged', 'd1b1a282-3a38-454d-bc99-885b75bac9cc'), ('network-vif-plugged', 'a619a50d-dbe2-4780-a273-9b1db89a98f7')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.499 2 WARNING nova.compute.manager [req-ebeda758-83b6-400f-950b-4422fd869e42 req-7246613b-175f-470a-abe0-5f96f0d541d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-705ea63d-4c9b-450a-ac81-c5bf6ef0c274 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.602 2 DEBUG nova.compute.manager [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-006d1393-a12a-44ea-9d1c-ba017fde9058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.602 2 DEBUG oslo_concurrency.lockutils [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.602 2 DEBUG oslo_concurrency.lockutils [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.603 2 DEBUG oslo_concurrency.lockutils [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.603 2 DEBUG nova.compute.manager [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No event matching network-vif-plugged-006d1393-a12a-44ea-9d1c-ba017fde9058 in dict_keys([('network-vif-plugged', 'dd45e845-2479-49a6-a571-33984e911f3c'), ('network-vif-plugged', '22f1362c-d698-4f08-b8a3-4a4f609ef2b5'), ('network-vif-plugged', 'd1b1a282-3a38-454d-bc99-885b75bac9cc'), ('network-vif-plugged', 'a619a50d-dbe2-4780-a273-9b1db89a98f7')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.603 2 WARNING nova.compute.manager [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-006d1393-a12a-44ea-9d1c-ba017fde9058 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.603 2 DEBUG nova.compute.manager [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-d1b1a282-3a38-454d-bc99-885b75bac9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.603 2 DEBUG oslo_concurrency.lockutils [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.604 2 DEBUG oslo_concurrency.lockutils [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.604 2 DEBUG oslo_concurrency.lockutils [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.604 2 DEBUG nova.compute.manager [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Processing event network-vif-plugged-d1b1a282-3a38-454d-bc99-885b75bac9cc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.604 2 DEBUG nova.compute.manager [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-d1b1a282-3a38-454d-bc99-885b75bac9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.604 2 DEBUG oslo_concurrency.lockutils [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.605 2 DEBUG oslo_concurrency.lockutils [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.605 2 DEBUG oslo_concurrency.lockutils [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.605 2 DEBUG nova.compute.manager [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No event matching network-vif-plugged-d1b1a282-3a38-454d-bc99-885b75bac9cc in dict_keys([('network-vif-plugged', 'dd45e845-2479-49a6-a571-33984e911f3c'), ('network-vif-plugged', '22f1362c-d698-4f08-b8a3-4a4f609ef2b5'), ('network-vif-plugged', 'a619a50d-dbe2-4780-a273-9b1db89a98f7')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 08:26:55 np0005466031 nova_compute[235803]: 2025-10-02 12:26:55.605 2 WARNING nova.compute.manager [req-033cbce6-a7c5-411e-8745-91949ec7b521 req-28b1b086-db14-4a8e-b802-6794bc59ff1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-d1b1a282-3a38-454d-bc99-885b75bac9cc for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:26:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:55.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:56 np0005466031 nova_compute[235803]: 2025-10-02 12:26:56.240 2 DEBUG nova.compute.manager [req-6f555386-cf8c-493c-829b-d3f069556098 req-67cbda1e-3d7c-46e4-96c7-90a995b8cd7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:56 np0005466031 nova_compute[235803]: 2025-10-02 12:26:56.241 2 DEBUG oslo_concurrency.lockutils [req-6f555386-cf8c-493c-829b-d3f069556098 req-67cbda1e-3d7c-46e4-96c7-90a995b8cd7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:56 np0005466031 nova_compute[235803]: 2025-10-02 12:26:56.241 2 DEBUG oslo_concurrency.lockutils [req-6f555386-cf8c-493c-829b-d3f069556098 req-67cbda1e-3d7c-46e4-96c7-90a995b8cd7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:56 np0005466031 nova_compute[235803]: 2025-10-02 12:26:56.241 2 DEBUG oslo_concurrency.lockutils [req-6f555386-cf8c-493c-829b-d3f069556098 req-67cbda1e-3d7c-46e4-96c7-90a995b8cd7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:56 np0005466031 nova_compute[235803]: 2025-10-02 12:26:56.242 2 DEBUG nova.compute.manager [req-6f555386-cf8c-493c-829b-d3f069556098 req-67cbda1e-3d7c-46e4-96c7-90a995b8cd7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Processing event network-vif-plugged-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:26:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:56.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:57.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.005477) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018005532, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2011, "num_deletes": 255, "total_data_size": 4760992, "memory_usage": 4822984, "flush_reason": "Manual Compaction"}
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018018841, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 1899287, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33788, "largest_seqno": 35794, "table_properties": {"data_size": 1892988, "index_size": 3245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16669, "raw_average_key_size": 21, "raw_value_size": 1879034, "raw_average_value_size": 2402, "num_data_blocks": 145, "num_entries": 782, "num_filter_entries": 782, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407850, "oldest_key_time": 1759407850, "file_creation_time": 1759408018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 13411 microseconds, and 8274 cpu microseconds.
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.018895) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 1899287 bytes OK
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.018919) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.020521) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.020548) EVENT_LOG_v1 {"time_micros": 1759408018020535, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.020611) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 4751916, prev total WAL file size 4751916, number of live WAL files 2.
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.022349) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(1854KB)], [63(10225KB)]
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018022392, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 12370024, "oldest_snapshot_seqno": -1}
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6019 keys, 9706520 bytes, temperature: kUnknown
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018096302, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 9706520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9666139, "index_size": 24207, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 153487, "raw_average_key_size": 25, "raw_value_size": 9557977, "raw_average_value_size": 1587, "num_data_blocks": 978, "num_entries": 6019, "num_filter_entries": 6019, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759408018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.096642) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9706520 bytes
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.097913) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.2 rd, 131.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(11.6) write-amplify(5.1) OK, records in: 6468, records dropped: 449 output_compression: NoCompression
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.097933) EVENT_LOG_v1 {"time_micros": 1759408018097924, "job": 38, "event": "compaction_finished", "compaction_time_micros": 73996, "compaction_time_cpu_micros": 40827, "output_level": 6, "num_output_files": 1, "total_output_size": 9706520, "num_input_records": 6468, "num_output_records": 6019, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018098453, "job": 38, "event": "table_file_deletion", "file_number": 65}
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018100799, "job": 38, "event": "table_file_deletion", "file_number": 63}
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.022253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.100909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.100914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.100915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.100917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:26:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:26:58.100918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.469 2 DEBUG nova.compute.manager [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-a619a50d-dbe2-4780-a273-9b1db89a98f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.469 2 DEBUG oslo_concurrency.lockutils [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.470 2 DEBUG oslo_concurrency.lockutils [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.470 2 DEBUG oslo_concurrency.lockutils [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.470 2 DEBUG nova.compute.manager [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Processing event network-vif-plugged-a619a50d-dbe2-4780-a273-9b1db89a98f7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.471 2 DEBUG nova.compute.manager [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-a619a50d-dbe2-4780-a273-9b1db89a98f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.471 2 DEBUG oslo_concurrency.lockutils [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.472 2 DEBUG oslo_concurrency.lockutils [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.472 2 DEBUG oslo_concurrency.lockutils [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.472 2 DEBUG nova.compute.manager [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No event matching network-vif-plugged-a619a50d-dbe2-4780-a273-9b1db89a98f7 in dict_keys([('network-vif-plugged', 'dd45e845-2479-49a6-a571-33984e911f3c')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.473 2 WARNING nova.compute.manager [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-a619a50d-dbe2-4780-a273-9b1db89a98f7 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.473 2 DEBUG nova.compute.manager [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-dd45e845-2479-49a6-a571-33984e911f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.474 2 DEBUG oslo_concurrency.lockutils [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.474 2 DEBUG oslo_concurrency.lockutils [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.475 2 DEBUG oslo_concurrency.lockutils [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.475 2 DEBUG nova.compute.manager [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Processing event network-vif-plugged-dd45e845-2479-49a6-a571-33984e911f3c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.476 2 DEBUG nova.compute.manager [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-dd45e845-2479-49a6-a571-33984e911f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.476 2 DEBUG oslo_concurrency.lockutils [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.477 2 DEBUG oslo_concurrency.lockutils [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.477 2 DEBUG oslo_concurrency.lockutils [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.478 2 DEBUG nova.compute.manager [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-plugged-dd45e845-2479-49a6-a571-33984e911f3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.478 2 WARNING nova.compute.manager [req-48ff6248-407b-4948-a5e0-f8c160f91e73 req-94769536-3cba-4d13-8148-405f30b962d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-dd45e845-2479-49a6-a571-33984e911f3c for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.480 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Instance event wait completed in 6 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.487 2 DEBUG nova.compute.manager [req-37322e32-d626-4189-a375-2904083025ed req-6e47a6fd-c909-4821-844c-f5c450a7d6bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.487 2 DEBUG oslo_concurrency.lockutils [req-37322e32-d626-4189-a375-2904083025ed req-6e47a6fd-c909-4821-844c-f5c450a7d6bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.487 2 DEBUG oslo_concurrency.lockutils [req-37322e32-d626-4189-a375-2904083025ed req-6e47a6fd-c909-4821-844c-f5c450a7d6bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.488 2 DEBUG oslo_concurrency.lockutils [req-37322e32-d626-4189-a375-2904083025ed req-6e47a6fd-c909-4821-844c-f5c450a7d6bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.488 2 DEBUG nova.compute.manager [req-37322e32-d626-4189-a375-2904083025ed req-6e47a6fd-c909-4821-844c-f5c450a7d6bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-plugged-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.488 2 WARNING nova.compute.manager [req-37322e32-d626-4189-a375-2904083025ed req-6e47a6fd-c909-4821-844c-f5c450a7d6bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.489 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408018.4892132, 776370c1-1213-4676-b85e-ce1c0491afc6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.489 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.491 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.495 2 INFO nova.virt.libvirt.driver [-] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Instance spawned successfully.#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.495 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:26:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:58.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.530 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.536 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.536 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.537 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.537 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.538 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.538 2 DEBUG nova.virt.libvirt.driver [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.543 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.583 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.632 2 INFO nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Took 44.90 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.633 2 DEBUG nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.739 2 INFO nova.compute.manager [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Took 52.83 seconds to build instance.#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.760 2 DEBUG oslo_concurrency.lockutils [None req-9fccf91e-10a0-4bd1-a68b-9ee1514876e8 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 53.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.761 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "776370c1-1213-4676-b85e-ce1c0491afc6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 37.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.762 2 INFO nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.762 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "776370c1-1213-4676-b85e-ce1c0491afc6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:58 np0005466031 nova_compute[235803]: 2025-10-02 12:26:58.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:26:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:59.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:27:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:00.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:27:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:01.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:27:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:02.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:27:02 np0005466031 nova_compute[235803]: 2025-10-02 12:27:02.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:02 np0005466031 NetworkManager[44907]: <info>  [1759408022.5447] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Oct  2 08:27:02 np0005466031 NetworkManager[44907]: <info>  [1759408022.5457] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct  2 08:27:02 np0005466031 nova_compute[235803]: 2025-10-02 12:27:02.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:02 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:02Z|00177|binding|INFO|Releasing lport fcdeb859-fcb6-4b02-954c-53ff74d10f3d from this chassis (sb_readonly=0)
Oct  2 08:27:02 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:02Z|00178|binding|INFO|Releasing lport 94f1acad-3082-4bf0-97ec-ca1be0eff0d8 from this chassis (sb_readonly=0)
Oct  2 08:27:02 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:02Z|00179|binding|INFO|Releasing lport b09e2acf-3bb2-4302-9b2a-c5cd2fbf3dd7 from this chassis (sb_readonly=0)
Oct  2 08:27:02 np0005466031 nova_compute[235803]: 2025-10-02 12:27:02.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:03 np0005466031 nova_compute[235803]: 2025-10-02 12:27:03.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:03 np0005466031 nova_compute[235803]: 2025-10-02 12:27:03.431 2 DEBUG nova.compute.manager [req-16cce3fa-70a6-4b90-95fc-53c355d79416 req-279dc8e0-0887-4de6-ade9-930d9cfa895a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-changed-c31a45fc-37b9-4809-89b1-839d4e85765d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:03 np0005466031 nova_compute[235803]: 2025-10-02 12:27:03.431 2 DEBUG nova.compute.manager [req-16cce3fa-70a6-4b90-95fc-53c355d79416 req-279dc8e0-0887-4de6-ade9-930d9cfa895a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing instance network info cache due to event network-changed-c31a45fc-37b9-4809-89b1-839d4e85765d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:27:03 np0005466031 nova_compute[235803]: 2025-10-02 12:27:03.432 2 DEBUG oslo_concurrency.lockutils [req-16cce3fa-70a6-4b90-95fc-53c355d79416 req-279dc8e0-0887-4de6-ade9-930d9cfa895a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:27:03 np0005466031 nova_compute[235803]: 2025-10-02 12:27:03.432 2 DEBUG oslo_concurrency.lockutils [req-16cce3fa-70a6-4b90-95fc-53c355d79416 req-279dc8e0-0887-4de6-ade9-930d9cfa895a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:27:03 np0005466031 nova_compute[235803]: 2025-10-02 12:27:03.432 2 DEBUG nova.network.neutron [req-16cce3fa-70a6-4b90-95fc-53c355d79416 req-279dc8e0-0887-4de6-ade9-930d9cfa895a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Refreshing network info cache for port c31a45fc-37b9-4809-89b1-839d4e85765d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:27:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:03.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:03 np0005466031 nova_compute[235803]: 2025-10-02 12:27:03.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:04.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:27:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:05.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:27:06 np0005466031 nova_compute[235803]: 2025-10-02 12:27:06.353 2 DEBUG nova.network.neutron [req-16cce3fa-70a6-4b90-95fc-53c355d79416 req-279dc8e0-0887-4de6-ade9-930d9cfa895a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updated VIF entry in instance network info cache for port c31a45fc-37b9-4809-89b1-839d4e85765d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:27:06 np0005466031 nova_compute[235803]: 2025-10-02 12:27:06.353 2 DEBUG nova.network.neutron [req-16cce3fa-70a6-4b90-95fc-53c355d79416 req-279dc8e0-0887-4de6-ade9-930d9cfa895a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [{"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "address": "fa:16:3e:26:0b:3e", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f1362c-d6", "ovs_interfaceid": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "address": "fa:16:3e:d7:7d:47", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1b1a282-3a", "ovs_interfaceid": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "address": "fa:16:3e:fa:c6:b6", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa619a50d-db", "ovs_interfaceid": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:27:06 np0005466031 nova_compute[235803]: 2025-10-02 12:27:06.387 2 DEBUG oslo_concurrency.lockutils [req-16cce3fa-70a6-4b90-95fc-53c355d79416 req-279dc8e0-0887-4de6-ade9-930d9cfa895a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-776370c1-1213-4676-b85e-ce1c0491afc6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:27:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:06.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:27:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:07.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:27:08 np0005466031 nova_compute[235803]: 2025-10-02 12:27:08.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:08.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:08 np0005466031 podman[261601]: 2025-10-02 12:27:08.697665636 +0000 UTC m=+0.121177905 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:27:08 np0005466031 podman[261602]: 2025-10-02 12:27:08.721434101 +0000 UTC m=+0.128984526 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct  2 08:27:08 np0005466031 nova_compute[235803]: 2025-10-02 12:27:08.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:09.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:10.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:11.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:12 np0005466031 nova_compute[235803]: 2025-10-02 12:27:12.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:12.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:13 np0005466031 nova_compute[235803]: 2025-10-02 12:27:13.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:27:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:13.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:27:13 np0005466031 nova_compute[235803]: 2025-10-02 12:27:13.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:14Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:94:a5:91 10.100.0.3
Oct  2 08:27:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:14Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:94:a5:91 10.100.0.3
Oct  2 08:27:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:14Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:c6:b6 10.2.2.200
Oct  2 08:27:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:14Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:c6:b6 10.2.2.200
Oct  2 08:27:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:14Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:4a:42 10.1.1.166
Oct  2 08:27:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:14Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:4a:42 10.1.1.166
Oct  2 08:27:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:14Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:7d:47 10.2.2.100
Oct  2 08:27:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:14Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:7d:47 10.2.2.100
Oct  2 08:27:14 np0005466031 nova_compute[235803]: 2025-10-02 12:27:14.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:27:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:14.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:27:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:14Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:0b:3e 10.1.1.157
Oct  2 08:27:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:14Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:0b:3e 10.1.1.157
Oct  2 08:27:15 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:15Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:c5:38 10.1.1.81
Oct  2 08:27:15 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:15Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:c5:38 10.1.1.81
Oct  2 08:27:15 np0005466031 radosgw[82465]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct  2 08:27:15 np0005466031 radosgw[82465]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct  2 08:27:15 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:15Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:bc:17 10.1.1.82
Oct  2 08:27:15 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:15Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:bc:17 10.1.1.82
Oct  2 08:27:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:15.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:16.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:17 np0005466031 podman[261702]: 2025-10-02 12:27:17.621165136 +0000 UTC m=+0.051043182 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:27:17 np0005466031 podman[261701]: 2025-10-02 12:27:17.638144438 +0000 UTC m=+0.070555126 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:27:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:17.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:18 np0005466031 nova_compute[235803]: 2025-10-02 12:27:18.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:18.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:18 np0005466031 nova_compute[235803]: 2025-10-02 12:27:18.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:19 np0005466031 nova_compute[235803]: 2025-10-02 12:27:19.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:19.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:20.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:21 np0005466031 nova_compute[235803]: 2025-10-02 12:27:21.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:21.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:22.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:23 np0005466031 nova_compute[235803]: 2025-10-02 12:27:23.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:27:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:23.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:27:23 np0005466031 nova_compute[235803]: 2025-10-02 12:27:23.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:24.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:25.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:25.832 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:25.832 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:25.833 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:26.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:27.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:28 np0005466031 nova_compute[235803]: 2025-10-02 12:27:28.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:28.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:28 np0005466031 nova_compute[235803]: 2025-10-02 12:27:28.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:29.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:30.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:31.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:31 np0005466031 nova_compute[235803]: 2025-10-02 12:27:31.756 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Acquiring lock "061e91f3-8228-4afb-9420-d0764c3dd7ee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:31 np0005466031 nova_compute[235803]: 2025-10-02 12:27:31.756 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:31 np0005466031 nova_compute[235803]: 2025-10-02 12:27:31.782 2 DEBUG nova.compute.manager [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:27:31 np0005466031 nova_compute[235803]: 2025-10-02 12:27:31.883 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:31 np0005466031 nova_compute[235803]: 2025-10-02 12:27:31.883 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:31 np0005466031 nova_compute[235803]: 2025-10-02 12:27:31.893 2 DEBUG nova.virt.hardware [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:27:31 np0005466031 nova_compute[235803]: 2025-10-02 12:27:31.895 2 INFO nova.compute.claims [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:27:32 np0005466031 nova_compute[235803]: 2025-10-02 12:27:32.054 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:32.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:27:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3350442704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:27:32 np0005466031 nova_compute[235803]: 2025-10-02 12:27:32.936 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.881s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:27:32 np0005466031 nova_compute[235803]: 2025-10-02 12:27:32.946 2 DEBUG nova.compute.provider_tree [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.002 2 DEBUG nova.scheduler.client.report [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.085 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.086 2 DEBUG nova.compute.manager [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.215 2 DEBUG nova.compute.manager [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.215 2 DEBUG nova.network.neutron [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.262 2 INFO nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.363 2 DEBUG nova.compute.manager [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.380 2 DEBUG nova.policy [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e24ea6fbe7394bd8b4b06dd246587041', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a7572f2170094fb7a5d6e212abf9235d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.540 2 DEBUG nova.compute.manager [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.542 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.542 2 INFO nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Creating image(s)
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.586 2 DEBUG nova.storage.rbd_utils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] rbd image 061e91f3-8228-4afb-9420-d0764c3dd7ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.634 2 DEBUG nova.storage.rbd_utils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] rbd image 061e91f3-8228-4afb-9420-d0764c3dd7ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.671 2 DEBUG nova.storage.rbd_utils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] rbd image 061e91f3-8228-4afb-9420-d0764c3dd7ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.680 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:27:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:33.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.768 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.770 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.771 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.771 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.820 2 DEBUG nova.storage.rbd_utils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] rbd image 061e91f3-8228-4afb-9420-d0764c3dd7ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.826 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 061e91f3-8228-4afb-9420-d0764c3dd7ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:27:33 np0005466031 nova_compute[235803]: 2025-10-02 12:27:33.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:27:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:34.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:27:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:35.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:36.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:36 np0005466031 nova_compute[235803]: 2025-10-02 12:27:36.678 2 DEBUG nova.network.neutron [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Successfully created port: 7271c02a-a19f-43d0-8351-bfa41c3af3e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:27:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:27:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:37.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:27:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:38.182 142025 DEBUG eventlet.wsgi.server [-] (142025) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct  2 08:27:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:38.185 142025 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Oct  2 08:27:38 np0005466031 ovn_metadata_agent[141893]: Accept: */*
Oct  2 08:27:38 np0005466031 ovn_metadata_agent[141893]: Connection: close
Oct  2 08:27:38 np0005466031 ovn_metadata_agent[141893]: Content-Type: text/plain
Oct  2 08:27:38 np0005466031 ovn_metadata_agent[141893]: Host: 169.254.169.254
Oct  2 08:27:38 np0005466031 ovn_metadata_agent[141893]: User-Agent: curl/7.84.0
Oct  2 08:27:38 np0005466031 ovn_metadata_agent[141893]: X-Forwarded-For: 10.100.0.3
Oct  2 08:27:38 np0005466031 ovn_metadata_agent[141893]: X-Ovn-Network-Id: 6d1afc59-3ec5-4518-a68b-f8ab041976c5 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct  2 08:27:38 np0005466031 nova_compute[235803]: 2025-10-02 12:27:38.248 2 DEBUG nova.network.neutron [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Successfully updated port: 7271c02a-a19f-43d0-8351-bfa41c3af3e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:27:38 np0005466031 nova_compute[235803]: 2025-10-02 12:27:38.282 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Acquiring lock "refresh_cache-061e91f3-8228-4afb-9420-d0764c3dd7ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:27:38 np0005466031 nova_compute[235803]: 2025-10-02 12:27:38.283 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Acquired lock "refresh_cache-061e91f3-8228-4afb-9420-d0764c3dd7ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:27:38 np0005466031 nova_compute[235803]: 2025-10-02 12:27:38.283 2 DEBUG nova.network.neutron [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:27:38 np0005466031 nova_compute[235803]: 2025-10-02 12:27:38.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:38.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:38 np0005466031 nova_compute[235803]: 2025-10-02 12:27:38.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:39 np0005466031 nova_compute[235803]: 2025-10-02 12:27:39.346 2 DEBUG nova.network.neutron [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:27:39 np0005466031 podman[261918]: 2025-10-02 12:27:39.639468315 +0000 UTC m=+0.063791870 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct  2 08:27:39 np0005466031 podman[261919]: 2025-10-02 12:27:39.687110498 +0000 UTC m=+0.095512644 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:39.689689) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059689750, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 647, "num_deletes": 251, "total_data_size": 1017141, "memory_usage": 1029656, "flush_reason": "Manual Compaction"}
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Oct  2 08:27:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:39.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059761693, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 670427, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35799, "largest_seqno": 36441, "table_properties": {"data_size": 667281, "index_size": 1054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7438, "raw_average_key_size": 19, "raw_value_size": 661007, "raw_average_value_size": 1699, "num_data_blocks": 47, "num_entries": 389, "num_filter_entries": 389, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408019, "oldest_key_time": 1759408019, "file_creation_time": 1759408059, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 72032 microseconds, and 2619 cpu microseconds.
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:39.761732) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 670427 bytes OK
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:39.761748) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:39.928097) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:39.928144) EVENT_LOG_v1 {"time_micros": 1759408059928133, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:39.928172) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1013585, prev total WAL file size 1059923, number of live WAL files 2.
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:39.929178) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(654KB)], [66(9479KB)]
Oct  2 08:27:39 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059929239, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 10376947, "oldest_snapshot_seqno": -1}
Oct  2 08:27:40 np0005466031 nova_compute[235803]: 2025-10-02 12:27:40.438 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 061e91f3-8228-4afb-9420-d0764c3dd7ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 5897 keys, 8521823 bytes, temperature: kUnknown
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408060502655, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 8521823, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8483321, "index_size": 22648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 151647, "raw_average_key_size": 25, "raw_value_size": 8378246, "raw_average_value_size": 1420, "num_data_blocks": 906, "num_entries": 5897, "num_filter_entries": 5897, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759408059, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:27:40 np0005466031 nova_compute[235803]: 2025-10-02 12:27:40.534 2 DEBUG nova.storage.rbd_utils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] resizing rbd image 061e91f3-8228-4afb-9420-d0764c3dd7ee_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:27:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:40.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:40.502903) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8521823 bytes
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:40.644850) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 18.1 rd, 14.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.3 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(28.2) write-amplify(12.7) OK, records in: 6408, records dropped: 511 output_compression: NoCompression
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:40.644886) EVENT_LOG_v1 {"time_micros": 1759408060644873, "job": 40, "event": "compaction_finished", "compaction_time_micros": 573479, "compaction_time_cpu_micros": 18289, "output_level": 6, "num_output_files": 1, "total_output_size": 8521823, "num_input_records": 6408, "num_output_records": 5897, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408060645178, "job": 40, "event": "table_file_deletion", "file_number": 68}
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408060646794, "job": 40, "event": "table_file_deletion", "file_number": 66}
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:39.929037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:40.646906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:40.646913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:40.646915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:40.646917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:40 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:27:40.646920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:40.756 142025 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct  2 08:27:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:40.757 142025 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 2550 time: 2.5714598#033[00m
Oct  2 08:27:40 np0005466031 haproxy-metadata-proxy-6d1afc59-3ec5-4518-a68b-f8ab041976c5[261355]: 10.100.0.3:44776 [02/Oct/2025:12:27:38.181] listener listener/metadata 0/0/0/2575/2575 200 2534 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Oct  2 08:27:40 np0005466031 nova_compute[235803]: 2025-10-02 12:27:40.843 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:40 np0005466031 nova_compute[235803]: 2025-10-02 12:27:40.845 2 DEBUG nova.compute.manager [req-9fb1a51f-e07e-4fbf-ae44-169996d4053d req-8c334de4-223f-4684-a74a-33edc64fe987 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Received event network-changed-7271c02a-a19f-43d0-8351-bfa41c3af3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:40 np0005466031 nova_compute[235803]: 2025-10-02 12:27:40.845 2 DEBUG nova.compute.manager [req-9fb1a51f-e07e-4fbf-ae44-169996d4053d req-8c334de4-223f-4684-a74a-33edc64fe987 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Refreshing instance network info cache due to event network-changed-7271c02a-a19f-43d0-8351-bfa41c3af3e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:27:40 np0005466031 nova_compute[235803]: 2025-10-02 12:27:40.845 2 DEBUG oslo_concurrency.lockutils [req-9fb1a51f-e07e-4fbf-ae44-169996d4053d req-8c334de4-223f-4684-a74a-33edc64fe987 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-061e91f3-8228-4afb-9420-d0764c3dd7ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:27:41 np0005466031 nova_compute[235803]: 2025-10-02 12:27:41.413 2 DEBUG nova.network.neutron [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Updating instance_info_cache with network_info: [{"id": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "address": "fa:16:3e:0d:5b:05", "network": {"id": "9873bab7-ad9a-4e38-adba-35d281231cb7", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1423243976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7572f2170094fb7a5d6e212abf9235d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7271c02a-a1", "ovs_interfaceid": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:27:41 np0005466031 nova_compute[235803]: 2025-10-02 12:27:41.482 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Releasing lock "refresh_cache-061e91f3-8228-4afb-9420-d0764c3dd7ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:27:41 np0005466031 nova_compute[235803]: 2025-10-02 12:27:41.483 2 DEBUG nova.compute.manager [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Instance network_info: |[{"id": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "address": "fa:16:3e:0d:5b:05", "network": {"id": "9873bab7-ad9a-4e38-adba-35d281231cb7", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1423243976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7572f2170094fb7a5d6e212abf9235d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7271c02a-a1", "ovs_interfaceid": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:27:41 np0005466031 nova_compute[235803]: 2025-10-02 12:27:41.485 2 DEBUG oslo_concurrency.lockutils [req-9fb1a51f-e07e-4fbf-ae44-169996d4053d req-8c334de4-223f-4684-a74a-33edc64fe987 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-061e91f3-8228-4afb-9420-d0764c3dd7ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:27:41 np0005466031 nova_compute[235803]: 2025-10-02 12:27:41.485 2 DEBUG nova.network.neutron [req-9fb1a51f-e07e-4fbf-ae44-169996d4053d req-8c334de4-223f-4684-a74a-33edc64fe987 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Refreshing network info cache for port 7271c02a-a19f-43d0-8351-bfa41c3af3e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:27:41 np0005466031 nova_compute[235803]: 2025-10-02 12:27:41.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:41.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:41 np0005466031 nova_compute[235803]: 2025-10-02 12:27:41.914 2 DEBUG oslo_concurrency.lockutils [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:41 np0005466031 nova_compute[235803]: 2025-10-02 12:27:41.915 2 DEBUG oslo_concurrency.lockutils [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:41 np0005466031 nova_compute[235803]: 2025-10-02 12:27:41.915 2 DEBUG oslo_concurrency.lockutils [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:41 np0005466031 nova_compute[235803]: 2025-10-02 12:27:41.916 2 DEBUG oslo_concurrency.lockutils [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:41 np0005466031 nova_compute[235803]: 2025-10-02 12:27:41.916 2 DEBUG oslo_concurrency.lockutils [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:41 np0005466031 nova_compute[235803]: 2025-10-02 12:27:41.919 2 INFO nova.compute.manager [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Terminating instance#033[00m
Oct  2 08:27:41 np0005466031 nova_compute[235803]: 2025-10-02 12:27:41.922 2 DEBUG nova.compute.manager [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:27:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:42.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:42 np0005466031 nova_compute[235803]: 2025-10-02 12:27:42.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:42 np0005466031 nova_compute[235803]: 2025-10-02 12:27:42.712 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:42 np0005466031 nova_compute[235803]: 2025-10-02 12:27:42.713 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:42 np0005466031 nova_compute[235803]: 2025-10-02 12:27:42.713 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:42 np0005466031 nova_compute[235803]: 2025-10-02 12:27:42.713 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:27:42 np0005466031 nova_compute[235803]: 2025-10-02 12:27:42.714 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:27:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2182550942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:27:43 np0005466031 nova_compute[235803]: 2025-10-02 12:27:43.332 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:43 np0005466031 nova_compute[235803]: 2025-10-02 12:27:43.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:43.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:43 np0005466031 nova_compute[235803]: 2025-10-02 12:27:43.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:27:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:44.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:27:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:45 np0005466031 kernel: tapc31a45fc-37 (unregistering): left promiscuous mode
Oct  2 08:27:45 np0005466031 NetworkManager[44907]: <info>  [1759408065.2419] device (tapc31a45fc-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00180|binding|INFO|Releasing lport c31a45fc-37b9-4809-89b1-839d4e85765d from this chassis (sb_readonly=0)
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00181|binding|INFO|Setting lport c31a45fc-37b9-4809-89b1-839d4e85765d down in Southbound
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00182|binding|INFO|Removing iface tapc31a45fc-37 ovn-installed in OVS
Oct  2 08:27:45 np0005466031 kernel: tapdd45e845-24 (unregistering): left promiscuous mode
Oct  2 08:27:45 np0005466031 NetworkManager[44907]: <info>  [1759408065.3014] device (tapdd45e845-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00183|binding|INFO|Releasing lport dd45e845-2479-49a6-a571-33984e911f3c from this chassis (sb_readonly=1)
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00184|binding|INFO|Removing iface tapdd45e845-24 ovn-installed in OVS
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00185|if_status|INFO|Not setting lport dd45e845-2479-49a6-a571-33984e911f3c down as sb is readonly
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.327 2 DEBUG nova.objects.instance [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lazy-loading 'migration_context' on Instance uuid 061e91f3-8228-4afb-9420-d0764c3dd7ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 kernel: tap705ea63d-4c (unregistering): left promiscuous mode
Oct  2 08:27:45 np0005466031 NetworkManager[44907]: <info>  [1759408065.3460] device (tap705ea63d-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00186|binding|INFO|Releasing lport 705ea63d-4c9b-450a-ac81-c5bf6ef0c274 from this chassis (sb_readonly=1)
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00187|binding|INFO|Removing iface tap705ea63d-4c ovn-installed in OVS
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 kernel: tap006d1393-a1 (unregistering): left promiscuous mode
Oct  2 08:27:45 np0005466031 NetworkManager[44907]: <info>  [1759408065.3894] device (tap006d1393-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:45.408 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:a5:91 10.100.0.3'], port_security=['fa:16:3e:94:a5:91 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1afc59-3ec5-4518-a68b-f8ab041976c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63f924af-92b6-418b-91c6-d81a9ac979c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a53c3168-1ef0-4852-abb6-568d97f42365, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=c31a45fc-37b9-4809-89b1-839d4e85765d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00188|binding|INFO|Setting lport dd45e845-2479-49a6-a571-33984e911f3c down in Southbound
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00189|binding|INFO|Setting lport 705ea63d-4c9b-450a-ac81-c5bf6ef0c274 down in Southbound
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00190|binding|INFO|Releasing lport 006d1393-a12a-44ea-9d1c-ba017fde9058 from this chassis (sb_readonly=1)
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00191|binding|INFO|Removing iface tap006d1393-a1 ovn-installed in OVS
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:45.410 141898 INFO neutron.agent.ovn.metadata.agent [-] Port c31a45fc-37b9-4809-89b1-839d4e85765d in datapath 6d1afc59-3ec5-4518-a68b-f8ab041976c5 unbound from our chassis#033[00m
Oct  2 08:27:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:45.412 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d1afc59-3ec5-4518-a68b-f8ab041976c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:27:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:45.414 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[151a5d40-0b32-4f9c-8900-e17815b51e56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:45.415 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5 namespace which is not needed anymore#033[00m
Oct  2 08:27:45 np0005466031 kernel: tap22f1362c-d6 (unregistering): left promiscuous mode
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.428 2 DEBUG nova.network.neutron [req-9fb1a51f-e07e-4fbf-ae44-169996d4053d req-8c334de4-223f-4684-a74a-33edc64fe987 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Updated VIF entry in instance network info cache for port 7271c02a-a19f-43d0-8351-bfa41c3af3e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.429 2 DEBUG nova.network.neutron [req-9fb1a51f-e07e-4fbf-ae44-169996d4053d req-8c334de4-223f-4684-a74a-33edc64fe987 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Updating instance_info_cache with network_info: [{"id": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "address": "fa:16:3e:0d:5b:05", "network": {"id": "9873bab7-ad9a-4e38-adba-35d281231cb7", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1423243976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7572f2170094fb7a5d6e212abf9235d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7271c02a-a1", "ovs_interfaceid": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:27:45 np0005466031 NetworkManager[44907]: <info>  [1759408065.4301] device (tap22f1362c-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 kernel: tapd1b1a282-3a (unregistering): left promiscuous mode
Oct  2 08:27:45 np0005466031 NetworkManager[44907]: <info>  [1759408065.4519] device (tapd1b1a282-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.449 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.449 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Ensure instance console log exists: /var/lib/nova/instances/061e91f3-8228-4afb-9420-d0764c3dd7ee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.450 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.450 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.450 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.453 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Start _get_guest_xml network_info=[{"id": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "address": "fa:16:3e:0d:5b:05", "network": {"id": "9873bab7-ad9a-4e38-adba-35d281231cb7", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1423243976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7572f2170094fb7a5d6e212abf9235d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7271c02a-a1", "ovs_interfaceid": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00192|binding|INFO|Releasing lport 22f1362c-d698-4f08-b8a3-4a4f609ef2b5 from this chassis (sb_readonly=1)
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00193|binding|INFO|Removing iface tap22f1362c-d6 ovn-installed in OVS
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.462 2 WARNING nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.468 2 DEBUG nova.virt.libvirt.host [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.469 2 DEBUG nova.virt.libvirt.host [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.473 2 DEBUG nova.virt.libvirt.host [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.474 2 DEBUG nova.virt.libvirt.host [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.475 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.476 2 DEBUG nova.virt.hardware [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.476 2 DEBUG nova.virt.hardware [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.476 2 DEBUG nova.virt.hardware [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.477 2 DEBUG nova.virt.hardware [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.477 2 DEBUG nova.virt.hardware [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.477 2 DEBUG nova.virt.hardware [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.478 2 DEBUG nova.virt.hardware [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.478 2 DEBUG nova.virt.hardware [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.478 2 DEBUG nova.virt.hardware [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.478 2 DEBUG nova.virt.hardware [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.479 2 DEBUG nova.virt.hardware [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.481 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00194|binding|INFO|Releasing lport d1b1a282-3a38-454d-bc99-885b75bac9cc from this chassis (sb_readonly=1)
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00195|binding|INFO|Removing iface tapd1b1a282-3a ovn-installed in OVS
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00196|binding|INFO|Setting lport d1b1a282-3a38-454d-bc99-885b75bac9cc down in Southbound
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00197|binding|INFO|Setting lport 006d1393-a12a-44ea-9d1c-ba017fde9058 down in Southbound
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00198|binding|INFO|Setting lport 22f1362c-d698-4f08-b8a3-4a4f609ef2b5 down in Southbound
Oct  2 08:27:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:45.501 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:c5:38 10.1.1.81'], port_security=['fa:16:3e:7a:c5:38 10.1.1.81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-511179794', 'neutron:cidrs': '10.1.1.81/24', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-511179794', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '69e120d5-0955-4ba9-b571-b55e164419d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a508ac-640c-4078-bef8-1202d246fb47, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=705ea63d-4c9b-450a-ac81-c5bf6ef0c274) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:45 np0005466031 kernel: tapa619a50d-db (unregistering): left promiscuous mode
Oct  2 08:27:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:45.503 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:4a:42 10.1.1.166'], port_security=['fa:16:3e:dc:4a:42 10.1.1.166'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-942090899', 'neutron:cidrs': '10.1.1.166/24', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-942090899', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '69e120d5-0955-4ba9-b571-b55e164419d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a508ac-640c-4078-bef8-1202d246fb47, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=dd45e845-2479-49a6-a571-33984e911f3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 NetworkManager[44907]: <info>  [1759408065.5073] device (tapa619a50d-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00199|binding|INFO|Releasing lport a619a50d-dbe2-4780-a273-9b1db89a98f7 from this chassis (sb_readonly=1)
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00200|binding|INFO|Removing iface tapa619a50d-db ovn-installed in OVS
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.558 2 DEBUG oslo_concurrency.lockutils [req-9fb1a51f-e07e-4fbf-ae44-169996d4053d req-8c334de4-223f-4684-a74a-33edc64fe987 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-061e91f3-8228-4afb-9420-d0764c3dd7ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005466031 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000039.scope: Deactivated successfully.
Oct  2 08:27:45 np0005466031 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000039.scope: Consumed 16.887s CPU time.
Oct  2 08:27:45 np0005466031 systemd-machined[192227]: Machine qemu-22-instance-00000039 terminated.
Oct  2 08:27:45 np0005466031 neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5[261349]: [NOTICE]   (261353) : haproxy version is 2.8.14-c23fe91
Oct  2 08:27:45 np0005466031 neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5[261349]: [NOTICE]   (261353) : path to executable is /usr/sbin/haproxy
Oct  2 08:27:45 np0005466031 neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5[261349]: [WARNING]  (261353) : Exiting Master process...
Oct  2 08:27:45 np0005466031 neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5[261349]: [WARNING]  (261353) : Exiting Master process...
Oct  2 08:27:45 np0005466031 neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5[261349]: [ALERT]    (261353) : Current worker (261355) exited with code 143 (Terminated)
Oct  2 08:27:45 np0005466031 neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5[261349]: [WARNING]  (261353) : All workers exited. Exiting... (0)
Oct  2 08:27:45 np0005466031 systemd[1]: libpod-5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109.scope: Deactivated successfully.
Oct  2 08:27:45 np0005466031 podman[262117]: 2025-10-02 12:27:45.637788678 +0000 UTC m=+0.116971872 container died 5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:27:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:45Z|00201|binding|INFO|Setting lport a619a50d-dbe2-4780-a273-9b1db89a98f7 down in Southbound
Oct  2 08:27:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:45.702 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:bc:17 10.1.1.82'], port_security=['fa:16:3e:e6:bc:17 10.1.1.82'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.82/24', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63f924af-92b6-418b-91c6-d81a9ac979c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a508ac-640c-4078-bef8-1202d246fb47, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=006d1393-a12a-44ea-9d1c-ba017fde9058) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:45.704 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:7d:47 10.2.2.100'], port_security=['fa:16:3e:d7:7d:47 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63f924af-92b6-418b-91c6-d81a9ac979c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbdab1e7-04c3-4962-ba1f-23312665b37c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=d1b1a282-3a38-454d-bc99-885b75bac9cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:45.705 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:0b:3e 10.1.1.157'], port_security=['fa:16:3e:26:0b:3e 10.1.1.157'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.157/24', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63f924af-92b6-418b-91c6-d81a9ac979c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a508ac-640c-4078-bef8-1202d246fb47, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=22f1362c-d698-4f08-b8a3-4a4f609ef2b5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:45.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:45 np0005466031 NetworkManager[44907]: <info>  [1759408065.7430] manager: (tapc31a45fc-37): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Oct  2 08:27:45 np0005466031 NetworkManager[44907]: <info>  [1759408065.7558] manager: (tapdd45e845-24): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Oct  2 08:27:45 np0005466031 NetworkManager[44907]: <info>  [1759408065.7683] manager: (tap705ea63d-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Oct  2 08:27:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:45.785 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:c6:b6 10.2.2.200'], port_security=['fa:16:3e:fa:c6:b6 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '776370c1-1213-4676-b85e-ce1c0491afc6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d96bae071ef4595bd93c956dd20796c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63f924af-92b6-418b-91c6-d81a9ac979c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbdab1e7-04c3-4962-ba1f-23312665b37c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=a619a50d-dbe2-4780-a273-9b1db89a98f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:45 np0005466031 NetworkManager[44907]: <info>  [1759408065.7877] manager: (tap22f1362c-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Oct  2 08:27:45 np0005466031 NetworkManager[44907]: <info>  [1759408065.8058] manager: (tapa619a50d-db): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.829 2 INFO nova.virt.libvirt.driver [-] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Instance destroyed successfully.#033[00m
Oct  2 08:27:45 np0005466031 nova_compute[235803]: 2025-10-02 12:27:45.830 2 DEBUG nova.objects.instance [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lazy-loading 'resources' on Instance uuid 776370c1-1213-4676-b85e-ce1c0491afc6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:27:45 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109-userdata-shm.mount: Deactivated successfully.
Oct  2 08:27:45 np0005466031 systemd[1]: var-lib-containers-storage-overlay-5ff5417e5d553644566196df5420c1804747ad6fd417687276c37914c0258b09-merged.mount: Deactivated successfully.
Oct  2 08:27:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:27:45 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/36665426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.058 2 DEBUG nova.virt.libvirt.vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:26:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:26:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.059 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.061 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:94:a5:91,bridge_name='br-int',has_traffic_filtering=True,id=c31a45fc-37b9-4809-89b1-839d4e85765d,network=Network(6d1afc59-3ec5-4518-a68b-f8ab041976c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc31a45fc-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.062 2 DEBUG os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:a5:91,bridge_name='br-int',has_traffic_filtering=True,id=c31a45fc-37b9-4809-89b1-839d4e85765d,network=Network(6d1afc59-3ec5-4518-a68b-f8ab041976c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc31a45fc-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.066 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc31a45fc-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.087 2 INFO os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:a5:91,bridge_name='br-int',has_traffic_filtering=True,id=c31a45fc-37b9-4809-89b1-839d4e85765d,network=Network(6d1afc59-3ec5-4518-a68b-f8ab041976c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc31a45fc-37')#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.089 2 DEBUG nova.virt.libvirt.vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:26:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='
usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:26:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.089 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.090 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:4a:42,bridge_name='br-int',has_traffic_filtering=True,id=dd45e845-2479-49a6-a571-33984e911f3c,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdd45e845-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.091 2 DEBUG os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:4a:42,bridge_name='br-int',has_traffic_filtering=True,id=dd45e845-2479-49a6-a571-33984e911f3c,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdd45e845-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd45e845-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.115 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:46 np0005466031 podman[262117]: 2025-10-02 12:27:46.121228891 +0000 UTC m=+0.600412085 container cleanup 5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:27:46 np0005466031 systemd[1]: libpod-conmon-5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109.scope: Deactivated successfully.
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.151 2 DEBUG nova.storage.rbd_utils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] rbd image 061e91f3-8228-4afb-9420-d0764c3dd7ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.156 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.191 2 INFO os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:4a:42,bridge_name='br-int',has_traffic_filtering=True,id=dd45e845-2479-49a6-a571-33984e911f3c,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdd45e845-24')#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.193 2 DEBUG nova.virt.libvirt.vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:26:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='
usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:26:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.194 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.195 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c5:38,bridge_name='br-int',has_traffic_filtering=True,id=705ea63d-4c9b-450a-ac81-c5bf6ef0c274,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap705ea63d-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.196 2 DEBUG os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c5:38,bridge_name='br-int',has_traffic_filtering=True,id=705ea63d-4c9b-450a-ac81-c5bf6ef0c274,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap705ea63d-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap705ea63d-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.217 2 INFO os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c5:38,bridge_name='br-int',has_traffic_filtering=True,id=705ea63d-4c9b-450a-ac81-c5bf6ef0c274,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap705ea63d-4c')#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.218 2 DEBUG nova.virt.libvirt.vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:26:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='
usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:26:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.219 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.220 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:bc:17,bridge_name='br-int',has_traffic_filtering=True,id=006d1393-a12a-44ea-9d1c-ba017fde9058,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap006d1393-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.220 2 DEBUG os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:bc:17,bridge_name='br-int',has_traffic_filtering=True,id=006d1393-a12a-44ea-9d1c-ba017fde9058,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap006d1393-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.222 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap006d1393-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.233 2 INFO os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:bc:17,bridge_name='br-int',has_traffic_filtering=True,id=006d1393-a12a-44ea-9d1c-ba017fde9058,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap006d1393-a1')#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.234 2 DEBUG nova.virt.libvirt.vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:26:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='
usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:26:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "address": "fa:16:3e:26:0b:3e", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f1362c-d6", "ovs_interfaceid": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.235 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "address": "fa:16:3e:26:0b:3e", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f1362c-d6", "ovs_interfaceid": "22f1362c-d698-4f08-b8a3-4a4f609ef2b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.235 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:0b:3e,bridge_name='br-int',has_traffic_filtering=True,id=22f1362c-d698-4f08-b8a3-4a4f609ef2b5,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f1362c-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.236 2 DEBUG os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:0b:3e,bridge_name='br-int',has_traffic_filtering=True,id=22f1362c-d698-4f08-b8a3-4a4f609ef2b5,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f1362c-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.237 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22f1362c-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.245 2 INFO os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:0b:3e,bridge_name='br-int',has_traffic_filtering=True,id=22f1362c-d698-4f08-b8a3-4a4f609ef2b5,network=Network(fdb7aec6-8fa5-4966-aee1-bf0ccd52182b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f1362c-d6')#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.246 2 DEBUG nova.virt.libvirt.vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:26:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='
usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:26:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "address": "fa:16:3e:d7:7d:47", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1b1a282-3a", "ovs_interfaceid": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.246 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "address": "fa:16:3e:d7:7d:47", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1b1a282-3a", "ovs_interfaceid": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.247 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:7d:47,bridge_name='br-int',has_traffic_filtering=True,id=d1b1a282-3a38-454d-bc99-885b75bac9cc,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1b1a282-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.247 2 DEBUG os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:7d:47,bridge_name='br-int',has_traffic_filtering=True,id=d1b1a282-3a38-454d-bc99-885b75bac9cc,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1b1a282-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.249 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1b1a282-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.255 2 INFO os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:7d:47,bridge_name='br-int',has_traffic_filtering=True,id=d1b1a282-3a38-454d-bc99-885b75bac9cc,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1b1a282-3a')#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.256 2 DEBUG nova.virt.libvirt.vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-667401928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-667401928',id=57,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLfu+ivKaL3C217tDEC3oR9VNbCKGt1J1SAT2756reYIrhFOeMBWz9dOy7VYEhJ19yDVRZFDFJ6ISCKWa3pM/daoytchvEsNHW/MM/OA/mJ5M59AfIcNK8jhY7AaM9U5cw==',key_name='tempest-keypair-1461680105',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:26:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d96bae071ef4595bd93c956dd20796c',ramdisk_id='',reservation_id='r-rfaqhvpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='
usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1211830922',owner_user_name='tempest-TaggedBootDevicesTest_v242-1211830922-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:26:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='94e0e2f26a1648368032ab7e6732655c',uuid=776370c1-1213-4676-b85e-ce1c0491afc6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "address": "fa:16:3e:fa:c6:b6", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa619a50d-db", "ovs_interfaceid": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.256 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converting VIF {"id": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "address": "fa:16:3e:fa:c6:b6", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa619a50d-db", "ovs_interfaceid": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.257 2 DEBUG nova.network.os_vif_util [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=a619a50d-dbe2-4780-a273-9b1db89a98f7,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa619a50d-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.257 2 DEBUG os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=a619a50d-dbe2-4780-a273-9b1db89a98f7,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa619a50d-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa619a50d-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.265 2 INFO os_vif [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:c6:b6,bridge_name='br-int',has_traffic_filtering=True,id=a619a50d-dbe2-4780-a273-9b1db89a98f7,network=Network(b6de4fd3-3bc2-47d6-8842-1ef0515c43e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa619a50d-db')#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.301 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.302 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.302 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.302 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:27:46 np0005466031 podman[262282]: 2025-10-02 12:27:46.337659689 +0000 UTC m=+0.190331767 container remove 5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:27:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:46.343 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[01e64a12-3398-4fdc-9596-65a28ec0678c]: (4, ('Thu Oct  2 12:27:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5 (5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109)\n5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109\nThu Oct  2 12:27:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5 (5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109)\n5075cdb04ad49c3cb6d7e3bc2b1573420e1c5190fb558aed5aff6f4bc93e6109\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:46.344 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f6111c00-f14c-47d6-9882-c3e2d9d71c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:46.345 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1afc59-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 kernel: tap6d1afc59-30: left promiscuous mode
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:46.362 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[aeefd329-21de-4ac1-b630-00d7e02be49f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:46.383 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b7acffb7-cfd9-42ac-8064-7e89ef424fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:46.384 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1699940a-5eb6-4281-9709-3c138ea0de69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:46.400 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c07d01-3a40-40b8-8d65-d375285ccc50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586618, 'reachable_time': 41655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262357, 'error': None, 'target': 'ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:46 np0005466031 systemd[1]: run-netns-ovnmeta\x2d6d1afc59\x2d3ec5\x2d4518\x2da68b\x2df8ab041976c5.mount: Deactivated successfully.
Oct  2 08:27:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:46.402 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d1afc59-3ec5-4518-a68b-f8ab041976c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:27:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:46.402 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[2e416e91-8042-45b8-8639-37e969e304c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:46.402 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 705ea63d-4c9b-450a-ac81-c5bf6ef0c274 in datapath fdb7aec6-8fa5-4966-aee1-bf0ccd52182b unbound from our chassis#033[00m
Oct  2 08:27:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:46.404 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fdb7aec6-8fa5-4966-aee1-bf0ccd52182b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:27:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:46.405 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[035e17ef-3da8-49bb-933b-1cd877984c2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:46.405 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b namespace which is not needed anymore#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.506 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.507 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4541MB free_disk=20.880069732666016GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.507 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.507 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:46 np0005466031 neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b[261424]: [NOTICE]   (261428) : haproxy version is 2.8.14-c23fe91
Oct  2 08:27:46 np0005466031 neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b[261424]: [NOTICE]   (261428) : path to executable is /usr/sbin/haproxy
Oct  2 08:27:46 np0005466031 neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b[261424]: [WARNING]  (261428) : Exiting Master process...
Oct  2 08:27:46 np0005466031 neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b[261424]: [ALERT]    (261428) : Current worker (261430) exited with code 143 (Terminated)
Oct  2 08:27:46 np0005466031 neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b[261424]: [WARNING]  (261428) : All workers exited. Exiting... (0)
Oct  2 08:27:46 np0005466031 podman[262375]: 2025-10-02 12:27:46.557083743 +0000 UTC m=+0.072591553 container stop c6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:27:46 np0005466031 systemd[1]: libpod-c6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537.scope: Deactivated successfully.
Oct  2 08:27:46 np0005466031 podman[262375]: 2025-10-02 12:27:46.586678356 +0000 UTC m=+0.102186186 container died c6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:27:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:27:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:27:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:27:46 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1623385835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.631 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.632 2 DEBUG nova.virt.libvirt.vif [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=61,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrSgPCRzMaXI2iBfGSc2TSHS4ZD2W5NzuZOttkXoqM7HXstn5uSaOt2OGxui+rdtS+XLvMX4iV2n3rrcJ5OzpPvW+RvlFzMZnnpDF1H/t3P+NdILCshxZBm5J4n62rBiw==',key_name='tempest-keypair-226967064',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a7572f2170094fb7a5d6e212abf9235d',ramdisk_id='',reservation_id='r-j1h6xb66',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image
_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-830332672',owner_user_name='tempest-ServersTestFqdnHostnames-830332672-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e24ea6fbe7394bd8b4b06dd246587041',uuid=061e91f3-8228-4afb-9420-d0764c3dd7ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "address": "fa:16:3e:0d:5b:05", "network": {"id": "9873bab7-ad9a-4e38-adba-35d281231cb7", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1423243976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7572f2170094fb7a5d6e212abf9235d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7271c02a-a1", "ovs_interfaceid": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.633 2 DEBUG nova.network.os_vif_util [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Converting VIF {"id": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "address": "fa:16:3e:0d:5b:05", "network": {"id": "9873bab7-ad9a-4e38-adba-35d281231cb7", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1423243976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7572f2170094fb7a5d6e212abf9235d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7271c02a-a1", "ovs_interfaceid": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.633 2 DEBUG nova.network.os_vif_util [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:5b:05,bridge_name='br-int',has_traffic_filtering=True,id=7271c02a-a19f-43d0-8351-bfa41c3af3e4,network=Network(9873bab7-ad9a-4e38-adba-35d281231cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7271c02a-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.634 2 DEBUG nova.objects.instance [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lazy-loading 'pci_devices' on Instance uuid 061e91f3-8228-4afb-9420-d0764c3dd7ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:27:46 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537-userdata-shm.mount: Deactivated successfully.
Oct  2 08:27:46 np0005466031 systemd[1]: var-lib-containers-storage-overlay-c593c5188fca2c53c727a8497835ca525b617d4ec51d82875bc1e4acc0ff54fe-merged.mount: Deactivated successfully.
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.794 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 776370c1-1213-4676-b85e-ce1c0491afc6 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.795 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 061e91f3-8228-4afb-9420-d0764c3dd7ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.795 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.795 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.808 2 DEBUG nova.compute.manager [req-5679f427-6433-4b8b-96ac-90847a7d9c2b req-c63becd2-ad77-46c7-9cb6-7a96ab357519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.808 2 DEBUG oslo_concurrency.lockutils [req-5679f427-6433-4b8b-96ac-90847a7d9c2b req-c63becd2-ad77-46c7-9cb6-7a96ab357519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.808 2 DEBUG oslo_concurrency.lockutils [req-5679f427-6433-4b8b-96ac-90847a7d9c2b req-c63becd2-ad77-46c7-9cb6-7a96ab357519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.808 2 DEBUG oslo_concurrency.lockutils [req-5679f427-6433-4b8b-96ac-90847a7d9c2b req-c63becd2-ad77-46c7-9cb6-7a96ab357519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.809 2 DEBUG nova.compute.manager [req-5679f427-6433-4b8b-96ac-90847a7d9c2b req-c63becd2-ad77-46c7-9cb6-7a96ab357519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-unplugged-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.809 2 DEBUG nova.compute.manager [req-5679f427-6433-4b8b-96ac-90847a7d9c2b req-c63becd2-ad77-46c7-9cb6-7a96ab357519 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.854 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  <uuid>061e91f3-8228-4afb-9420-d0764c3dd7ee</uuid>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  <name>instance-0000003d</name>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <nova:name>guest-instance-1.domain.com</nova:name>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:27:45</nova:creationTime>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <nova:user uuid="e24ea6fbe7394bd8b4b06dd246587041">tempest-ServersTestFqdnHostnames-830332672-project-member</nova:user>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <nova:project uuid="a7572f2170094fb7a5d6e212abf9235d">tempest-ServersTestFqdnHostnames-830332672</nova:project>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <nova:port uuid="7271c02a-a19f-43d0-8351-bfa41c3af3e4">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <entry name="serial">061e91f3-8228-4afb-9420-d0764c3dd7ee</entry>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <entry name="uuid">061e91f3-8228-4afb-9420-d0764c3dd7ee</entry>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/061e91f3-8228-4afb-9420-d0764c3dd7ee_disk">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/061e91f3-8228-4afb-9420-d0764c3dd7ee_disk.config">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:0d:5b:05"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <target dev="tap7271c02a-a1"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/061e91f3-8228-4afb-9420-d0764c3dd7ee/console.log" append="off"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:27:46 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:27:46 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:27:46 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:27:46 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.854 2 DEBUG nova.compute.manager [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Preparing to wait for external event network-vif-plugged-7271c02a-a19f-43d0-8351-bfa41c3af3e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.854 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Acquiring lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.855 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.855 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.856 2 DEBUG nova.virt.libvirt.vif [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=61,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrSgPCRzMaXI2iBfGSc2TSHS4ZD2W5NzuZOttkXoqM7HXstn5uSaOt2OGxui+rdtS+XLvMX4iV2n3rrcJ5OzpPvW+RvlFzMZnnpDF1H/t3P+NdILCshxZBm5J4n62rBiw==',key_name='tempest-keypair-226967064',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a7572f2170094fb7a5d6e212abf9235d',ramdisk_id='',reservation_id='r-j1h6xb66',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='vir
tio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-830332672',owner_user_name='tempest-ServersTestFqdnHostnames-830332672-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e24ea6fbe7394bd8b4b06dd246587041',uuid=061e91f3-8228-4afb-9420-d0764c3dd7ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "address": "fa:16:3e:0d:5b:05", "network": {"id": "9873bab7-ad9a-4e38-adba-35d281231cb7", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1423243976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7572f2170094fb7a5d6e212abf9235d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7271c02a-a1", "ovs_interfaceid": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.856 2 DEBUG nova.network.os_vif_util [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Converting VIF {"id": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "address": "fa:16:3e:0d:5b:05", "network": {"id": "9873bab7-ad9a-4e38-adba-35d281231cb7", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1423243976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7572f2170094fb7a5d6e212abf9235d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7271c02a-a1", "ovs_interfaceid": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.857 2 DEBUG nova.network.os_vif_util [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:5b:05,bridge_name='br-int',has_traffic_filtering=True,id=7271c02a-a19f-43d0-8351-bfa41c3af3e4,network=Network(9873bab7-ad9a-4e38-adba-35d281231cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7271c02a-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.857 2 DEBUG os_vif [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:5b:05,bridge_name='br-int',has_traffic_filtering=True,id=7271c02a-a19f-43d0-8351-bfa41c3af3e4,network=Network(9873bab7-ad9a-4e38-adba-35d281231cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7271c02a-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 podman[262375]: 2025-10-02 12:27:46.858157201 +0000 UTC m=+0.373665011 container cleanup c6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.858 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.859 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7271c02a-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.862 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7271c02a-a1, col_values=(('external_ids', {'iface-id': '7271c02a-a19f-43d0-8351-bfa41c3af3e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:5b:05', 'vm-uuid': '061e91f3-8228-4afb-9420-d0764c3dd7ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 NetworkManager[44907]: <info>  [1759408066.8642] manager: (tap7271c02a-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:27:46 np0005466031 systemd[1]: libpod-conmon-c6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537.scope: Deactivated successfully.
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.875 2 INFO os_vif [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:5b:05,bridge_name='br-int',has_traffic_filtering=True,id=7271c02a-a19f-43d0-8351-bfa41c3af3e4,network=Network(9873bab7-ad9a-4e38-adba-35d281231cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7271c02a-a1')#033[00m
Oct  2 08:27:46 np0005466031 nova_compute[235803]: 2025-10-02 12:27:46.946 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:47 np0005466031 podman[262408]: 2025-10-02 12:27:47.106288013 +0000 UTC m=+0.227820447 container remove c6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.121 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6f9c5e-b873-4986-a3e7-fcdd4f06d520]: (4, ('Thu Oct  2 12:27:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b (c6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537)\nc6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537\nThu Oct  2 12:27:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b (c6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537)\nc6c9daeea5dd3fa5b8a79066460cc89398e1cf5644fb66701991f09731642537\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.123 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[025b4ca9-98f2-4de6-96de-2dc4a438b946]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.124 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdb7aec6-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:47 np0005466031 kernel: tapfdb7aec6-80: left promiscuous mode
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.146 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7bdb0898-bc08-480b-b222-a5ab8b01dde5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.180 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[affb0f36-b015-4da5-b17b-c164dcedb285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.181 2 DEBUG nova.compute.manager [req-afed32d3-594f-4964-a70f-a93f268c55f1 req-f373f8d7-fbb2-445f-a3a0-88b0acfd3b3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-dd45e845-2479-49a6-a571-33984e911f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.181 2 DEBUG oslo_concurrency.lockutils [req-afed32d3-594f-4964-a70f-a93f268c55f1 req-f373f8d7-fbb2-445f-a3a0-88b0acfd3b3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.181 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[be96a85c-8326-4e32-9c68-dd02cccbcb81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.181 2 DEBUG oslo_concurrency.lockutils [req-afed32d3-594f-4964-a70f-a93f268c55f1 req-f373f8d7-fbb2-445f-a3a0-88b0acfd3b3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.182 2 DEBUG oslo_concurrency.lockutils [req-afed32d3-594f-4964-a70f-a93f268c55f1 req-f373f8d7-fbb2-445f-a3a0-88b0acfd3b3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.182 2 DEBUG nova.compute.manager [req-afed32d3-594f-4964-a70f-a93f268c55f1 req-f373f8d7-fbb2-445f-a3a0-88b0acfd3b3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-unplugged-dd45e845-2479-49a6-a571-33984e911f3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.182 2 DEBUG nova.compute.manager [req-afed32d3-594f-4964-a70f-a93f268c55f1 req-f373f8d7-fbb2-445f-a3a0-88b0acfd3b3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-dd45e845-2479-49a6-a571-33984e911f3c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.194 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7db59fd6-6e05-4393-baaf-300c8bb5115e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586710, 'reachable_time': 22263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262447, 'error': None, 'target': 'ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 systemd[1]: run-netns-ovnmeta\x2dfdb7aec6\x2d8fa5\x2d4966\x2daee1\x2dbf0ccd52182b.mount: Deactivated successfully.
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.196 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fdb7aec6-8fa5-4966-aee1-bf0ccd52182b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.196 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[28cf6114-56ff-4a59-82cc-14be4a6bcf66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.197 141898 INFO neutron.agent.ovn.metadata.agent [-] Port dd45e845-2479-49a6-a571-33984e911f3c in datapath fdb7aec6-8fa5-4966-aee1-bf0ccd52182b unbound from our chassis#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.199 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fdb7aec6-8fa5-4966-aee1-bf0ccd52182b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.199 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ec453a50-601b-41d9-aecd-44986d116ca8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.200 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 006d1393-a12a-44ea-9d1c-ba017fde9058 in datapath fdb7aec6-8fa5-4966-aee1-bf0ccd52182b unbound from our chassis#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.202 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fdb7aec6-8fa5-4966-aee1-bf0ccd52182b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.202 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1db66e-ede1-4126-b8af-7f13d5fbce04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.202 141898 INFO neutron.agent.ovn.metadata.agent [-] Port d1b1a282-3a38-454d-bc99-885b75bac9cc in datapath b6de4fd3-3bc2-47d6-8842-1ef0515c43e0 unbound from our chassis#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.204 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6de4fd3-3bc2-47d6-8842-1ef0515c43e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.205 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae51454-a472-466a-a4af-bf5f6e1df655]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.205 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0 namespace which is not needed anymore#033[00m
Oct  2 08:27:47 np0005466031 neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0[261505]: [NOTICE]   (261509) : haproxy version is 2.8.14-c23fe91
Oct  2 08:27:47 np0005466031 neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0[261505]: [NOTICE]   (261509) : path to executable is /usr/sbin/haproxy
Oct  2 08:27:47 np0005466031 neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0[261505]: [WARNING]  (261509) : Exiting Master process...
Oct  2 08:27:47 np0005466031 neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0[261505]: [WARNING]  (261509) : Exiting Master process...
Oct  2 08:27:47 np0005466031 neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0[261505]: [ALERT]    (261509) : Current worker (261511) exited with code 143 (Terminated)
Oct  2 08:27:47 np0005466031 neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0[261505]: [WARNING]  (261509) : All workers exited. Exiting... (0)
Oct  2 08:27:47 np0005466031 systemd[1]: libpod-e850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6.scope: Deactivated successfully.
Oct  2 08:27:47 np0005466031 conmon[261505]: conmon e850cdb93242bd28bd4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6.scope/container/memory.events
Oct  2 08:27:47 np0005466031 podman[262465]: 2025-10-02 12:27:47.346882117 +0000 UTC m=+0.063048808 container died e850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 08:27:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:27:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3220107927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.391 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.397 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.476 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.477 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.477 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] No VIF found with MAC fa:16:3e:0d:5b:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.478 2 INFO nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Using config drive#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.507 2 DEBUG nova.storage.rbd_utils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] rbd image 061e91f3-8228-4afb-9420-d0764c3dd7ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.524 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:27:47 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6-userdata-shm.mount: Deactivated successfully.
Oct  2 08:27:47 np0005466031 systemd[1]: var-lib-containers-storage-overlay-3ca09e7ee1ffb70dba80d53fadd1c1ef28f77c79485791e247559df6a6cc100d-merged.mount: Deactivated successfully.
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.619 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.620 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.629 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:47 np0005466031 podman[262465]: 2025-10-02 12:27:47.640266713 +0000 UTC m=+0.356433444 container cleanup e850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:27:47 np0005466031 systemd[1]: libpod-conmon-e850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6.scope: Deactivated successfully.
Oct  2 08:27:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:47.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:47 np0005466031 podman[262520]: 2025-10-02 12:27:47.850076 +0000 UTC m=+0.177838646 container remove e850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.860 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8cefc2-7006-4b8a-931a-4f8f4ed6321a]: (4, ('Thu Oct  2 12:27:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0 (e850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6)\ne850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6\nThu Oct  2 12:27:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0 (e850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6)\ne850cdb93242bd28bd4b3e5dbaf8a4ca16637b6dc9b900243b42150122525ba6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.863 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d20be5-b37e-4773-8b9f-5381dd0837a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.864 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6de4fd3-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:47 np0005466031 kernel: tapb6de4fd3-30: left promiscuous mode
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.892 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b2787dca-2d9b-451a-b09a-73b75ee2752c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.915 2 INFO nova.virt.libvirt.driver [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Deleting instance files /var/lib/nova/instances/776370c1-1213-4676-b85e-ce1c0491afc6_del#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.917 2 INFO nova.virt.libvirt.driver [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Deletion of /var/lib/nova/instances/776370c1-1213-4676-b85e-ce1c0491afc6_del complete#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.921 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[345455c5-8133-43f3-9ec9-e639759b8eb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.922 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3438913c-0f4d-4e3f-a871-fe86f4cb4584]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 podman[262527]: 2025-10-02 12:27:47.939474907 +0000 UTC m=+0.240575305 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.944 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[edff57c4-1f6b-4e85-aa1f-2ed5bfd020f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586797, 'reachable_time': 30838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262581, 'error': None, 'target': 'ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.946 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b6de4fd3-3bc2-47d6-8842-1ef0515c43e0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.946 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[c13a5ec2-40b5-4fd5-b09f-81af51d16bb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.947 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 22f1362c-d698-4f08-b8a3-4a4f609ef2b5 in datapath fdb7aec6-8fa5-4966-aee1-bf0ccd52182b unbound from our chassis#033[00m
Oct  2 08:27:47 np0005466031 systemd[1]: run-netns-ovnmeta\x2db6de4fd3\x2d3bc2\x2d47d6\x2d8842\x2d1ef0515c43e0.mount: Deactivated successfully.
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.949 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fdb7aec6-8fa5-4966-aee1-bf0ccd52182b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.950 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9effb1-df79-4618-ad5b-cbba6cff796e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.950 141898 INFO neutron.agent.ovn.metadata.agent [-] Port a619a50d-dbe2-4780-a273-9b1db89a98f7 in datapath b6de4fd3-3bc2-47d6-8842-1ef0515c43e0 unbound from our chassis#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.952 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6de4fd3-3bc2-47d6-8842-1ef0515c43e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:27:47 np0005466031 podman[262519]: 2025-10-02 12:27:47.953163411 +0000 UTC m=+0.257219294 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.952 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fd669ac7-9930-4ba0-a2a3-1cd0c1ceedd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:47.953 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.964 2 DEBUG nova.compute.manager [req-c06993fc-1057-4535-b8fb-976daea36888 req-dfa46c50-1924-4b23-a31f-03dfcf7e9ef1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-c31a45fc-37b9-4809-89b1-839d4e85765d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.965 2 DEBUG oslo_concurrency.lockutils [req-c06993fc-1057-4535-b8fb-976daea36888 req-dfa46c50-1924-4b23-a31f-03dfcf7e9ef1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.965 2 DEBUG oslo_concurrency.lockutils [req-c06993fc-1057-4535-b8fb-976daea36888 req-dfa46c50-1924-4b23-a31f-03dfcf7e9ef1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.965 2 DEBUG oslo_concurrency.lockutils [req-c06993fc-1057-4535-b8fb-976daea36888 req-dfa46c50-1924-4b23-a31f-03dfcf7e9ef1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.966 2 DEBUG nova.compute.manager [req-c06993fc-1057-4535-b8fb-976daea36888 req-dfa46c50-1924-4b23-a31f-03dfcf7e9ef1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-unplugged-c31a45fc-37b9-4809-89b1-839d4e85765d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:47 np0005466031 nova_compute[235803]: 2025-10-02 12:27:47.966 2 DEBUG nova.compute.manager [req-c06993fc-1057-4535-b8fb-976daea36888 req-dfa46c50-1924-4b23-a31f-03dfcf7e9ef1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-c31a45fc-37b9-4809-89b1-839d4e85765d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:27:48 np0005466031 nova_compute[235803]: 2025-10-02 12:27:48.009 2 INFO nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Creating config drive at /var/lib/nova/instances/061e91f3-8228-4afb-9420-d0764c3dd7ee/disk.config#033[00m
Oct  2 08:27:48 np0005466031 nova_compute[235803]: 2025-10-02 12:27:48.021 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/061e91f3-8228-4afb-9420-d0764c3dd7ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpalagi67a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:48 np0005466031 nova_compute[235803]: 2025-10-02 12:27:48.173 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/061e91f3-8228-4afb-9420-d0764c3dd7ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpalagi67a" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:48 np0005466031 nova_compute[235803]: 2025-10-02 12:27:48.227 2 DEBUG nova.storage.rbd_utils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] rbd image 061e91f3-8228-4afb-9420-d0764c3dd7ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:48 np0005466031 nova_compute[235803]: 2025-10-02 12:27:48.230 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/061e91f3-8228-4afb-9420-d0764c3dd7ee/disk.config 061e91f3-8228-4afb-9420-d0764c3dd7ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:48 np0005466031 nova_compute[235803]: 2025-10-02 12:27:48.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:48.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:48 np0005466031 nova_compute[235803]: 2025-10-02 12:27:48.987 2 DEBUG oslo_concurrency.processutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/061e91f3-8228-4afb-9420-d0764c3dd7ee/disk.config 061e91f3-8228-4afb-9420-d0764c3dd7ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.757s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:48 np0005466031 nova_compute[235803]: 2025-10-02 12:27:48.988 2 INFO nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Deleting local config drive /var/lib/nova/instances/061e91f3-8228-4afb-9420-d0764c3dd7ee/disk.config because it was imported into RBD.#033[00m
Oct  2 08:27:49 np0005466031 kernel: tap7271c02a-a1: entered promiscuous mode
Oct  2 08:27:49 np0005466031 NetworkManager[44907]: <info>  [1759408069.0634] manager: (tap7271c02a-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:49Z|00202|binding|INFO|Claiming lport 7271c02a-a19f-43d0-8351-bfa41c3af3e4 for this chassis.
Oct  2 08:27:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:49Z|00203|binding|INFO|7271c02a-a19f-43d0-8351-bfa41c3af3e4: Claiming fa:16:3e:0d:5b:05 10.100.0.8
Oct  2 08:27:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:49Z|00204|binding|INFO|Setting lport 7271c02a-a19f-43d0-8351-bfa41c3af3e4 ovn-installed in OVS
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:49 np0005466031 systemd-machined[192227]: New machine qemu-23-instance-0000003d.
Oct  2 08:27:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:49Z|00205|binding|INFO|Setting lport 7271c02a-a19f-43d0-8351-bfa41c3af3e4 up in Southbound
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.114 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:5b:05 10.100.0.8'], port_security=['fa:16:3e:0d:5b:05 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '061e91f3-8228-4afb-9420-d0764c3dd7ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9873bab7-ad9a-4e38-adba-35d281231cb7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a7572f2170094fb7a5d6e212abf9235d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4c1c641d-e9c5-45aa-9e44-637091ff36cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5a9f796-753c-4987-acc5-3f078b337893, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7271c02a-a19f-43d0-8351-bfa41c3af3e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.115 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7271c02a-a19f-43d0-8351-bfa41c3af3e4 in datapath 9873bab7-ad9a-4e38-adba-35d281231cb7 bound to our chassis#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.118 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9873bab7-ad9a-4e38-adba-35d281231cb7#033[00m
Oct  2 08:27:49 np0005466031 systemd[1]: Started Virtual Machine qemu-23-instance-0000003d.
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.126 2 INFO nova.compute.manager [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Took 7.20 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.127 2 DEBUG oslo.service.loopingcall [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.128 2 DEBUG nova.compute.manager [-] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.129 2 DEBUG nova.network.neutron [-] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:27:49 np0005466031 systemd-udevd[262636]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.131 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9a2899-7ab5-49bb-8722-38abd8143287]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.132 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9873bab7-a1 in ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.134 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9873bab7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.135 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[33c9089c-7f4a-453a-8fcc-05ac8f8f281b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.135 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e55107-8ddf-42a7-a044-efda0238ffba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 NetworkManager[44907]: <info>  [1759408069.1492] device (tap7271c02a-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:27:49 np0005466031 NetworkManager[44907]: <info>  [1759408069.1500] device (tap7271c02a-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.150 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[360e5bd6-0108-4925-acd8-aad0cb8398a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.168 2 DEBUG nova.compute.manager [req-815b2469-bc72-4d46-941c-7dea93ab69ed req-61a980c3-6bea-44f9-8edb-2ae37e3b430a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.169 2 DEBUG oslo_concurrency.lockutils [req-815b2469-bc72-4d46-941c-7dea93ab69ed req-61a980c3-6bea-44f9-8edb-2ae37e3b430a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.169 2 DEBUG oslo_concurrency.lockutils [req-815b2469-bc72-4d46-941c-7dea93ab69ed req-61a980c3-6bea-44f9-8edb-2ae37e3b430a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.169 2 DEBUG oslo_concurrency.lockutils [req-815b2469-bc72-4d46-941c-7dea93ab69ed req-61a980c3-6bea-44f9-8edb-2ae37e3b430a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.169 2 DEBUG nova.compute.manager [req-815b2469-bc72-4d46-941c-7dea93ab69ed req-61a980c3-6bea-44f9-8edb-2ae37e3b430a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-plugged-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.169 2 WARNING nova.compute.manager [req-815b2469-bc72-4d46-941c-7dea93ab69ed req-61a980c3-6bea-44f9-8edb-2ae37e3b430a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.175 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c3057a5f-efb1-4d10-b8be-ad3571d87f15]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.198 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e5d078-12f2-4447-bc78-0736b7394fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.202 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7160af9d-b7fb-4808-9900-3d5cfd200691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 NetworkManager[44907]: <info>  [1759408069.2037] manager: (tap9873bab7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/108)
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.230 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d9ef0c-fd81-468f-9d40-072c5152b8a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.233 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[3a021106-62ce-440b-bbed-0958a67e1fe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 NetworkManager[44907]: <info>  [1759408069.2560] device (tap9873bab7-a0): carrier: link connected
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.261 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[bce578f8-c7f1-4cee-89a5-0ee78068b38c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.278 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[75df1209-806b-40db-8ff4-7d36104523b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9873bab7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:59:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592484, 'reachable_time': 31892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262668, 'error': None, 'target': 'ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.293 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1181c39a-0f6d-4906-9cdc-4a26ee803fd5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:59b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592484, 'tstamp': 592484}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262669, 'error': None, 'target': 'ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.312 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[897e264a-d9c2-4faf-9b02-6bf5583b2d62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9873bab7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:59:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592484, 'reachable_time': 31892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262670, 'error': None, 'target': 'ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.348 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9568787c-9967-4673-968a-b2b7a4dfb7c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.406 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3e0810-dce3-4dc3-9c06-a82d270b65a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.407 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9873bab7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.408 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.408 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9873bab7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:49 np0005466031 NetworkManager[44907]: <info>  [1759408069.4103] manager: (tap9873bab7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:49 np0005466031 kernel: tap9873bab7-a0: entered promiscuous mode
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.414 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9873bab7-a0, col_values=(('external_ids', {'iface-id': 'bf70abfc-9300-43b1-849f-3ce1505e3449'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:27:49Z|00206|binding|INFO|Releasing lport bf70abfc-9300-43b1-849f-3ce1505e3449 from this chassis (sb_readonly=0)
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.416 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9873bab7-ad9a-4e38-adba-35d281231cb7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9873bab7-ad9a-4e38-adba-35d281231cb7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.417 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[45d77732-e9fd-44c6-84a7-20c1122ac7a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.418 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-9873bab7-ad9a-4e38-adba-35d281231cb7
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/9873bab7-ad9a-4e38-adba-35d281231cb7.pid.haproxy
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 9873bab7-ad9a-4e38-adba-35d281231cb7
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:49.419 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7', 'env', 'PROCESS_TAG=haproxy-9873bab7-ad9a-4e38-adba-35d281231cb7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9873bab7-ad9a-4e38-adba-35d281231cb7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.493 2 DEBUG nova.compute.manager [req-185790b5-6841-4115-852b-541460ffed22 req-98dd1dcb-fd58-4d98-9f32-a4c88c23221d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-dd45e845-2479-49a6-a571-33984e911f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.493 2 DEBUG oslo_concurrency.lockutils [req-185790b5-6841-4115-852b-541460ffed22 req-98dd1dcb-fd58-4d98-9f32-a4c88c23221d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.494 2 DEBUG oslo_concurrency.lockutils [req-185790b5-6841-4115-852b-541460ffed22 req-98dd1dcb-fd58-4d98-9f32-a4c88c23221d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.494 2 DEBUG oslo_concurrency.lockutils [req-185790b5-6841-4115-852b-541460ffed22 req-98dd1dcb-fd58-4d98-9f32-a4c88c23221d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.494 2 DEBUG nova.compute.manager [req-185790b5-6841-4115-852b-541460ffed22 req-98dd1dcb-fd58-4d98-9f32-a4c88c23221d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-plugged-dd45e845-2479-49a6-a571-33984e911f3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.495 2 WARNING nova.compute.manager [req-185790b5-6841-4115-852b-541460ffed22 req-98dd1dcb-fd58-4d98-9f32-a4c88c23221d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-dd45e845-2479-49a6-a571-33984e911f3c for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.495 2 DEBUG nova.compute.manager [req-185790b5-6841-4115-852b-541460ffed22 req-98dd1dcb-fd58-4d98-9f32-a4c88c23221d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-006d1393-a12a-44ea-9d1c-ba017fde9058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.495 2 DEBUG oslo_concurrency.lockutils [req-185790b5-6841-4115-852b-541460ffed22 req-98dd1dcb-fd58-4d98-9f32-a4c88c23221d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.495 2 DEBUG oslo_concurrency.lockutils [req-185790b5-6841-4115-852b-541460ffed22 req-98dd1dcb-fd58-4d98-9f32-a4c88c23221d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.496 2 DEBUG oslo_concurrency.lockutils [req-185790b5-6841-4115-852b-541460ffed22 req-98dd1dcb-fd58-4d98-9f32-a4c88c23221d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.496 2 DEBUG nova.compute.manager [req-185790b5-6841-4115-852b-541460ffed22 req-98dd1dcb-fd58-4d98-9f32-a4c88c23221d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-unplugged-006d1393-a12a-44ea-9d1c-ba017fde9058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.496 2 DEBUG nova.compute.manager [req-185790b5-6841-4115-852b-541460ffed22 req-98dd1dcb-fd58-4d98-9f32-a4c88c23221d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-006d1393-a12a-44ea-9d1c-ba017fde9058 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.620 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.621 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.621 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:27:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:49.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:49 np0005466031 podman[262738]: 2025-10-02 12:27:49.723473714 +0000 UTC m=+0.022241462 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:27:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:49 np0005466031 podman[262738]: 2025-10-02 12:27:49.951162166 +0000 UTC m=+0.249929904 container create 091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.980 2 DEBUG nova.compute.manager [req-cd2ff7ff-095a-491a-b1ef-320a0c89a917 req-deda610d-b864-493f-96ff-a78b1c602d84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-705ea63d-4c9b-450a-ac81-c5bf6ef0c274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.982 2 DEBUG oslo_concurrency.lockutils [req-cd2ff7ff-095a-491a-b1ef-320a0c89a917 req-deda610d-b864-493f-96ff-a78b1c602d84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.982 2 DEBUG oslo_concurrency.lockutils [req-cd2ff7ff-095a-491a-b1ef-320a0c89a917 req-deda610d-b864-493f-96ff-a78b1c602d84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.983 2 DEBUG oslo_concurrency.lockutils [req-cd2ff7ff-095a-491a-b1ef-320a0c89a917 req-deda610d-b864-493f-96ff-a78b1c602d84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.983 2 DEBUG nova.compute.manager [req-cd2ff7ff-095a-491a-b1ef-320a0c89a917 req-deda610d-b864-493f-96ff-a78b1c602d84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-unplugged-705ea63d-4c9b-450a-ac81-c5bf6ef0c274 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.984 2 DEBUG nova.compute.manager [req-cd2ff7ff-095a-491a-b1ef-320a0c89a917 req-deda610d-b864-493f-96ff-a78b1c602d84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-705ea63d-4c9b-450a-ac81-c5bf6ef0c274 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.985 2 DEBUG nova.compute.manager [req-cd2ff7ff-095a-491a-b1ef-320a0c89a917 req-deda610d-b864-493f-96ff-a78b1c602d84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-705ea63d-4c9b-450a-ac81-c5bf6ef0c274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.986 2 DEBUG oslo_concurrency.lockutils [req-cd2ff7ff-095a-491a-b1ef-320a0c89a917 req-deda610d-b864-493f-96ff-a78b1c602d84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.987 2 DEBUG oslo_concurrency.lockutils [req-cd2ff7ff-095a-491a-b1ef-320a0c89a917 req-deda610d-b864-493f-96ff-a78b1c602d84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.987 2 DEBUG oslo_concurrency.lockutils [req-cd2ff7ff-095a-491a-b1ef-320a0c89a917 req-deda610d-b864-493f-96ff-a78b1c602d84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.987 2 DEBUG nova.compute.manager [req-cd2ff7ff-095a-491a-b1ef-320a0c89a917 req-deda610d-b864-493f-96ff-a78b1c602d84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-plugged-705ea63d-4c9b-450a-ac81-c5bf6ef0c274 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.988 2 WARNING nova.compute.manager [req-cd2ff7ff-095a-491a-b1ef-320a0c89a917 req-deda610d-b864-493f-96ff-a78b1c602d84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-705ea63d-4c9b-450a-ac81-c5bf6ef0c274 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.992 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.992 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.992 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.993 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.994 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.994 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:49 np0005466031 nova_compute[235803]: 2025-10-02 12:27:49.995 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:50 np0005466031 nova_compute[235803]: 2025-10-02 12:27:50.004 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:50 np0005466031 systemd[1]: Started libpod-conmon-091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde.scope.
Oct  2 08:27:50 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:27:50 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c69bbe3e3c65b31a94729ec9208b910bb28bfc553fa5844838424196d8a283/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:27:50 np0005466031 podman[262738]: 2025-10-02 12:27:50.068001354 +0000 UTC m=+0.366769142 container init 091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  2 08:27:50 np0005466031 podman[262738]: 2025-10-02 12:27:50.077971821 +0000 UTC m=+0.376739559 container start 091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:27:50 np0005466031 neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7[262755]: [NOTICE]   (262759) : New worker (262761) forked
Oct  2 08:27:50 np0005466031 neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7[262755]: [NOTICE]   (262759) : Loading success.
Oct  2 08:27:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:50.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:50 np0005466031 nova_compute[235803]: 2025-10-02 12:27:50.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:50 np0005466031 nova_compute[235803]: 2025-10-02 12:27:50.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:27:50 np0005466031 nova_compute[235803]: 2025-10-02 12:27:50.906 2 DEBUG nova.compute.manager [req-ef35b0c1-124e-4a3b-859d-01a00656493b req-410c48b2-d5c3-4251-94d2-775d497efa08 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-c31a45fc-37b9-4809-89b1-839d4e85765d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:50 np0005466031 nova_compute[235803]: 2025-10-02 12:27:50.907 2 DEBUG oslo_concurrency.lockutils [req-ef35b0c1-124e-4a3b-859d-01a00656493b req-410c48b2-d5c3-4251-94d2-775d497efa08 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:50 np0005466031 nova_compute[235803]: 2025-10-02 12:27:50.908 2 DEBUG oslo_concurrency.lockutils [req-ef35b0c1-124e-4a3b-859d-01a00656493b req-410c48b2-d5c3-4251-94d2-775d497efa08 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:50 np0005466031 nova_compute[235803]: 2025-10-02 12:27:50.908 2 DEBUG oslo_concurrency.lockutils [req-ef35b0c1-124e-4a3b-859d-01a00656493b req-410c48b2-d5c3-4251-94d2-775d497efa08 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:50 np0005466031 nova_compute[235803]: 2025-10-02 12:27:50.909 2 DEBUG nova.compute.manager [req-ef35b0c1-124e-4a3b-859d-01a00656493b req-410c48b2-d5c3-4251-94d2-775d497efa08 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-plugged-c31a45fc-37b9-4809-89b1-839d4e85765d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:50 np0005466031 nova_compute[235803]: 2025-10-02 12:27:50.909 2 WARNING nova.compute.manager [req-ef35b0c1-124e-4a3b-859d-01a00656493b req-410c48b2-d5c3-4251-94d2-775d497efa08 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-c31a45fc-37b9-4809-89b1-839d4e85765d for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:27:51 np0005466031 nova_compute[235803]: 2025-10-02 12:27:51.266 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408071.2654786, 061e91f3-8228-4afb-9420-d0764c3dd7ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:27:51 np0005466031 nova_compute[235803]: 2025-10-02 12:27:51.266 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] VM Started (Lifecycle Event)#033[00m
Oct  2 08:27:51 np0005466031 nova_compute[235803]: 2025-10-02 12:27:51.423 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:27:51 np0005466031 nova_compute[235803]: 2025-10-02 12:27:51.428 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408071.2660296, 061e91f3-8228-4afb-9420-d0764c3dd7ee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:27:51 np0005466031 nova_compute[235803]: 2025-10-02 12:27:51.428 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:27:51 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct  2 08:27:51 np0005466031 nova_compute[235803]: 2025-10-02 12:27:51.620 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:27:51 np0005466031 nova_compute[235803]: 2025-10-02 12:27:51.623 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:27:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:51.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:51 np0005466031 nova_compute[235803]: 2025-10-02 12:27:51.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:52 np0005466031 nova_compute[235803]: 2025-10-02 12:27:52.011 2 DEBUG nova.compute.manager [req-27191bb1-95bd-4f69-9c43-3f3f83dbe3aa req-978de93f-3ad6-4eff-9245-c00e4ccedb81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-006d1393-a12a-44ea-9d1c-ba017fde9058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:52 np0005466031 nova_compute[235803]: 2025-10-02 12:27:52.011 2 DEBUG oslo_concurrency.lockutils [req-27191bb1-95bd-4f69-9c43-3f3f83dbe3aa req-978de93f-3ad6-4eff-9245-c00e4ccedb81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:52 np0005466031 nova_compute[235803]: 2025-10-02 12:27:52.011 2 DEBUG oslo_concurrency.lockutils [req-27191bb1-95bd-4f69-9c43-3f3f83dbe3aa req-978de93f-3ad6-4eff-9245-c00e4ccedb81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:52 np0005466031 nova_compute[235803]: 2025-10-02 12:27:52.011 2 DEBUG oslo_concurrency.lockutils [req-27191bb1-95bd-4f69-9c43-3f3f83dbe3aa req-978de93f-3ad6-4eff-9245-c00e4ccedb81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:52 np0005466031 nova_compute[235803]: 2025-10-02 12:27:52.012 2 DEBUG nova.compute.manager [req-27191bb1-95bd-4f69-9c43-3f3f83dbe3aa req-978de93f-3ad6-4eff-9245-c00e4ccedb81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-plugged-006d1393-a12a-44ea-9d1c-ba017fde9058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:52 np0005466031 nova_compute[235803]: 2025-10-02 12:27:52.012 2 WARNING nova.compute.manager [req-27191bb1-95bd-4f69-9c43-3f3f83dbe3aa req-978de93f-3ad6-4eff-9245-c00e4ccedb81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-006d1393-a12a-44ea-9d1c-ba017fde9058 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:27:52 np0005466031 nova_compute[235803]: 2025-10-02 12:27:52.012 2 DEBUG nova.compute.manager [req-27191bb1-95bd-4f69-9c43-3f3f83dbe3aa req-978de93f-3ad6-4eff-9245-c00e4ccedb81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-d1b1a282-3a38-454d-bc99-885b75bac9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:52 np0005466031 nova_compute[235803]: 2025-10-02 12:27:52.012 2 DEBUG oslo_concurrency.lockutils [req-27191bb1-95bd-4f69-9c43-3f3f83dbe3aa req-978de93f-3ad6-4eff-9245-c00e4ccedb81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:52 np0005466031 nova_compute[235803]: 2025-10-02 12:27:52.012 2 DEBUG oslo_concurrency.lockutils [req-27191bb1-95bd-4f69-9c43-3f3f83dbe3aa req-978de93f-3ad6-4eff-9245-c00e4ccedb81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:52 np0005466031 nova_compute[235803]: 2025-10-02 12:27:52.012 2 DEBUG oslo_concurrency.lockutils [req-27191bb1-95bd-4f69-9c43-3f3f83dbe3aa req-978de93f-3ad6-4eff-9245-c00e4ccedb81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:52 np0005466031 nova_compute[235803]: 2025-10-02 12:27:52.014 2 DEBUG nova.compute.manager [req-27191bb1-95bd-4f69-9c43-3f3f83dbe3aa req-978de93f-3ad6-4eff-9245-c00e4ccedb81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-unplugged-d1b1a282-3a38-454d-bc99-885b75bac9cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:52 np0005466031 nova_compute[235803]: 2025-10-02 12:27:52.014 2 DEBUG nova.compute.manager [req-27191bb1-95bd-4f69-9c43-3f3f83dbe3aa req-978de93f-3ad6-4eff-9245-c00e4ccedb81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-d1b1a282-3a38-454d-bc99-885b75bac9cc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:27:52 np0005466031 nova_compute[235803]: 2025-10-02 12:27:52.054 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:27:52 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:27:52 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:27:52 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:27:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:27:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:52.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:27:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:27:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:27:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:27:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:27:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:27:53 np0005466031 nova_compute[235803]: 2025-10-02 12:27:53.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:53.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:54 np0005466031 nova_compute[235803]: 2025-10-02 12:27:54.466 2 DEBUG nova.compute.manager [req-866c697f-71d9-4e34-b52d-b51c6e8ee9c8 req-262a59b7-c0cf-440b-a89f-e98ab98d0869 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-d1b1a282-3a38-454d-bc99-885b75bac9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:54 np0005466031 nova_compute[235803]: 2025-10-02 12:27:54.467 2 DEBUG oslo_concurrency.lockutils [req-866c697f-71d9-4e34-b52d-b51c6e8ee9c8 req-262a59b7-c0cf-440b-a89f-e98ab98d0869 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:54 np0005466031 nova_compute[235803]: 2025-10-02 12:27:54.467 2 DEBUG oslo_concurrency.lockutils [req-866c697f-71d9-4e34-b52d-b51c6e8ee9c8 req-262a59b7-c0cf-440b-a89f-e98ab98d0869 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:54 np0005466031 nova_compute[235803]: 2025-10-02 12:27:54.467 2 DEBUG oslo_concurrency.lockutils [req-866c697f-71d9-4e34-b52d-b51c6e8ee9c8 req-262a59b7-c0cf-440b-a89f-e98ab98d0869 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:54 np0005466031 nova_compute[235803]: 2025-10-02 12:27:54.467 2 DEBUG nova.compute.manager [req-866c697f-71d9-4e34-b52d-b51c6e8ee9c8 req-262a59b7-c0cf-440b-a89f-e98ab98d0869 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-plugged-d1b1a282-3a38-454d-bc99-885b75bac9cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:54 np0005466031 nova_compute[235803]: 2025-10-02 12:27:54.467 2 WARNING nova.compute.manager [req-866c697f-71d9-4e34-b52d-b51c6e8ee9c8 req-262a59b7-c0cf-440b-a89f-e98ab98d0869 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-d1b1a282-3a38-454d-bc99-885b75bac9cc for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:27:54 np0005466031 nova_compute[235803]: 2025-10-02 12:27:54.467 2 DEBUG nova.compute.manager [req-866c697f-71d9-4e34-b52d-b51c6e8ee9c8 req-262a59b7-c0cf-440b-a89f-e98ab98d0869 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-a619a50d-dbe2-4780-a273-9b1db89a98f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:54 np0005466031 nova_compute[235803]: 2025-10-02 12:27:54.468 2 DEBUG oslo_concurrency.lockutils [req-866c697f-71d9-4e34-b52d-b51c6e8ee9c8 req-262a59b7-c0cf-440b-a89f-e98ab98d0869 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:54 np0005466031 nova_compute[235803]: 2025-10-02 12:27:54.468 2 DEBUG oslo_concurrency.lockutils [req-866c697f-71d9-4e34-b52d-b51c6e8ee9c8 req-262a59b7-c0cf-440b-a89f-e98ab98d0869 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:54 np0005466031 nova_compute[235803]: 2025-10-02 12:27:54.468 2 DEBUG oslo_concurrency.lockutils [req-866c697f-71d9-4e34-b52d-b51c6e8ee9c8 req-262a59b7-c0cf-440b-a89f-e98ab98d0869 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:54 np0005466031 nova_compute[235803]: 2025-10-02 12:27:54.468 2 DEBUG nova.compute.manager [req-866c697f-71d9-4e34-b52d-b51c6e8ee9c8 req-262a59b7-c0cf-440b-a89f-e98ab98d0869 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-unplugged-a619a50d-dbe2-4780-a273-9b1db89a98f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:54 np0005466031 nova_compute[235803]: 2025-10-02 12:27:54.468 2 DEBUG nova.compute.manager [req-866c697f-71d9-4e34-b52d-b51c6e8ee9c8 req-262a59b7-c0cf-440b-a89f-e98ab98d0869 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-unplugged-a619a50d-dbe2-4780-a273-9b1db89a98f7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:27:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:27:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:54.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:27:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:55.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:56.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.743 2 DEBUG nova.compute.manager [req-e66c8044-6643-438d-ac33-e58ac7fb0806 req-243e1efa-ee65-49c2-925c-29f483c19a18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-plugged-a619a50d-dbe2-4780-a273-9b1db89a98f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.743 2 DEBUG oslo_concurrency.lockutils [req-e66c8044-6643-438d-ac33-e58ac7fb0806 req-243e1efa-ee65-49c2-925c-29f483c19a18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.744 2 DEBUG oslo_concurrency.lockutils [req-e66c8044-6643-438d-ac33-e58ac7fb0806 req-243e1efa-ee65-49c2-925c-29f483c19a18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.744 2 DEBUG oslo_concurrency.lockutils [req-e66c8044-6643-438d-ac33-e58ac7fb0806 req-243e1efa-ee65-49c2-925c-29f483c19a18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.744 2 DEBUG nova.compute.manager [req-e66c8044-6643-438d-ac33-e58ac7fb0806 req-243e1efa-ee65-49c2-925c-29f483c19a18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] No waiting events found dispatching network-vif-plugged-a619a50d-dbe2-4780-a273-9b1db89a98f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.744 2 WARNING nova.compute.manager [req-e66c8044-6643-438d-ac33-e58ac7fb0806 req-243e1efa-ee65-49c2-925c-29f483c19a18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received unexpected event network-vif-plugged-a619a50d-dbe2-4780-a273-9b1db89a98f7 for instance with vm_state active and task_state deleting.
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.744 2 DEBUG nova.compute.manager [req-e66c8044-6643-438d-ac33-e58ac7fb0806 req-243e1efa-ee65-49c2-925c-29f483c19a18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Received event network-vif-plugged-7271c02a-a19f-43d0-8351-bfa41c3af3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.745 2 DEBUG oslo_concurrency.lockutils [req-e66c8044-6643-438d-ac33-e58ac7fb0806 req-243e1efa-ee65-49c2-925c-29f483c19a18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.745 2 DEBUG oslo_concurrency.lockutils [req-e66c8044-6643-438d-ac33-e58ac7fb0806 req-243e1efa-ee65-49c2-925c-29f483c19a18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.745 2 DEBUG oslo_concurrency.lockutils [req-e66c8044-6643-438d-ac33-e58ac7fb0806 req-243e1efa-ee65-49c2-925c-29f483c19a18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.745 2 DEBUG nova.compute.manager [req-e66c8044-6643-438d-ac33-e58ac7fb0806 req-243e1efa-ee65-49c2-925c-29f483c19a18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Processing event network-vif-plugged-7271c02a-a19f-43d0-8351-bfa41c3af3e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.746 2 DEBUG nova.compute.manager [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.750 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408076.7503536, 061e91f3-8228-4afb-9420-d0764c3dd7ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.751 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] VM Resumed (Lifecycle Event)
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.753 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.757 2 INFO nova.virt.libvirt.driver [-] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Instance spawned successfully.
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.757 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.872 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:27:56 np0005466031 nova_compute[235803]: 2025-10-02 12:27:56.876 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:27:57 np0005466031 nova_compute[235803]: 2025-10-02 12:27:57.201 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:27:57 np0005466031 nova_compute[235803]: 2025-10-02 12:27:57.202 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:27:57 np0005466031 nova_compute[235803]: 2025-10-02 12:27:57.203 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:27:57 np0005466031 nova_compute[235803]: 2025-10-02 12:27:57.203 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:27:57 np0005466031 nova_compute[235803]: 2025-10-02 12:27:57.204 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:27:57 np0005466031 nova_compute[235803]: 2025-10-02 12:27:57.204 2 DEBUG nova.virt.libvirt.driver [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:27:57 np0005466031 nova_compute[235803]: 2025-10-02 12:27:57.269 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:27:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:57.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:57 np0005466031 nova_compute[235803]: 2025-10-02 12:27:57.836 2 INFO nova.compute.manager [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Took 24.30 seconds to spawn the instance on the hypervisor.
Oct  2 08:27:57 np0005466031 nova_compute[235803]: 2025-10-02 12:27:57.837 2 DEBUG nova.compute.manager [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:27:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:27:57.954 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:27:58 np0005466031 nova_compute[235803]: 2025-10-02 12:27:58.087 2 INFO nova.compute.manager [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Took 26.24 seconds to build instance.
Oct  2 08:27:58 np0005466031 nova_compute[235803]: 2025-10-02 12:27:58.126 2 DEBUG oslo_concurrency.lockutils [None req-6daeafe2-68eb-48b8-a159-79c46dfe5cfe e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 26.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:27:58 np0005466031 nova_compute[235803]: 2025-10-02 12:27:58.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:58.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:58.999 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Received event network-vif-plugged-7271c02a-a19f-43d0-8351-bfa41c3af3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:58.999 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:58.999 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:58.999 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:58.999 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] No waiting events found dispatching network-vif-plugged-7271c02a-a19f-43d0-8351-bfa41c3af3e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:59.000 2 WARNING nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Received unexpected event network-vif-plugged-7271c02a-a19f-43d0-8351-bfa41c3af3e4 for instance with vm_state active and task_state None.
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:59.000 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-deleted-22f1362c-d698-4f08-b8a3-4a4f609ef2b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:59.000 2 INFO nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Neutron deleted interface 22f1362c-d698-4f08-b8a3-4a4f609ef2b5; detaching it from the instance and deleting it from the info cache
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:59.000 2 DEBUG nova.network.neutron [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [{"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "address": "fa:16:3e:d7:7d:47", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1b1a282-3a", "ovs_interfaceid": "d1b1a282-3a38-454d-bc99-885b75bac9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "address": "fa:16:3e:fa:c6:b6", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa619a50d-db", "ovs_interfaceid": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:59.159 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Detach interface failed, port_id=22f1362c-d698-4f08-b8a3-4a4f609ef2b5, reason: Instance 776370c1-1213-4676-b85e-ce1c0491afc6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:59.160 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-deleted-d1b1a282-3a38-454d-bc99-885b75bac9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:59.160 2 INFO nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Neutron deleted interface d1b1a282-3a38-454d-bc99-885b75bac9cc; detaching it from the instance and deleting it from the info cache
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:59.160 2 DEBUG nova.network.neutron [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [{"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "address": "fa:16:3e:fa:c6:b6", "network": {"id": "b6de4fd3-3bc2-47d6-8842-1ef0515c43e0", "bridge": "br-int", "label": "tempest-device-tagging-net2-1303674730", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa619a50d-db", "ovs_interfaceid": "a619a50d-dbe2-4780-a273-9b1db89a98f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:27:59 np0005466031 nova_compute[235803]: 2025-10-02 12:27:59.363 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Detach interface failed, port_id=d1b1a282-3a38-454d-bc99-885b75bac9cc, reason: Instance 776370c1-1213-4676-b85e-ce1c0491afc6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:27:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:27:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:59.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:00.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:00 np0005466031 nova_compute[235803]: 2025-10-02 12:28:00.828 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408065.8243425, 776370c1-1213-4676-b85e-ce1c0491afc6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:00 np0005466031 nova_compute[235803]: 2025-10-02 12:28:00.828 2 INFO nova.compute.manager [-] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:28:00 np0005466031 nova_compute[235803]: 2025-10-02 12:28:00.892 2 DEBUG nova.compute.manager [None req-c2b80004-55d4-4ad6-af2c-4d944d2da6de - - - - - -] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:01 np0005466031 nova_compute[235803]: 2025-10-02 12:28:01.203 2 DEBUG nova.compute.manager [req-5b094a41-0b59-4317-8b80-279738ff5b56 req-ebee1899-7984-47c1-b55d-c7a29f6ff0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-deleted-a619a50d-dbe2-4780-a273-9b1db89a98f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:01 np0005466031 nova_compute[235803]: 2025-10-02 12:28:01.203 2 INFO nova.compute.manager [req-5b094a41-0b59-4317-8b80-279738ff5b56 req-ebee1899-7984-47c1-b55d-c7a29f6ff0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Neutron deleted interface a619a50d-dbe2-4780-a273-9b1db89a98f7; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:28:01 np0005466031 nova_compute[235803]: 2025-10-02 12:28:01.203 2 DEBUG nova.network.neutron [req-5b094a41-0b59-4317-8b80-279738ff5b56 req-ebee1899-7984-47c1-b55d-c7a29f6ff0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [{"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "006d1393-a12a-44ea-9d1c-ba017fde9058", "address": "fa:16:3e:e6:bc:17", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap006d1393-a1", "ovs_interfaceid": "006d1393-a12a-44ea-9d1c-ba017fde9058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:01 np0005466031 nova_compute[235803]: 2025-10-02 12:28:01.698 2 DEBUG nova.compute.manager [req-5b094a41-0b59-4317-8b80-279738ff5b56 req-ebee1899-7984-47c1-b55d-c7a29f6ff0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Detach interface failed, port_id=a619a50d-dbe2-4780-a273-9b1db89a98f7, reason: Instance 776370c1-1213-4676-b85e-ce1c0491afc6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:28:01 np0005466031 nova_compute[235803]: 2025-10-02 12:28:01.698 2 DEBUG nova.compute.manager [req-5b094a41-0b59-4317-8b80-279738ff5b56 req-ebee1899-7984-47c1-b55d-c7a29f6ff0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-deleted-006d1393-a12a-44ea-9d1c-ba017fde9058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:01 np0005466031 nova_compute[235803]: 2025-10-02 12:28:01.698 2 INFO nova.compute.manager [req-5b094a41-0b59-4317-8b80-279738ff5b56 req-ebee1899-7984-47c1-b55d-c7a29f6ff0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Neutron deleted interface 006d1393-a12a-44ea-9d1c-ba017fde9058; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:28:01 np0005466031 nova_compute[235803]: 2025-10-02 12:28:01.698 2 DEBUG nova.network.neutron [req-5b094a41-0b59-4317-8b80-279738ff5b56 req-ebee1899-7984-47c1-b55d-c7a29f6ff0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [{"id": "c31a45fc-37b9-4809-89b1-839d4e85765d", "address": "fa:16:3e:94:a5:91", "network": {"id": "6d1afc59-3ec5-4518-a68b-f8ab041976c5", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1343529431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc31a45fc-37", "ovs_interfaceid": "c31a45fc-37b9-4809-89b1-839d4e85765d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dd45e845-2479-49a6-a571-33984e911f3c", "address": "fa:16:3e:dc:4a:42", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd45e845-24", "ovs_interfaceid": "dd45e845-2479-49a6-a571-33984e911f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "address": "fa:16:3e:7a:c5:38", "network": {"id": "fdb7aec6-8fa5-4966-aee1-bf0ccd52182b", "bridge": "br-int", "label": "tempest-device-tagging-net1-757537172", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d96bae071ef4595bd93c956dd20796c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap705ea63d-4c", "ovs_interfaceid": "705ea63d-4c9b-450a-ac81-c5bf6ef0c274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:01.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:01 np0005466031 nova_compute[235803]: 2025-10-02 12:28:01.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:02 np0005466031 nova_compute[235803]: 2025-10-02 12:28:02.080 2 DEBUG nova.compute.manager [req-5b094a41-0b59-4317-8b80-279738ff5b56 req-ebee1899-7984-47c1-b55d-c7a29f6ff0e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Detach interface failed, port_id=006d1393-a12a-44ea-9d1c-ba017fde9058, reason: Instance 776370c1-1213-4676-b85e-ce1c0491afc6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:28:02 np0005466031 nova_compute[235803]: 2025-10-02 12:28:02.254 2 DEBUG nova.network.neutron [-] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:02 np0005466031 nova_compute[235803]: 2025-10-02 12:28:02.566 2 INFO nova.compute.manager [-] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Took 13.44 seconds to deallocate network for instance.#033[00m
Oct  2 08:28:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:02.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:03 np0005466031 nova_compute[235803]: 2025-10-02 12:28:03.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:03.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:04 np0005466031 nova_compute[235803]: 2025-10-02 12:28:04.312 2 DEBUG nova.compute.manager [req-97a49d1f-1411-44eb-b5dd-48c0c9a667b1 req-42802e40-5020-4afc-9393-2387a760b49e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Received event network-vif-deleted-c31a45fc-37b9-4809-89b1-839d4e85765d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:04 np0005466031 nova_compute[235803]: 2025-10-02 12:28:04.312 2 DEBUG nova.compute.manager [req-97a49d1f-1411-44eb-b5dd-48c0c9a667b1 req-42802e40-5020-4afc-9393-2387a760b49e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Received event network-changed-7271c02a-a19f-43d0-8351-bfa41c3af3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:04 np0005466031 nova_compute[235803]: 2025-10-02 12:28:04.312 2 DEBUG nova.compute.manager [req-97a49d1f-1411-44eb-b5dd-48c0c9a667b1 req-42802e40-5020-4afc-9393-2387a760b49e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Refreshing instance network info cache due to event network-changed-7271c02a-a19f-43d0-8351-bfa41c3af3e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:28:04 np0005466031 nova_compute[235803]: 2025-10-02 12:28:04.313 2 DEBUG oslo_concurrency.lockutils [req-97a49d1f-1411-44eb-b5dd-48c0c9a667b1 req-42802e40-5020-4afc-9393-2387a760b49e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-061e91f3-8228-4afb-9420-d0764c3dd7ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:04 np0005466031 nova_compute[235803]: 2025-10-02 12:28:04.313 2 DEBUG oslo_concurrency.lockutils [req-97a49d1f-1411-44eb-b5dd-48c0c9a667b1 req-42802e40-5020-4afc-9393-2387a760b49e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-061e91f3-8228-4afb-9420-d0764c3dd7ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:04 np0005466031 nova_compute[235803]: 2025-10-02 12:28:04.313 2 DEBUG nova.network.neutron [req-97a49d1f-1411-44eb-b5dd-48c0c9a667b1 req-42802e40-5020-4afc-9393-2387a760b49e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Refreshing network info cache for port 7271c02a-a19f-43d0-8351-bfa41c3af3e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:28:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:28:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:04.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:28:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:28:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1435707135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:28:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:28:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1435707135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:28:05 np0005466031 nova_compute[235803]: 2025-10-02 12:28:05.313 2 INFO nova.compute.manager [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] [instance: 776370c1-1213-4676-b85e-ce1c0491afc6] Took 2.75 seconds to detach 3 volumes for instance.#033[00m
Oct  2 08:28:05 np0005466031 nova_compute[235803]: 2025-10-02 12:28:05.734 2 DEBUG oslo_concurrency.lockutils [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:05 np0005466031 nova_compute[235803]: 2025-10-02 12:28:05.734 2 DEBUG oslo_concurrency.lockutils [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:05.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:05 np0005466031 nova_compute[235803]: 2025-10-02 12:28:05.826 2 DEBUG oslo_concurrency.processutils [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:06 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1381430579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:06 np0005466031 nova_compute[235803]: 2025-10-02 12:28:06.256 2 DEBUG oslo_concurrency.processutils [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:06 np0005466031 nova_compute[235803]: 2025-10-02 12:28:06.262 2 DEBUG nova.compute.provider_tree [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:28:06 np0005466031 nova_compute[235803]: 2025-10-02 12:28:06.306 2 DEBUG nova.scheduler.client.report [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:28:06 np0005466031 nova_compute[235803]: 2025-10-02 12:28:06.326 2 DEBUG nova.network.neutron [req-97a49d1f-1411-44eb-b5dd-48c0c9a667b1 req-42802e40-5020-4afc-9393-2387a760b49e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Updated VIF entry in instance network info cache for port 7271c02a-a19f-43d0-8351-bfa41c3af3e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:28:06 np0005466031 nova_compute[235803]: 2025-10-02 12:28:06.327 2 DEBUG nova.network.neutron [req-97a49d1f-1411-44eb-b5dd-48c0c9a667b1 req-42802e40-5020-4afc-9393-2387a760b49e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Updating instance_info_cache with network_info: [{"id": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "address": "fa:16:3e:0d:5b:05", "network": {"id": "9873bab7-ad9a-4e38-adba-35d281231cb7", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1423243976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7572f2170094fb7a5d6e212abf9235d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7271c02a-a1", "ovs_interfaceid": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:28:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:28:06 np0005466031 nova_compute[235803]: 2025-10-02 12:28:06.386 2 DEBUG oslo_concurrency.lockutils [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:06 np0005466031 nova_compute[235803]: 2025-10-02 12:28:06.443 2 DEBUG oslo_concurrency.lockutils [req-97a49d1f-1411-44eb-b5dd-48c0c9a667b1 req-42802e40-5020-4afc-9393-2387a760b49e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-061e91f3-8228-4afb-9420-d0764c3dd7ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:28:06 np0005466031 nova_compute[235803]: 2025-10-02 12:28:06.485 2 INFO nova.scheduler.client.report [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Deleted allocations for instance 776370c1-1213-4676-b85e-ce1c0491afc6#033[00m
Oct  2 08:28:06 np0005466031 nova_compute[235803]: 2025-10-02 12:28:06.613 2 DEBUG oslo_concurrency.lockutils [None req-81177f97-c004-4625-ac06-9ac590082070 94e0e2f26a1648368032ab7e6732655c 6d96bae071ef4595bd93c956dd20796c - - default default] Lock "776370c1-1213-4676-b85e-ce1c0491afc6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 24.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:06.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:06 np0005466031 nova_compute[235803]: 2025-10-02 12:28:06.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:07.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:08 np0005466031 nova_compute[235803]: 2025-10-02 12:28:08.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:08.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:09.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:09 np0005466031 podman[263159]: 2025-10-02 12:28:09.830370775 +0000 UTC m=+0.054866522 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:28:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:09 np0005466031 podman[263160]: 2025-10-02 12:28:09.862284415 +0000 UTC m=+0.086657128 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 08:28:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:28:10Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:5b:05 10.100.0.8
Oct  2 08:28:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:28:10Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:5b:05 10.100.0.8
Oct  2 08:28:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:10.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:11.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:11 np0005466031 nova_compute[235803]: 2025-10-02 12:28:11.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:12.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:13 np0005466031 nova_compute[235803]: 2025-10-02 12:28:13.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:13.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:28:14Z|00207|binding|INFO|Releasing lport bf70abfc-9300-43b1-849f-3ce1505e3449 from this chassis (sb_readonly=0)
Oct  2 08:28:14 np0005466031 nova_compute[235803]: 2025-10-02 12:28:14.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:14.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:15.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:16.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:16 np0005466031 nova_compute[235803]: 2025-10-02 12:28:16.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:17.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.114 2 DEBUG oslo_concurrency.lockutils [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Acquiring lock "061e91f3-8228-4afb-9420-d0764c3dd7ee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.115 2 DEBUG oslo_concurrency.lockutils [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.115 2 DEBUG oslo_concurrency.lockutils [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Acquiring lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.115 2 DEBUG oslo_concurrency.lockutils [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.115 2 DEBUG oslo_concurrency.lockutils [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.116 2 INFO nova.compute.manager [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Terminating instance#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.117 2 DEBUG nova.compute.manager [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005466031 kernel: tap7271c02a-a1 (unregistering): left promiscuous mode
Oct  2 08:28:18 np0005466031 NetworkManager[44907]: <info>  [1759408098.4402] device (tap7271c02a-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:28:18Z|00208|binding|INFO|Releasing lport 7271c02a-a19f-43d0-8351-bfa41c3af3e4 from this chassis (sb_readonly=0)
Oct  2 08:28:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:28:18Z|00209|binding|INFO|Setting lport 7271c02a-a19f-43d0-8351-bfa41c3af3e4 down in Southbound
Oct  2 08:28:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:28:18Z|00210|binding|INFO|Removing iface tap7271c02a-a1 ovn-installed in OVS
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.458 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:5b:05 10.100.0.8'], port_security=['fa:16:3e:0d:5b:05 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '061e91f3-8228-4afb-9420-d0764c3dd7ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9873bab7-ad9a-4e38-adba-35d281231cb7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a7572f2170094fb7a5d6e212abf9235d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4c1c641d-e9c5-45aa-9e44-637091ff36cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5a9f796-753c-4987-acc5-3f078b337893, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7271c02a-a19f-43d0-8351-bfa41c3af3e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.459 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7271c02a-a19f-43d0-8351-bfa41c3af3e4 in datapath 9873bab7-ad9a-4e38-adba-35d281231cb7 unbound from our chassis#033[00m
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.461 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9873bab7-ad9a-4e38-adba-35d281231cb7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.463 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[79816227-b2a1-4d75-8f47-18bb195f4eb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.463 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7 namespace which is not needed anymore#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005466031 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Oct  2 08:28:18 np0005466031 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000003d.scope: Consumed 13.617s CPU time.
Oct  2 08:28:18 np0005466031 systemd-machined[192227]: Machine qemu-23-instance-0000003d terminated.
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005466031 podman[263256]: 2025-10-02 12:28:18.54786584 +0000 UTC m=+0.078213426 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.553 2 INFO nova.virt.libvirt.driver [-] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Instance destroyed successfully.#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.553 2 DEBUG nova.objects.instance [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lazy-loading 'resources' on Instance uuid 061e91f3-8228-4afb-9420-d0764c3dd7ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:18 np0005466031 podman[263259]: 2025-10-02 12:28:18.566527978 +0000 UTC m=+0.097112920 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.580 2 DEBUG nova.virt.libvirt.vif [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:27:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=61,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrSgPCRzMaXI2iBfGSc2TSHS4ZD2W5NzuZOttkXoqM7HXstn5uSaOt2OGxui+rdtS+XLvMX4iV2n3rrcJ5OzpPvW+RvlFzMZnnpDF1H/t3P+NdILCshxZBm5J4n62rBiw==',key_name='tempest-keypair-226967064',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:27:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a7572f2170094fb7a5d6e212abf9235d',ramdisk_id='',reservation_id='r-j1h6xb66',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestFqdnHostnames-830332672',owner_user_name='tempest-ServersTestFqdnHostnames-830332672-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:27:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e24ea6fbe7394bd8b4b06dd246587041',uuid=061e91f3-8228-4afb-9420-d0764c3dd7ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "address": "fa:16:3e:0d:5b:05", "network": {"id": "9873bab7-ad9a-4e38-adba-35d281231cb7", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1423243976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7572f2170094fb7a5d6e212abf9235d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7271c02a-a1", "ovs_interfaceid": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.580 2 DEBUG nova.network.os_vif_util [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Converting VIF {"id": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "address": "fa:16:3e:0d:5b:05", "network": {"id": "9873bab7-ad9a-4e38-adba-35d281231cb7", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1423243976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7572f2170094fb7a5d6e212abf9235d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7271c02a-a1", "ovs_interfaceid": "7271c02a-a19f-43d0-8351-bfa41c3af3e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.581 2 DEBUG nova.network.os_vif_util [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:5b:05,bridge_name='br-int',has_traffic_filtering=True,id=7271c02a-a19f-43d0-8351-bfa41c3af3e4,network=Network(9873bab7-ad9a-4e38-adba-35d281231cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7271c02a-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.581 2 DEBUG os_vif [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:5b:05,bridge_name='br-int',has_traffic_filtering=True,id=7271c02a-a19f-43d0-8351-bfa41c3af3e4,network=Network(9873bab7-ad9a-4e38-adba-35d281231cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7271c02a-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:28:18 np0005466031 neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7[262755]: [NOTICE]   (262759) : haproxy version is 2.8.14-c23fe91
Oct  2 08:28:18 np0005466031 neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7[262755]: [NOTICE]   (262759) : path to executable is /usr/sbin/haproxy
Oct  2 08:28:18 np0005466031 neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7[262755]: [WARNING]  (262759) : Exiting Master process...
Oct  2 08:28:18 np0005466031 neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7[262755]: [ALERT]    (262759) : Current worker (262761) exited with code 143 (Terminated)
Oct  2 08:28:18 np0005466031 neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7[262755]: [WARNING]  (262759) : All workers exited. Exiting... (0)
Oct  2 08:28:18 np0005466031 systemd[1]: libpod-091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde.scope: Deactivated successfully.
Oct  2 08:28:18 np0005466031 podman[263326]: 2025-10-02 12:28:18.622079779 +0000 UTC m=+0.049332313 container died 091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:28:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:18.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:18 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde-userdata-shm.mount: Deactivated successfully.
Oct  2 08:28:18 np0005466031 systemd[1]: var-lib-containers-storage-overlay-41c69bbe3e3c65b31a94729ec9208b910bb28bfc553fa5844838424196d8a283-merged.mount: Deactivated successfully.
Oct  2 08:28:18 np0005466031 podman[263326]: 2025-10-02 12:28:18.667025044 +0000 UTC m=+0.094277578 container cleanup 091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:28:18 np0005466031 systemd[1]: libpod-conmon-091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde.scope: Deactivated successfully.
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7271c02a-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.715 2 INFO os_vif [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:5b:05,bridge_name='br-int',has_traffic_filtering=True,id=7271c02a-a19f-43d0-8351-bfa41c3af3e4,network=Network(9873bab7-ad9a-4e38-adba-35d281231cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7271c02a-a1')#033[00m
Oct  2 08:28:18 np0005466031 podman[263355]: 2025-10-02 12:28:18.739153793 +0000 UTC m=+0.050371233 container remove 091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.744 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a78f10d5-2f12-4b65-9494-84cee5e42b23]: (4, ('Thu Oct  2 12:28:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7 (091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde)\n091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde\nThu Oct  2 12:28:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7 (091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde)\n091e68c25a81446b2538f98a9aabe966ca7d3a928480423e914ba1325222afde\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.745 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0a149de7-f09a-4092-baaf-d28509447990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.746 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9873bab7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:18 np0005466031 kernel: tap9873bab7-a0: left promiscuous mode
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.776 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b6121c56-48f7-49d4-9405-4c5637083142]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.786 2 DEBUG nova.compute.manager [req-b65ffae8-440a-46e5-8c20-ce70982c018b req-3feec095-6075-408e-889b-dd2b05b4fb43 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Received event network-vif-unplugged-7271c02a-a19f-43d0-8351-bfa41c3af3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.786 2 DEBUG oslo_concurrency.lockutils [req-b65ffae8-440a-46e5-8c20-ce70982c018b req-3feec095-6075-408e-889b-dd2b05b4fb43 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.790 2 DEBUG oslo_concurrency.lockutils [req-b65ffae8-440a-46e5-8c20-ce70982c018b req-3feec095-6075-408e-889b-dd2b05b4fb43 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.790 2 DEBUG oslo_concurrency.lockutils [req-b65ffae8-440a-46e5-8c20-ce70982c018b req-3feec095-6075-408e-889b-dd2b05b4fb43 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.790 2 DEBUG nova.compute.manager [req-b65ffae8-440a-46e5-8c20-ce70982c018b req-3feec095-6075-408e-889b-dd2b05b4fb43 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] No waiting events found dispatching network-vif-unplugged-7271c02a-a19f-43d0-8351-bfa41c3af3e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:28:18 np0005466031 nova_compute[235803]: 2025-10-02 12:28:18.790 2 DEBUG nova.compute.manager [req-b65ffae8-440a-46e5-8c20-ce70982c018b req-3feec095-6075-408e-889b-dd2b05b4fb43 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Received event network-vif-unplugged-7271c02a-a19f-43d0-8351-bfa41c3af3e4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.804 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4878e19c-2e12-4e15-bc39-afdba4b7e3b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.805 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[62913461-4615-4f06-92dd-8fdac146e032]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.820 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[59be4fa2-ff4a-4fb0-a928-74c66efa5adc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592478, 'reachable_time': 15565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263388, 'error': None, 'target': 'ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:18 np0005466031 systemd[1]: run-netns-ovnmeta\x2d9873bab7\x2dad9a\x2d4e38\x2dadba\x2d35d281231cb7.mount: Deactivated successfully.
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.823 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9873bab7-ad9a-4e38-adba-35d281231cb7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:28:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:18.823 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[e84ea6cf-ac46-4b40-98ae-ea3562f6d88f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:19.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:28:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4210674111' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:28:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:28:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4210674111' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:28:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:20.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.107 2 INFO nova.virt.libvirt.driver [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Deleting instance files /var/lib/nova/instances/061e91f3-8228-4afb-9420-d0764c3dd7ee_del#033[00m
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.108 2 INFO nova.virt.libvirt.driver [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Deletion of /var/lib/nova/instances/061e91f3-8228-4afb-9420-d0764c3dd7ee_del complete#033[00m
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.168 2 INFO nova.compute.manager [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Took 3.05 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.169 2 DEBUG oslo.service.loopingcall [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.169 2 DEBUG nova.compute.manager [-] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.169 2 DEBUG nova.network.neutron [-] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.380 2 DEBUG nova.compute.manager [req-2db9db64-fb76-4392-b448-0edf607238db req-0506037c-2898-4560-86aa-933376f9e55e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Received event network-vif-plugged-7271c02a-a19f-43d0-8351-bfa41c3af3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.381 2 DEBUG oslo_concurrency.lockutils [req-2db9db64-fb76-4392-b448-0edf607238db req-0506037c-2898-4560-86aa-933376f9e55e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.382 2 DEBUG oslo_concurrency.lockutils [req-2db9db64-fb76-4392-b448-0edf607238db req-0506037c-2898-4560-86aa-933376f9e55e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.382 2 DEBUG oslo_concurrency.lockutils [req-2db9db64-fb76-4392-b448-0edf607238db req-0506037c-2898-4560-86aa-933376f9e55e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.382 2 DEBUG nova.compute.manager [req-2db9db64-fb76-4392-b448-0edf607238db req-0506037c-2898-4560-86aa-933376f9e55e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] No waiting events found dispatching network-vif-plugged-7271c02a-a19f-43d0-8351-bfa41c3af3e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.383 2 WARNING nova.compute.manager [req-2db9db64-fb76-4392-b448-0edf607238db req-0506037c-2898-4560-86aa-933376f9e55e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Received unexpected event network-vif-plugged-7271c02a-a19f-43d0-8351-bfa41c3af3e4 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:28:21 np0005466031 nova_compute[235803]: 2025-10-02 12:28:21.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:21.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:22 np0005466031 nova_compute[235803]: 2025-10-02 12:28:22.279 2 DEBUG nova.network.neutron [-] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:22 np0005466031 nova_compute[235803]: 2025-10-02 12:28:22.297 2 INFO nova.compute.manager [-] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Took 1.13 seconds to deallocate network for instance.#033[00m
Oct  2 08:28:22 np0005466031 nova_compute[235803]: 2025-10-02 12:28:22.348 2 DEBUG oslo_concurrency.lockutils [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:22 np0005466031 nova_compute[235803]: 2025-10-02 12:28:22.348 2 DEBUG oslo_concurrency.lockutils [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:22 np0005466031 nova_compute[235803]: 2025-10-02 12:28:22.385 2 DEBUG nova.compute.manager [req-e44f6730-4eae-4964-b65a-d57ffdaaec4d req-c21908e1-67ea-413a-bb79-559f1da6c471 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Received event network-vif-deleted-7271c02a-a19f-43d0-8351-bfa41c3af3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:22 np0005466031 nova_compute[235803]: 2025-10-02 12:28:22.397 2 DEBUG oslo_concurrency.processutils [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:22.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3000373398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:22 np0005466031 nova_compute[235803]: 2025-10-02 12:28:22.839 2 DEBUG oslo_concurrency.processutils [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:22 np0005466031 nova_compute[235803]: 2025-10-02 12:28:22.848 2 DEBUG nova.compute.provider_tree [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:28:22 np0005466031 nova_compute[235803]: 2025-10-02 12:28:22.869 2 DEBUG nova.scheduler.client.report [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:28:22 np0005466031 nova_compute[235803]: 2025-10-02 12:28:22.903 2 DEBUG oslo_concurrency.lockutils [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:22 np0005466031 nova_compute[235803]: 2025-10-02 12:28:22.959 2 INFO nova.scheduler.client.report [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Deleted allocations for instance 061e91f3-8228-4afb-9420-d0764c3dd7ee#033[00m
Oct  2 08:28:23 np0005466031 nova_compute[235803]: 2025-10-02 12:28:23.034 2 DEBUG oslo_concurrency.lockutils [None req-41305299-9352-45a7-aae2-18578a111c8b e24ea6fbe7394bd8b4b06dd246587041 a7572f2170094fb7a5d6e212abf9235d - - default default] Lock "061e91f3-8228-4afb-9420-d0764c3dd7ee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:23 np0005466031 nova_compute[235803]: 2025-10-02 12:28:23.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:28:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4121041882' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:28:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:28:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4121041882' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:28:23 np0005466031 nova_compute[235803]: 2025-10-02 12:28:23.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:23.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:24.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:25.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:25.833 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:25.834 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:25.834 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:26.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:27.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:28 np0005466031 nova_compute[235803]: 2025-10-02 12:28:28.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:28:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:28.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:28:28 np0005466031 nova_compute[235803]: 2025-10-02 12:28:28.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:29.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e214 e214: 3 total, 3 up, 3 in
Oct  2 08:28:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:30.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:31.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:28:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:32.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:28:33 np0005466031 nova_compute[235803]: 2025-10-02 12:28:33.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:33 np0005466031 nova_compute[235803]: 2025-10-02 12:28:33.552 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408098.5507035, 061e91f3-8228-4afb-9420-d0764c3dd7ee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:33 np0005466031 nova_compute[235803]: 2025-10-02 12:28:33.552 2 INFO nova.compute.manager [-] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:28:33 np0005466031 nova_compute[235803]: 2025-10-02 12:28:33.574 2 DEBUG nova.compute.manager [None req-72bcf776-a673-4a87-a63f-722b7a9a9977 - - - - - -] [instance: 061e91f3-8228-4afb-9420-d0764c3dd7ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:33 np0005466031 nova_compute[235803]: 2025-10-02 12:28:33.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:33.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:34.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:35.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e215 e215: 3 total, 3 up, 3 in
Oct  2 08:28:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:36.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:37.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:38 np0005466031 nova_compute[235803]: 2025-10-02 12:28:38.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:38.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:38 np0005466031 nova_compute[235803]: 2025-10-02 12:28:38.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:39.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:40 np0005466031 podman[263475]: 2025-10-02 12:28:40.621680101 +0000 UTC m=+0.055002437 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 08:28:40 np0005466031 podman[263476]: 2025-10-02 12:28:40.646111545 +0000 UTC m=+0.078775912 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:28:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:40.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e216 e216: 3 total, 3 up, 3 in
Oct  2 08:28:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:41.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:42 np0005466031 nova_compute[235803]: 2025-10-02 12:28:42.632 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:42 np0005466031 nova_compute[235803]: 2025-10-02 12:28:42.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:42 np0005466031 nova_compute[235803]: 2025-10-02 12:28:42.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:42 np0005466031 nova_compute[235803]: 2025-10-02 12:28:42.661 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:42 np0005466031 nova_compute[235803]: 2025-10-02 12:28:42.662 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:42 np0005466031 nova_compute[235803]: 2025-10-02 12:28:42.662 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:42 np0005466031 nova_compute[235803]: 2025-10-02 12:28:42.662 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:28:42 np0005466031 nova_compute[235803]: 2025-10-02 12:28:42.663 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:42.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2909641701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.114 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.267 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.268 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4691MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.269 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.269 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.332 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.332 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.349 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1417505443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:43.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.797 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.804 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.830 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.856 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:28:43 np0005466031 nova_compute[235803]: 2025-10-02 12:28:43.856 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:44.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:45.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:46.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:46 np0005466031 nova_compute[235803]: 2025-10-02 12:28:46.857 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:46 np0005466031 nova_compute[235803]: 2025-10-02 12:28:46.857 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:28:46 np0005466031 nova_compute[235803]: 2025-10-02 12:28:46.858 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:28:46 np0005466031 nova_compute[235803]: 2025-10-02 12:28:46.958 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:28:46 np0005466031 nova_compute[235803]: 2025-10-02 12:28:46.958 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:46 np0005466031 nova_compute[235803]: 2025-10-02 12:28:46.959 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.164 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Acquiring lock "f03c0d03-67d1-4162-abbf-748eeda01512" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.165 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.181 2 DEBUG nova.compute.manager [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.265 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.266 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.271 2 DEBUG nova.virt.hardware [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.271 2 INFO nova.compute.claims [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.404 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:47.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1154819649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.841 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.847 2 DEBUG nova.compute.provider_tree [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.862 2 DEBUG nova.scheduler.client.report [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.887 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.888 2 DEBUG nova.compute.manager [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.947 2 DEBUG nova.compute.manager [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.948 2 DEBUG nova.network.neutron [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:28:47 np0005466031 nova_compute[235803]: 2025-10-02 12:28:47.994 2 INFO nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.021 2 DEBUG nova.compute.manager [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.061 2 DEBUG oslo_concurrency.lockutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "e985fe5c-e98d-4f5b-8985-61a156cde5f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.062 2 DEBUG oslo_concurrency.lockutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "e985fe5c-e98d-4f5b-8985-61a156cde5f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.117 2 DEBUG nova.compute.manager [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.178 2 DEBUG nova.compute.manager [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.180 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.180 2 INFO nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Creating image(s)#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.210 2 DEBUG nova.storage.rbd_utils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] rbd image f03c0d03-67d1-4162-abbf-748eeda01512_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.236 2 DEBUG nova.storage.rbd_utils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] rbd image f03c0d03-67d1-4162-abbf-748eeda01512_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.261 2 DEBUG nova.storage.rbd_utils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] rbd image f03c0d03-67d1-4162-abbf-748eeda01512_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.264 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.287 2 DEBUG nova.policy [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '61e6149575e848ffb09a0adcb1fc0829', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aee47e8927a149149f8b2da7f91e512d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.314 2 DEBUG oslo_concurrency.lockutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.315 2 DEBUG oslo_concurrency.lockutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.319 2 DEBUG nova.virt.hardware [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.319 2 INFO nova.compute.claims [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.327 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.328 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.328 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.328 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.354 2 DEBUG nova.storage.rbd_utils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] rbd image f03c0d03-67d1-4162-abbf-748eeda01512_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.357 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f03c0d03-67d1-4162-abbf-748eeda01512_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:48.381 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:28:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:48.383 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.497 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:48.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.846 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f03c0d03-67d1-4162-abbf-748eeda01512_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.931 2 DEBUG nova.storage.rbd_utils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] resizing rbd image f03c0d03-67d1-4162-abbf-748eeda01512_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:28:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:48 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/139225508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.973 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:48 np0005466031 nova_compute[235803]: 2025-10-02 12:28:48.982 2 DEBUG nova.compute.provider_tree [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.002 2 DEBUG nova.scheduler.client.report [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.049 2 DEBUG oslo_concurrency.lockutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.051 2 DEBUG nova.compute.manager [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.062 2 DEBUG nova.objects.instance [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lazy-loading 'migration_context' on Instance uuid f03c0d03-67d1-4162-abbf-748eeda01512 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.077 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.078 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Ensure instance console log exists: /var/lib/nova/instances/f03c0d03-67d1-4162-abbf-748eeda01512/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.078 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.078 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.079 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.104 2 DEBUG nova.compute.manager [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.105 2 DEBUG nova.network.neutron [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.124 2 INFO nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.143 2 DEBUG nova.compute.manager [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.226 2 DEBUG nova.compute.manager [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.227 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.228 2 INFO nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Creating image(s)#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.253 2 DEBUG nova.storage.rbd_utils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.282 2 DEBUG nova.storage.rbd_utils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.308 2 DEBUG nova.storage.rbd_utils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.312 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.371 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.372 2 DEBUG oslo_concurrency.lockutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.372 2 DEBUG oslo_concurrency.lockutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.372 2 DEBUG oslo_concurrency.lockutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.398 2 DEBUG nova.storage.rbd_utils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.402 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.431 2 DEBUG nova.network.neutron [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.432 2 DEBUG nova.compute.manager [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:28:49 np0005466031 podman[263871]: 2025-10-02 12:28:49.636795184 +0000 UTC m=+0.061620267 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:49 np0005466031 nova_compute[235803]: 2025-10-02 12:28:49.650 2 DEBUG nova.network.neutron [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Successfully created port: 7f4de623-85e2-4c78-9cfe-664d6515907f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:28:49 np0005466031 podman[263869]: 2025-10-02 12:28:49.653120024 +0000 UTC m=+0.080441289 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:28:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:49.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.528 2 DEBUG nova.network.neutron [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Successfully updated port: 7f4de623-85e2-4c78-9cfe-664d6515907f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.546 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Acquiring lock "refresh_cache-f03c0d03-67d1-4162-abbf-748eeda01512" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.547 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Acquired lock "refresh_cache-f03c0d03-67d1-4162-abbf-748eeda01512" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.547 2 DEBUG nova.network.neutron [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.621 2 DEBUG nova.compute.manager [req-5d827c7f-2145-4918-b01f-0f4a24e2223a req-6256ec38-6568-47b6-a13b-d2d60cfe47e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received event network-changed-7f4de623-85e2-4c78-9cfe-664d6515907f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.622 2 DEBUG nova.compute.manager [req-5d827c7f-2145-4918-b01f-0f4a24e2223a req-6256ec38-6568-47b6-a13b-d2d60cfe47e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Refreshing instance network info cache due to event network-changed-7f4de623-85e2-4c78-9cfe-664d6515907f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.622 2 DEBUG oslo_concurrency.lockutils [req-5d827c7f-2145-4918-b01f-0f4a24e2223a req-6256ec38-6568-47b6-a13b-d2d60cfe47e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f03c0d03-67d1-4162-abbf-748eeda01512" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.625 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.658 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.659 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:28:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:50.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.691 2 DEBUG nova.network.neutron [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.702 2 DEBUG nova.storage.rbd_utils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] resizing rbd image e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.822 2 DEBUG nova.objects.instance [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lazy-loading 'migration_context' on Instance uuid e985fe5c-e98d-4f5b-8985-61a156cde5f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.839 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.839 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Ensure instance console log exists: /var/lib/nova/instances/e985fe5c-e98d-4f5b-8985-61a156cde5f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.840 2 DEBUG oslo_concurrency.lockutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.840 2 DEBUG oslo_concurrency.lockutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.840 2 DEBUG oslo_concurrency.lockutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.842 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.846 2 WARNING nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.851 2 DEBUG nova.virt.libvirt.host [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.852 2 DEBUG nova.virt.libvirt.host [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.854 2 DEBUG nova.virt.libvirt.host [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.855 2 DEBUG nova.virt.libvirt.host [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.856 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.856 2 DEBUG nova.virt.hardware [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.857 2 DEBUG nova.virt.hardware [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.858 2 DEBUG nova.virt.hardware [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.858 2 DEBUG nova.virt.hardware [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.858 2 DEBUG nova.virt.hardware [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.858 2 DEBUG nova.virt.hardware [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.859 2 DEBUG nova.virt.hardware [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.859 2 DEBUG nova.virt.hardware [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.859 2 DEBUG nova.virt.hardware [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.859 2 DEBUG nova.virt.hardware [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.860 2 DEBUG nova.virt.hardware [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:28:50 np0005466031 nova_compute[235803]: 2025-10-02 12:28:50.862 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/988252884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:51.385 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:51 np0005466031 nova_compute[235803]: 2025-10-02 12:28:51.503 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:51 np0005466031 nova_compute[235803]: 2025-10-02 12:28:51.538 2 DEBUG nova.storage.rbd_utils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:51 np0005466031 nova_compute[235803]: 2025-10-02 12:28:51.544 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:51.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:51 np0005466031 nova_compute[235803]: 2025-10-02 12:28:51.964 2 DEBUG nova.network.neutron [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Updating instance_info_cache with network_info: [{"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.017 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Releasing lock "refresh_cache-f03c0d03-67d1-4162-abbf-748eeda01512" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.018 2 DEBUG nova.compute.manager [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Instance network_info: |[{"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.018 2 DEBUG oslo_concurrency.lockutils [req-5d827c7f-2145-4918-b01f-0f4a24e2223a req-6256ec38-6568-47b6-a13b-d2d60cfe47e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f03c0d03-67d1-4162-abbf-748eeda01512" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.019 2 DEBUG nova.network.neutron [req-5d827c7f-2145-4918-b01f-0f4a24e2223a req-6256ec38-6568-47b6-a13b-d2d60cfe47e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Refreshing network info cache for port 7f4de623-85e2-4c78-9cfe-664d6515907f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.021 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Start _get_guest_xml network_info=[{"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.025 2 WARNING nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.028 2 DEBUG nova.virt.libvirt.host [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:28:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2019023000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.029 2 DEBUG nova.virt.libvirt.host [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.033 2 DEBUG nova.virt.libvirt.host [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.033 2 DEBUG nova.virt.libvirt.host [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.034 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.035 2 DEBUG nova.virt.hardware [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.035 2 DEBUG nova.virt.hardware [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.036 2 DEBUG nova.virt.hardware [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.036 2 DEBUG nova.virt.hardware [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.036 2 DEBUG nova.virt.hardware [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.037 2 DEBUG nova.virt.hardware [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.037 2 DEBUG nova.virt.hardware [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.037 2 DEBUG nova.virt.hardware [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.037 2 DEBUG nova.virt.hardware [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.038 2 DEBUG nova.virt.hardware [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.038 2 DEBUG nova.virt.hardware [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.041 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.066 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.071 2 DEBUG nova.objects.instance [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lazy-loading 'pci_devices' on Instance uuid e985fe5c-e98d-4f5b-8985-61a156cde5f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.097 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <uuid>e985fe5c-e98d-4f5b-8985-61a156cde5f9</uuid>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <name>instance-00000042</name>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1985137111</nova:name>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:28:50</nova:creationTime>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:user uuid="87db7657bb324d029ff3d66f218f1d8d">tempest-ListImageFiltersTestJSON-1602275258-project-member</nova:user>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:project uuid="494736d8288b414094eb0bc6fbaa8cb7">tempest-ListImageFiltersTestJSON-1602275258</nova:project>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:ports/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <entry name="serial">e985fe5c-e98d-4f5b-8985-61a156cde5f9</entry>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <entry name="uuid">e985fe5c-e98d-4f5b-8985-61a156cde5f9</entry>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk.config">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/e985fe5c-e98d-4f5b-8985-61a156cde5f9/console.log" append="off"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:28:52 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:28:52 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.150 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.151 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.151 2 INFO nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Using config drive#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.182 2 DEBUG nova.storage.rbd_utils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.344 2 INFO nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Creating config drive at /var/lib/nova/instances/e985fe5c-e98d-4f5b-8985-61a156cde5f9/disk.config#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.349 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e985fe5c-e98d-4f5b-8985-61a156cde5f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkilfi62q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2513697758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.464 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.487 2 DEBUG nova.storage.rbd_utils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] rbd image f03c0d03-67d1-4162-abbf-748eeda01512_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.491 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.511 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e985fe5c-e98d-4f5b-8985-61a156cde5f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkilfi62q" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.538 2 DEBUG nova.storage.rbd_utils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.542 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e985fe5c-e98d-4f5b-8985-61a156cde5f9/disk.config e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:52.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3622040677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.912 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.914 2 DEBUG nova.virt.libvirt.vif [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-953315499',display_name='tempest-InstanceActionsTestJSON-server-953315499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-953315499',id=65,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aee47e8927a149149f8b2da7f91e512d',ramdisk_id='',reservation_id='r-hen0wn0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-604575546',owner_user_name='tempest-InstanceActionsTestJSON-604575546-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:48Z,user_data=None,user_id='61e6149575e848ffb09a0adcb1fc0829',uuid=f03c0d03-67d1-4162-abbf-748eeda01512,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.914 2 DEBUG nova.network.os_vif_util [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Converting VIF {"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.915 2 DEBUG nova.network.os_vif_util [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.916 2 DEBUG nova.objects.instance [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lazy-loading 'pci_devices' on Instance uuid f03c0d03-67d1-4162-abbf-748eeda01512 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.935 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <uuid>f03c0d03-67d1-4162-abbf-748eeda01512</uuid>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <name>instance-00000041</name>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:name>tempest-InstanceActionsTestJSON-server-953315499</nova:name>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:28:52</nova:creationTime>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:user uuid="61e6149575e848ffb09a0adcb1fc0829">tempest-InstanceActionsTestJSON-604575546-project-member</nova:user>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:project uuid="aee47e8927a149149f8b2da7f91e512d">tempest-InstanceActionsTestJSON-604575546</nova:project>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <nova:port uuid="7f4de623-85e2-4c78-9cfe-664d6515907f">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <entry name="serial">f03c0d03-67d1-4162-abbf-748eeda01512</entry>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <entry name="uuid">f03c0d03-67d1-4162-abbf-748eeda01512</entry>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/f03c0d03-67d1-4162-abbf-748eeda01512_disk">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/f03c0d03-67d1-4162-abbf-748eeda01512_disk.config">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:0d:90:29"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <target dev="tap7f4de623-85"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/f03c0d03-67d1-4162-abbf-748eeda01512/console.log" append="off"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:28:52 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:28:52 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:28:52 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:28:52 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.936 2 DEBUG nova.compute.manager [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Preparing to wait for external event network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.937 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Acquiring lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.937 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.937 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.938 2 DEBUG nova.virt.libvirt.vif [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-953315499',display_name='tempest-InstanceActionsTestJSON-server-953315499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-953315499',id=65,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aee47e8927a149149f8b2da7f91e512d',ramdisk_id='',reservation_id='r-hen0wn0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-604575546',owner_user_name='tempest-InstanceActionsTestJSON-604575546-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:48Z,user_data=None,user_id='61e6149575e848ffb09a0adcb1fc0829',uuid=f03c0d03-67d1-4162-abbf-748eeda01512,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.938 2 DEBUG nova.network.os_vif_util [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Converting VIF {"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.939 2 DEBUG nova.network.os_vif_util [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.939 2 DEBUG os_vif [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.943 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f4de623-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.943 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f4de623-85, col_values=(('external_ids', {'iface-id': '7f4de623-85e2-4c78-9cfe-664d6515907f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:90:29', 'vm-uuid': 'f03c0d03-67d1-4162-abbf-748eeda01512'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:52 np0005466031 NetworkManager[44907]: <info>  [1759408132.9473] manager: (tap7f4de623-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.953 2 INFO os_vif [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85')#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.996 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.996 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.996 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] No VIF found with MAC fa:16:3e:0d:90:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:28:52 np0005466031 nova_compute[235803]: 2025-10-02 12:28:52.997 2 INFO nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Using config drive#033[00m
Oct  2 08:28:53 np0005466031 nova_compute[235803]: 2025-10-02 12:28:53.019 2 DEBUG nova.storage.rbd_utils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] rbd image f03c0d03-67d1-4162-abbf-748eeda01512_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:53 np0005466031 nova_compute[235803]: 2025-10-02 12:28:53.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:53 np0005466031 nova_compute[235803]: 2025-10-02 12:28:53.468 2 INFO nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Creating config drive at /var/lib/nova/instances/f03c0d03-67d1-4162-abbf-748eeda01512/disk.config#033[00m
Oct  2 08:28:53 np0005466031 nova_compute[235803]: 2025-10-02 12:28:53.479 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f03c0d03-67d1-4162-abbf-748eeda01512/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gj0l02f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:53 np0005466031 nova_compute[235803]: 2025-10-02 12:28:53.620 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f03c0d03-67d1-4162-abbf-748eeda01512/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gj0l02f" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:53 np0005466031 nova_compute[235803]: 2025-10-02 12:28:53.647 2 DEBUG nova.storage.rbd_utils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] rbd image f03c0d03-67d1-4162-abbf-748eeda01512_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:53 np0005466031 nova_compute[235803]: 2025-10-02 12:28:53.650 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f03c0d03-67d1-4162-abbf-748eeda01512/disk.config f03c0d03-67d1-4162-abbf-748eeda01512_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:53 np0005466031 nova_compute[235803]: 2025-10-02 12:28:53.684 2 DEBUG nova.network.neutron [req-5d827c7f-2145-4918-b01f-0f4a24e2223a req-6256ec38-6568-47b6-a13b-d2d60cfe47e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Updated VIF entry in instance network info cache for port 7f4de623-85e2-4c78-9cfe-664d6515907f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:28:53 np0005466031 nova_compute[235803]: 2025-10-02 12:28:53.685 2 DEBUG nova.network.neutron [req-5d827c7f-2145-4918-b01f-0f4a24e2223a req-6256ec38-6568-47b6-a13b-d2d60cfe47e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Updating instance_info_cache with network_info: [{"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:53 np0005466031 nova_compute[235803]: 2025-10-02 12:28:53.710 2 DEBUG oslo_concurrency.lockutils [req-5d827c7f-2145-4918-b01f-0f4a24e2223a req-6256ec38-6568-47b6-a13b-d2d60cfe47e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f03c0d03-67d1-4162-abbf-748eeda01512" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:28:53 np0005466031 nova_compute[235803]: 2025-10-02 12:28:53.796 2 DEBUG oslo_concurrency.processutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e985fe5c-e98d-4f5b-8985-61a156cde5f9/disk.config e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:53 np0005466031 nova_compute[235803]: 2025-10-02 12:28:53.796 2 INFO nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Deleting local config drive /var/lib/nova/instances/e985fe5c-e98d-4f5b-8985-61a156cde5f9/disk.config because it was imported into RBD.#033[00m
Oct  2 08:28:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:53.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:53 np0005466031 systemd-machined[192227]: New machine qemu-24-instance-00000042.
Oct  2 08:28:53 np0005466031 systemd[1]: Started Virtual Machine qemu-24-instance-00000042.
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.244 2 DEBUG oslo_concurrency.processutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f03c0d03-67d1-4162-abbf-748eeda01512/disk.config f03c0d03-67d1-4162-abbf-748eeda01512_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.244 2 INFO nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Deleting local config drive /var/lib/nova/instances/f03c0d03-67d1-4162-abbf-748eeda01512/disk.config because it was imported into RBD.#033[00m
Oct  2 08:28:54 np0005466031 kernel: tap7f4de623-85: entered promiscuous mode
Oct  2 08:28:54 np0005466031 NetworkManager[44907]: <info>  [1759408134.3064] manager: (tap7f4de623-85): new Tun device (/org/freedesktop/NetworkManager/Devices/111)
Oct  2 08:28:54 np0005466031 ovn_controller[132413]: 2025-10-02T12:28:54Z|00211|binding|INFO|Claiming lport 7f4de623-85e2-4c78-9cfe-664d6515907f for this chassis.
Oct  2 08:28:54 np0005466031 ovn_controller[132413]: 2025-10-02T12:28:54Z|00212|binding|INFO|7f4de623-85e2-4c78-9cfe-664d6515907f: Claiming fa:16:3e:0d:90:29 10.100.0.11
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:54 np0005466031 systemd-machined[192227]: New machine qemu-25-instance-00000041.
Oct  2 08:28:54 np0005466031 systemd-udevd[264322]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.394 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:90:29 10.100.0.11'], port_security=['fa:16:3e:0d:90:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f03c0d03-67d1-4162-abbf-748eeda01512', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee47e8927a149149f8b2da7f91e512d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bdb93dc0-dd7a-4431-a324-d74a1cdd44c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4cd9fee4-8cc6-4445-b38a-4d81fd6185bd, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7f4de623-85e2-4c78-9cfe-664d6515907f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:28:54 np0005466031 systemd[1]: Started Virtual Machine qemu-25-instance-00000041.
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.395 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7f4de623-85e2-4c78-9cfe-664d6515907f in datapath 08c4e8eb-0f89-48c7-931d-c8439eb97c4d bound to our chassis#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.397 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08c4e8eb-0f89-48c7-931d-c8439eb97c4d#033[00m
Oct  2 08:28:54 np0005466031 NetworkManager[44907]: <info>  [1759408134.3991] device (tap7f4de623-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:28:54 np0005466031 NetworkManager[44907]: <info>  [1759408134.4001] device (tap7f4de623-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.408 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3474e0-aeec-44c5-9f3d-f828a6076557]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.410 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08c4e8eb-01 in ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.412 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08c4e8eb-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.412 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[eb36a9c1-0f99-445e-af3c-0992b97ed2aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.414 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1c4bb5-207a-47fb-8bed-7ad6db06d827]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_controller[132413]: 2025-10-02T12:28:54Z|00213|binding|INFO|Setting lport 7f4de623-85e2-4c78-9cfe-664d6515907f ovn-installed in OVS
Oct  2 08:28:54 np0005466031 ovn_controller[132413]: 2025-10-02T12:28:54Z|00214|binding|INFO|Setting lport 7f4de623-85e2-4c78-9cfe-664d6515907f up in Southbound
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.429 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7aed6e-0a19-4e6c-a656-a8a675f623b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.442 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c97da836-7dde-490e-b968-e26b4aaa2c9d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.471 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0e776aba-9b05-40b9-9bdf-ba2be9de7764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.475 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2e3c01-aee0-4038-a2da-c219816bf4c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 NetworkManager[44907]: <info>  [1759408134.4768] manager: (tap08c4e8eb-00): new Veth device (/org/freedesktop/NetworkManager/Devices/112)
Oct  2 08:28:54 np0005466031 systemd-udevd[264340]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.504 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[9996d96d-b0b9-489d-8b31-40551840ccf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.507 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[da82f634-87cd-40a8-aacf-da92766030d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 NetworkManager[44907]: <info>  [1759408134.5263] device (tap08c4e8eb-00): carrier: link connected
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.531 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb0ea16-1d06-4900-82c2-081fbd2153e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.545 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b47146-fe9d-4fda-a503-1566431e9ca7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08c4e8eb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:36:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599011, 'reachable_time': 16159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264375, 'error': None, 'target': 'ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.559 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2bd947-d3ab-4564-baaf-e80ba32c0d62]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe45:366f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599011, 'tstamp': 599011}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264376, 'error': None, 'target': 'ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.572 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[975760a5-fd8d-43cf-ab32-585e7609d508]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08c4e8eb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:36:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599011, 'reachable_time': 16159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264377, 'error': None, 'target': 'ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.593 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[823a1809-5ccb-4b64-8a57-f2f9dda073e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.638 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5e71ca34-7e4e-47e5-b880-4868921eaddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.640 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08c4e8eb-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.640 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.640 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08c4e8eb-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:54 np0005466031 NetworkManager[44907]: <info>  [1759408134.6427] manager: (tap08c4e8eb-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Oct  2 08:28:54 np0005466031 kernel: tap08c4e8eb-00: entered promiscuous mode
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.648 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08c4e8eb-00, col_values=(('external_ids', {'iface-id': '35d78e95-d14d-4fdd-9530-f314ae6f13a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:54 np0005466031 ovn_controller[132413]: 2025-10-02T12:28:54Z|00215|binding|INFO|Releasing lport 35d78e95-d14d-4fdd-9530-f314ae6f13a4 from this chassis (sb_readonly=0)
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.684 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08c4e8eb-0f89-48c7-931d-c8439eb97c4d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08c4e8eb-0f89-48c7-931d-c8439eb97c4d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:28:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:54.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.685 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[23fc9fa3-ebac-4244-bfb4-f1dce9bb8a00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.686 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-08c4e8eb-0f89-48c7-931d-c8439eb97c4d
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/08c4e8eb-0f89-48c7-931d-c8439eb97c4d.pid.haproxy
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 08c4e8eb-0f89-48c7-931d-c8439eb97c4d
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:28:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:28:54.687 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'env', 'PROCESS_TAG=haproxy-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08c4e8eb-0f89-48c7-931d-c8439eb97c4d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.735 2 DEBUG nova.compute.manager [req-028cbae0-a0a9-4328-b8bd-af367290075a req-1c1b414a-b686-4366-9ec5-9e60360a4be2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received event network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.736 2 DEBUG oslo_concurrency.lockutils [req-028cbae0-a0a9-4328-b8bd-af367290075a req-1c1b414a-b686-4366-9ec5-9e60360a4be2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.736 2 DEBUG oslo_concurrency.lockutils [req-028cbae0-a0a9-4328-b8bd-af367290075a req-1c1b414a-b686-4366-9ec5-9e60360a4be2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.736 2 DEBUG oslo_concurrency.lockutils [req-028cbae0-a0a9-4328-b8bd-af367290075a req-1c1b414a-b686-4366-9ec5-9e60360a4be2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:54 np0005466031 nova_compute[235803]: 2025-10-02 12:28:54.737 2 DEBUG nova.compute.manager [req-028cbae0-a0a9-4328-b8bd-af367290075a req-1c1b414a-b686-4366-9ec5-9e60360a4be2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Processing event network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:28:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:55 np0005466031 podman[264456]: 2025-10-02 12:28:55.102203287 +0000 UTC m=+0.053109921 container create 9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:28:55 np0005466031 systemd[1]: Started libpod-conmon-9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0.scope.
Oct  2 08:28:55 np0005466031 podman[264456]: 2025-10-02 12:28:55.068508906 +0000 UTC m=+0.019415560 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:28:55 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:28:55 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7994998e59899618edec3f039e48e027a278f5c8cb86e17212f88d063249715/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:28:55 np0005466031 podman[264456]: 2025-10-02 12:28:55.188303989 +0000 UTC m=+0.139210623 container init 9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:28:55 np0005466031 podman[264456]: 2025-10-02 12:28:55.198466862 +0000 UTC m=+0.149373496 container start 9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:28:55 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264471]: [NOTICE]   (264475) : New worker (264477) forked
Oct  2 08:28:55 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264471]: [NOTICE]   (264475) : Loading success.
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.228 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408135.2282708, e985fe5c-e98d-4f5b-8985-61a156cde5f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.229 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.235 2 DEBUG nova.compute.manager [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.235 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.239 2 INFO nova.virt.libvirt.driver [-] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Instance spawned successfully.
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.239 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.250 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.253 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.260 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.261 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.261 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.262 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.262 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.262 2 DEBUG nova.virt.libvirt.driver [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.270 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.271 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408135.234626, e985fe5c-e98d-4f5b-8985-61a156cde5f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.271 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] VM Started (Lifecycle Event)
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.297 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.299 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.320 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.327 2 INFO nova.compute.manager [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Took 6.10 seconds to spawn the instance on the hypervisor.
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.328 2 DEBUG nova.compute.manager [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.343 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408135.3428502, f03c0d03-67d1-4162-abbf-748eeda01512 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.343 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] VM Started (Lifecycle Event)
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.345 2 DEBUG nova.compute.manager [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.348 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.351 2 INFO nova.virt.libvirt.driver [-] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Instance spawned successfully.
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.351 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.388 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.391 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.411 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.412 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.413 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.413 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.415 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.415 2 DEBUG nova.virt.libvirt.driver [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.423 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.423 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408135.343335, f03c0d03-67d1-4162-abbf-748eeda01512 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.424 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] VM Paused (Lifecycle Event)
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.428 2 INFO nova.compute.manager [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Took 7.14 seconds to build instance.
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.446 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.449 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408135.3533936, f03c0d03-67d1-4162-abbf-748eeda01512 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.449 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] VM Resumed (Lifecycle Event)
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.453 2 DEBUG oslo_concurrency.lockutils [None req-244bcf88-f6f4-4918-8198-7c86e9ccb790 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "e985fe5c-e98d-4f5b-8985-61a156cde5f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.466 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.469 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.472 2 INFO nova.compute.manager [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Took 7.29 seconds to spawn the instance on the hypervisor.
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.473 2 DEBUG nova.compute.manager [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.499 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.609 2 INFO nova.compute.manager [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Took 8.37 seconds to build instance.
Oct  2 08:28:55 np0005466031 nova_compute[235803]: 2025-10-02 12:28:55.638 2 DEBUG oslo_concurrency.lockutils [None req-a50b259a-949a-4447-a967-f1315b0c5fbc 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:28:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:55.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:56.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:56 np0005466031 nova_compute[235803]: 2025-10-02 12:28:56.893 2 DEBUG nova.compute.manager [req-64b6073f-6077-416f-ad22-fe927b087d19 req-a299d90f-8153-4b16-89a9-c1854cf2d0ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received event network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:28:56 np0005466031 nova_compute[235803]: 2025-10-02 12:28:56.894 2 DEBUG oslo_concurrency.lockutils [req-64b6073f-6077-416f-ad22-fe927b087d19 req-a299d90f-8153-4b16-89a9-c1854cf2d0ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:28:56 np0005466031 nova_compute[235803]: 2025-10-02 12:28:56.894 2 DEBUG oslo_concurrency.lockutils [req-64b6073f-6077-416f-ad22-fe927b087d19 req-a299d90f-8153-4b16-89a9-c1854cf2d0ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:28:56 np0005466031 nova_compute[235803]: 2025-10-02 12:28:56.895 2 DEBUG oslo_concurrency.lockutils [req-64b6073f-6077-416f-ad22-fe927b087d19 req-a299d90f-8153-4b16-89a9-c1854cf2d0ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:28:56 np0005466031 nova_compute[235803]: 2025-10-02 12:28:56.895 2 DEBUG nova.compute.manager [req-64b6073f-6077-416f-ad22-fe927b087d19 req-a299d90f-8153-4b16-89a9-c1854cf2d0ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] No waiting events found dispatching network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:28:56 np0005466031 nova_compute[235803]: 2025-10-02 12:28:56.895 2 WARNING nova.compute.manager [req-64b6073f-6077-416f-ad22-fe927b087d19 req-a299d90f-8153-4b16-89a9-c1854cf2d0ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received unexpected event network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f for instance with vm_state active and task_state None.
Oct  2 08:28:57 np0005466031 nova_compute[235803]: 2025-10-02 12:28:57.184 2 DEBUG oslo_concurrency.lockutils [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Acquiring lock "f03c0d03-67d1-4162-abbf-748eeda01512" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:28:57 np0005466031 nova_compute[235803]: 2025-10-02 12:28:57.185 2 DEBUG oslo_concurrency.lockutils [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:28:57 np0005466031 nova_compute[235803]: 2025-10-02 12:28:57.186 2 INFO nova.compute.manager [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Rebooting instance
Oct  2 08:28:57 np0005466031 nova_compute[235803]: 2025-10-02 12:28:57.200 2 DEBUG oslo_concurrency.lockutils [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Acquiring lock "refresh_cache-f03c0d03-67d1-4162-abbf-748eeda01512" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:28:57 np0005466031 nova_compute[235803]: 2025-10-02 12:28:57.201 2 DEBUG oslo_concurrency.lockutils [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Acquired lock "refresh_cache-f03c0d03-67d1-4162-abbf-748eeda01512" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:28:57 np0005466031 nova_compute[235803]: 2025-10-02 12:28:57.201 2 DEBUG nova.network.neutron [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:28:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:57.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:57 np0005466031 nova_compute[235803]: 2025-10-02 12:28:57.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:28:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e217 e217: 3 total, 3 up, 3 in
Oct  2 08:28:58 np0005466031 nova_compute[235803]: 2025-10-02 12:28:58.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:28:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:28:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:58.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:28:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:28:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:59.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.225 2 DEBUG nova.network.neutron [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Updating instance_info_cache with network_info: [{"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.247 2 DEBUG oslo_concurrency.lockutils [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Releasing lock "refresh_cache-f03c0d03-67d1-4162-abbf-748eeda01512" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.248 2 DEBUG nova.compute.manager [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:29:00 np0005466031 kernel: tap7f4de623-85 (unregistering): left promiscuous mode
Oct  2 08:29:00 np0005466031 NetworkManager[44907]: <info>  [1759408140.3896] device (tap7f4de623-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:29:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:29:00Z|00216|binding|INFO|Releasing lport 7f4de623-85e2-4c78-9cfe-664d6515907f from this chassis (sb_readonly=0)
Oct  2 08:29:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:29:00Z|00217|binding|INFO|Setting lport 7f4de623-85e2-4c78-9cfe-664d6515907f down in Southbound
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:29:00Z|00218|binding|INFO|Removing iface tap7f4de623-85 ovn-installed in OVS
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.412 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:90:29 10.100.0.11'], port_security=['fa:16:3e:0d:90:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f03c0d03-67d1-4162-abbf-748eeda01512', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee47e8927a149149f8b2da7f91e512d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bdb93dc0-dd7a-4431-a324-d74a1cdd44c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4cd9fee4-8cc6-4445-b38a-4d81fd6185bd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7f4de623-85e2-4c78-9cfe-664d6515907f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.413 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7f4de623-85e2-4c78-9cfe-664d6515907f in datapath 08c4e8eb-0f89-48c7-931d-c8439eb97c4d unbound from our chassis
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.414 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08c4e8eb-0f89-48c7-931d-c8439eb97c4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.416 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3297496b-0e9e-45c3-9ab7-bd9b9b68c4a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.416 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d namespace which is not needed anymore
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:00 np0005466031 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000041.scope: Deactivated successfully.
Oct  2 08:29:00 np0005466031 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000041.scope: Consumed 5.829s CPU time.
Oct  2 08:29:00 np0005466031 systemd-machined[192227]: Machine qemu-25-instance-00000041 terminated.
Oct  2 08:29:00 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264471]: [NOTICE]   (264475) : haproxy version is 2.8.14-c23fe91
Oct  2 08:29:00 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264471]: [NOTICE]   (264475) : path to executable is /usr/sbin/haproxy
Oct  2 08:29:00 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264471]: [WARNING]  (264475) : Exiting Master process...
Oct  2 08:29:00 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264471]: [ALERT]    (264475) : Current worker (264477) exited with code 143 (Terminated)
Oct  2 08:29:00 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264471]: [WARNING]  (264475) : All workers exited. Exiting... (0)
Oct  2 08:29:00 np0005466031 systemd[1]: libpod-9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0.scope: Deactivated successfully.
Oct  2 08:29:00 np0005466031 podman[264513]: 2025-10-02 12:29:00.541711516 +0000 UTC m=+0.039809339 container died 9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 08:29:00 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0-userdata-shm.mount: Deactivated successfully.
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.575 2 INFO nova.virt.libvirt.driver [-] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Instance destroyed successfully.#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.576 2 DEBUG nova.objects.instance [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lazy-loading 'resources' on Instance uuid f03c0d03-67d1-4162-abbf-748eeda01512 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:00 np0005466031 systemd[1]: var-lib-containers-storage-overlay-d7994998e59899618edec3f039e48e027a278f5c8cb86e17212f88d063249715-merged.mount: Deactivated successfully.
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.590 2 DEBUG nova.virt.libvirt.vif [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-953315499',display_name='tempest-InstanceActionsTestJSON-server-953315499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-953315499',id=65,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:28:55Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aee47e8927a149149f8b2da7f91e512d',ramdisk_id='',reservation_id='r-hen0wn0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-604575546',owner_user_name='tempest-InstanceActionsTestJSON-604575546-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:29:00Z,user_data=None,user_id='61e6149575e848ffb09a0adcb1fc0829',uuid=f03c0d03-67d1-4162-abbf-748eeda01512,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.590 2 DEBUG nova.network.os_vif_util [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Converting VIF {"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.591 2 DEBUG nova.network.os_vif_util [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.592 2 DEBUG os_vif [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:29:00 np0005466031 podman[264513]: 2025-10-02 12:29:00.593328433 +0000 UTC m=+0.091426256 container cleanup 9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f4de623-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.602 2 INFO os_vif [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85')#033[00m
Oct  2 08:29:00 np0005466031 systemd[1]: libpod-conmon-9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0.scope: Deactivated successfully.
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.608 2 DEBUG nova.virt.libvirt.driver [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Start _get_guest_xml network_info=[{"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.611 2 WARNING nova.virt.libvirt.driver [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.618 2 DEBUG nova.virt.libvirt.host [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.618 2 DEBUG nova.virt.libvirt.host [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.621 2 DEBUG nova.virt.libvirt.host [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.622 2 DEBUG nova.virt.libvirt.host [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.623 2 DEBUG nova.virt.libvirt.driver [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.623 2 DEBUG nova.virt.hardware [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.623 2 DEBUG nova.virt.hardware [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.623 2 DEBUG nova.virt.hardware [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.624 2 DEBUG nova.virt.hardware [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.624 2 DEBUG nova.virt.hardware [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.624 2 DEBUG nova.virt.hardware [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.624 2 DEBUG nova.virt.hardware [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.624 2 DEBUG nova.virt.hardware [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.624 2 DEBUG nova.virt.hardware [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.625 2 DEBUG nova.virt.hardware [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.625 2 DEBUG nova.virt.hardware [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.625 2 DEBUG nova.objects.instance [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lazy-loading 'vcpu_model' on Instance uuid f03c0d03-67d1-4162-abbf-748eeda01512 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.642 2 DEBUG oslo_concurrency.processutils [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:00 np0005466031 podman[264555]: 2025-10-02 12:29:00.651887861 +0000 UTC m=+0.037307876 container remove 9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.661 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[85749952-28ca-4e87-8bbd-1f3a84196af6]: (4, ('Thu Oct  2 12:29:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d (9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0)\n9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0\nThu Oct  2 12:29:00 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d (9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0)\n9f108f2094851f30905cae4b830c246a0c95f5ac306bd0a5c40b9a92df8f3fb0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.662 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[db984151-0c3b-43b1-b1f0-f973702ec9f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.663 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08c4e8eb-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:00 np0005466031 kernel: tap08c4e8eb-00: left promiscuous mode
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.669 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[870baecf-8895-413b-b350-925f30130abf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:00.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.694 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a37753-af52-4c15-8732-b542fde0fad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.696 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[37a8eb15-e96a-4284-bf8d-4bde71d8b28e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.710 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[849f4ec5-c04a-4c94-8dea-cc4491b5c2ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599005, 'reachable_time': 16552, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264571, 'error': None, 'target': 'ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:00 np0005466031 systemd[1]: run-netns-ovnmeta\x2d08c4e8eb\x2d0f89\x2d48c7\x2d931d\x2dc8439eb97c4d.mount: Deactivated successfully.
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.713 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:29:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:00.713 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[a360cb31-24f8-4cd7-ac71-7ae48ebeb658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.814 2 DEBUG nova.compute.manager [req-f3281dae-c6e8-409a-ac03-689172bbfdc2 req-e887262f-2d0f-4ab1-a622-82802276f409 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received event network-vif-unplugged-7f4de623-85e2-4c78-9cfe-664d6515907f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.814 2 DEBUG oslo_concurrency.lockutils [req-f3281dae-c6e8-409a-ac03-689172bbfdc2 req-e887262f-2d0f-4ab1-a622-82802276f409 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.815 2 DEBUG oslo_concurrency.lockutils [req-f3281dae-c6e8-409a-ac03-689172bbfdc2 req-e887262f-2d0f-4ab1-a622-82802276f409 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.815 2 DEBUG oslo_concurrency.lockutils [req-f3281dae-c6e8-409a-ac03-689172bbfdc2 req-e887262f-2d0f-4ab1-a622-82802276f409 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.815 2 DEBUG nova.compute.manager [req-f3281dae-c6e8-409a-ac03-689172bbfdc2 req-e887262f-2d0f-4ab1-a622-82802276f409 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] No waiting events found dispatching network-vif-unplugged-7f4de623-85e2-4c78-9cfe-664d6515907f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:00 np0005466031 nova_compute[235803]: 2025-10-02 12:29:00.816 2 WARNING nova.compute.manager [req-f3281dae-c6e8-409a-ac03-689172bbfdc2 req-e887262f-2d0f-4ab1-a622-82802276f409 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received unexpected event network-vif-unplugged-7f4de623-85e2-4c78-9cfe-664d6515907f for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  2 08:29:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:29:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1362851634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.037 2 DEBUG oslo_concurrency.processutils [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.072 2 DEBUG oslo_concurrency.processutils [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:29:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/93481825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.524 2 DEBUG oslo_concurrency.processutils [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.528 2 DEBUG nova.virt.libvirt.vif [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-953315499',display_name='tempest-InstanceActionsTestJSON-server-953315499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-953315499',id=65,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:28:55Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aee47e8927a149149f8b2da7f91e512d',ramdisk_id='',reservation_id='r-hen0wn0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-604575546',owner_user_name='tempest-InstanceActionsTestJSON-604575546-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:29:00Z,user_data=None,user_id='61e6149575e848ffb09a0adcb1fc0829',uuid=f03c0d03-67d1-4162-abbf-748eeda01512,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.529 2 DEBUG nova.network.os_vif_util [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Converting VIF {"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.531 2 DEBUG nova.network.os_vif_util [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.533 2 DEBUG nova.objects.instance [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lazy-loading 'pci_devices' on Instance uuid f03c0d03-67d1-4162-abbf-748eeda01512 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.553 2 DEBUG nova.virt.libvirt.driver [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  <uuid>f03c0d03-67d1-4162-abbf-748eeda01512</uuid>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  <name>instance-00000041</name>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <nova:name>tempest-InstanceActionsTestJSON-server-953315499</nova:name>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:29:00</nova:creationTime>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <nova:user uuid="61e6149575e848ffb09a0adcb1fc0829">tempest-InstanceActionsTestJSON-604575546-project-member</nova:user>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <nova:project uuid="aee47e8927a149149f8b2da7f91e512d">tempest-InstanceActionsTestJSON-604575546</nova:project>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <nova:port uuid="7f4de623-85e2-4c78-9cfe-664d6515907f">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <entry name="serial">f03c0d03-67d1-4162-abbf-748eeda01512</entry>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <entry name="uuid">f03c0d03-67d1-4162-abbf-748eeda01512</entry>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/f03c0d03-67d1-4162-abbf-748eeda01512_disk">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/f03c0d03-67d1-4162-abbf-748eeda01512_disk.config">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:0d:90:29"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <target dev="tap7f4de623-85"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/f03c0d03-67d1-4162-abbf-748eeda01512/console.log" append="off"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <input type="keyboard" bus="usb"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:29:01 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:29:01 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:29:01 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:29:01 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.559 2 DEBUG nova.virt.libvirt.driver [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.559 2 DEBUG nova.virt.libvirt.driver [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.560 2 DEBUG nova.virt.libvirt.vif [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-953315499',display_name='tempest-InstanceActionsTestJSON-server-953315499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-953315499',id=65,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:28:55Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='aee47e8927a149149f8b2da7f91e512d',ramdisk_id='',reservation_id='r-hen0wn0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-604575546',owner_user_name='tempest-InstanceActionsTestJSON-604575546-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:29:00Z,user_data=None,user_id='61e6149575e848ffb09a0adcb1fc0829',uuid=f03c0d03-67d1-4162-abbf-748eeda01512,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.560 2 DEBUG nova.network.os_vif_util [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Converting VIF {"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.561 2 DEBUG nova.network.os_vif_util [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.561 2 DEBUG os_vif [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.562 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.563 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f4de623-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f4de623-85, col_values=(('external_ids', {'iface-id': '7f4de623-85e2-4c78-9cfe-664d6515907f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:90:29', 'vm-uuid': 'f03c0d03-67d1-4162-abbf-748eeda01512'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:01 np0005466031 NetworkManager[44907]: <info>  [1759408141.5688] manager: (tap7f4de623-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.573 2 INFO os_vif [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85')#033[00m
Oct  2 08:29:01 np0005466031 kernel: tap7f4de623-85: entered promiscuous mode
Oct  2 08:29:01 np0005466031 systemd-udevd[264495]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:29:01 np0005466031 NetworkManager[44907]: <info>  [1759408141.6379] manager: (tap7f4de623-85): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:29:01Z|00219|binding|INFO|Claiming lport 7f4de623-85e2-4c78-9cfe-664d6515907f for this chassis.
Oct  2 08:29:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:29:01Z|00220|binding|INFO|7f4de623-85e2-4c78-9cfe-664d6515907f: Claiming fa:16:3e:0d:90:29 10.100.0.11
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.645 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:90:29 10.100.0.11'], port_security=['fa:16:3e:0d:90:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f03c0d03-67d1-4162-abbf-748eeda01512', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee47e8927a149149f8b2da7f91e512d', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'bdb93dc0-dd7a-4431-a324-d74a1cdd44c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4cd9fee4-8cc6-4445-b38a-4d81fd6185bd, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7f4de623-85e2-4c78-9cfe-664d6515907f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.646 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7f4de623-85e2-4c78-9cfe-664d6515907f in datapath 08c4e8eb-0f89-48c7-931d-c8439eb97c4d bound to our chassis#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.647 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08c4e8eb-0f89-48c7-931d-c8439eb97c4d#033[00m
Oct  2 08:29:01 np0005466031 NetworkManager[44907]: <info>  [1759408141.6501] device (tap7f4de623-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:29:01 np0005466031 NetworkManager[44907]: <info>  [1759408141.6517] device (tap7f4de623-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:29:01Z|00221|binding|INFO|Setting lport 7f4de623-85e2-4c78-9cfe-664d6515907f ovn-installed in OVS
Oct  2 08:29:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:29:01Z|00222|binding|INFO|Setting lport 7f4de623-85e2-4c78-9cfe-664d6515907f up in Southbound
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.661 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ada0ca98-d802-4d22-a60c-2293f3fc5e53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.662 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08c4e8eb-01 in ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.663 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08c4e8eb-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.664 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e4520e7a-aa36-451e-8dbd-a476cc4b9140]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.665 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4548168e-7546-4bf1-bed0-bbf527baaf8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 systemd-machined[192227]: New machine qemu-26-instance-00000041.
Oct  2 08:29:01 np0005466031 systemd[1]: Started Virtual Machine qemu-26-instance-00000041.
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.678 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[94840634-84bb-43de-a39d-46b11e2705df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.704 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1eed37-ded3-4b24-8473-0fa130b758dc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.738 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3c0f4d-32bc-4945-b9db-2621cb32012a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 NetworkManager[44907]: <info>  [1759408141.7447] manager: (tap08c4e8eb-00): new Veth device (/org/freedesktop/NetworkManager/Devices/116)
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.742 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[41c0ab98-d821-4f5e-88c6-b64e996b7b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.773 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5a26cf-04ec-48f0-9123-cbc8e488f355]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.776 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[4f58c388-bea4-43d0-b4b5-98a481537d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 NetworkManager[44907]: <info>  [1759408141.7956] device (tap08c4e8eb-00): carrier: link connected
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.802 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[8b63008c-fc6a-4d46-8c3f-c2df3cc22661]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:01.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.820 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6eaed5a8-0b6e-43e2-9609-305a6f406762]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08c4e8eb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:36:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599738, 'reachable_time': 23679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264679, 'error': None, 'target': 'ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.836 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[44ddbbaf-2a46-4ae5-9b09-75bab2221347]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe45:366f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599738, 'tstamp': 599738}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264680, 'error': None, 'target': 'ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.851 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5abed916-2629-4e8a-ad7e-8a2c551e2c4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08c4e8eb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:36:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599738, 'reachable_time': 23679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264681, 'error': None, 'target': 'ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.876 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8595395d-0cd0-4a51-a4af-b2a1973a8dda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.932 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bbf8f9-89d5-4245-a76c-09a10da636d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.934 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08c4e8eb-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.934 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.934 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08c4e8eb-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:01 np0005466031 kernel: tap08c4e8eb-00: entered promiscuous mode
Oct  2 08:29:01 np0005466031 NetworkManager[44907]: <info>  [1759408141.9364] manager: (tap08c4e8eb-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.939 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08c4e8eb-00, col_values=(('external_ids', {'iface-id': '35d78e95-d14d-4fdd-9530-f314ae6f13a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:29:01Z|00223|binding|INFO|Releasing lport 35d78e95-d14d-4fdd-9530-f314ae6f13a4 from this chassis (sb_readonly=0)
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.942 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08c4e8eb-0f89-48c7-931d-c8439eb97c4d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08c4e8eb-0f89-48c7-931d-c8439eb97c4d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.943 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7563273d-edc2-4dd1-9729-9a4e3e977d87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.944 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-08c4e8eb-0f89-48c7-931d-c8439eb97c4d
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/08c4e8eb-0f89-48c7-931d-c8439eb97c4d.pid.haproxy
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 08c4e8eb-0f89-48c7-931d-c8439eb97c4d
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:29:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:01.944 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'env', 'PROCESS_TAG=haproxy-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08c4e8eb-0f89-48c7-931d-c8439eb97c4d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:29:01 np0005466031 nova_compute[235803]: 2025-10-02 12:29:01.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e218 e218: 3 total, 3 up, 3 in
Oct  2 08:29:02 np0005466031 podman[264750]: 2025-10-02 12:29:02.305009857 +0000 UTC m=+0.056575982 container create e5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:29:02 np0005466031 systemd[1]: Started libpod-conmon-e5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d.scope.
Oct  2 08:29:02 np0005466031 podman[264750]: 2025-10-02 12:29:02.278031869 +0000 UTC m=+0.029598034 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:29:02 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:29:02 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74d9b02a42dc0440644808487da2576f257a1b07312be4cb77d5396f10bd1e3b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:29:02 np0005466031 podman[264750]: 2025-10-02 12:29:02.397558524 +0000 UTC m=+0.149124669 container init e5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 08:29:02 np0005466031 podman[264750]: 2025-10-02 12:29:02.402864147 +0000 UTC m=+0.154430282 container start e5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:29:02 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264770]: [NOTICE]   (264774) : New worker (264776) forked
Oct  2 08:29:02 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264770]: [NOTICE]   (264774) : Loading success.
Oct  2 08:29:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:02.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.748 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for f03c0d03-67d1-4162-abbf-748eeda01512 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.749 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408142.7481623, f03c0d03-67d1-4162-abbf-748eeda01512 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.750 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.752 2 DEBUG nova.compute.manager [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.754 2 INFO nova.virt.libvirt.driver [-] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Instance rebooted successfully.#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.755 2 DEBUG nova.compute.manager [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.790 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.793 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.819 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.820 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408142.749234, f03c0d03-67d1-4162-abbf-748eeda01512 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.820 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] VM Started (Lifecycle Event)#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.830 2 DEBUG oslo_concurrency.lockutils [None req-0bd0bf0b-9e28-4296-8ddd-51711a55a474 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.839 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.841 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.918 2 DEBUG nova.compute.manager [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received event network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.919 2 DEBUG oslo_concurrency.lockutils [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.919 2 DEBUG oslo_concurrency.lockutils [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.919 2 DEBUG oslo_concurrency.lockutils [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.920 2 DEBUG nova.compute.manager [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] No waiting events found dispatching network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.920 2 WARNING nova.compute.manager [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received unexpected event network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f for instance with vm_state active and task_state None.#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.920 2 DEBUG nova.compute.manager [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received event network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.920 2 DEBUG oslo_concurrency.lockutils [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.921 2 DEBUG oslo_concurrency.lockutils [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.921 2 DEBUG oslo_concurrency.lockutils [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.921 2 DEBUG nova.compute.manager [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] No waiting events found dispatching network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.921 2 WARNING nova.compute.manager [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received unexpected event network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f for instance with vm_state active and task_state None.#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.922 2 DEBUG nova.compute.manager [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received event network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.922 2 DEBUG oslo_concurrency.lockutils [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.922 2 DEBUG oslo_concurrency.lockutils [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.922 2 DEBUG oslo_concurrency.lockutils [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.923 2 DEBUG nova.compute.manager [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] No waiting events found dispatching network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:02 np0005466031 nova_compute[235803]: 2025-10-02 12:29:02.923 2 WARNING nova.compute.manager [req-2befc2aa-9152-4c1a-afa5-36f5f81de1b4 req-4182cd5a-f2ba-49c5-a3dd-9d0937ebfdda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received unexpected event network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f for instance with vm_state active and task_state None.#033[00m
Oct  2 08:29:03 np0005466031 nova_compute[235803]: 2025-10-02 12:29:03.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:03.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e219 e219: 3 total, 3 up, 3 in
Oct  2 08:29:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:04.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.219 2 DEBUG oslo_concurrency.lockutils [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Acquiring lock "f03c0d03-67d1-4162-abbf-748eeda01512" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.220 2 DEBUG oslo_concurrency.lockutils [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.220 2 DEBUG oslo_concurrency.lockutils [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Acquiring lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.220 2 DEBUG oslo_concurrency.lockutils [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.221 2 DEBUG oslo_concurrency.lockutils [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.222 2 INFO nova.compute.manager [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Terminating instance#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.223 2 DEBUG nova.compute.manager [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:29:05 np0005466031 kernel: tap7f4de623-85 (unregistering): left promiscuous mode
Oct  2 08:29:05 np0005466031 NetworkManager[44907]: <info>  [1759408145.2657] device (tap7f4de623-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:05 np0005466031 ovn_controller[132413]: 2025-10-02T12:29:05Z|00224|binding|INFO|Releasing lport 7f4de623-85e2-4c78-9cfe-664d6515907f from this chassis (sb_readonly=0)
Oct  2 08:29:05 np0005466031 ovn_controller[132413]: 2025-10-02T12:29:05Z|00225|binding|INFO|Setting lport 7f4de623-85e2-4c78-9cfe-664d6515907f down in Southbound
Oct  2 08:29:05 np0005466031 ovn_controller[132413]: 2025-10-02T12:29:05Z|00226|binding|INFO|Removing iface tap7f4de623-85 ovn-installed in OVS
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.283 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:90:29 10.100.0.11'], port_security=['fa:16:3e:0d:90:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f03c0d03-67d1-4162-abbf-748eeda01512', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee47e8927a149149f8b2da7f91e512d', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'bdb93dc0-dd7a-4431-a324-d74a1cdd44c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4cd9fee4-8cc6-4445-b38a-4d81fd6185bd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7f4de623-85e2-4c78-9cfe-664d6515907f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.284 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7f4de623-85e2-4c78-9cfe-664d6515907f in datapath 08c4e8eb-0f89-48c7-931d-c8439eb97c4d unbound from our chassis#033[00m
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.285 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08c4e8eb-0f89-48c7-931d-c8439eb97c4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.286 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[857e1a5c-af1f-4404-84e5-316a9378d054]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.286 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d namespace which is not needed anymore#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:05 np0005466031 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000041.scope: Deactivated successfully.
Oct  2 08:29:05 np0005466031 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000041.scope: Consumed 3.558s CPU time.
Oct  2 08:29:05 np0005466031 systemd-machined[192227]: Machine qemu-26-instance-00000041 terminated.
Oct  2 08:29:05 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264770]: [NOTICE]   (264774) : haproxy version is 2.8.14-c23fe91
Oct  2 08:29:05 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264770]: [NOTICE]   (264774) : path to executable is /usr/sbin/haproxy
Oct  2 08:29:05 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264770]: [WARNING]  (264774) : Exiting Master process...
Oct  2 08:29:05 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264770]: [WARNING]  (264774) : Exiting Master process...
Oct  2 08:29:05 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264770]: [ALERT]    (264774) : Current worker (264776) exited with code 143 (Terminated)
Oct  2 08:29:05 np0005466031 neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d[264770]: [WARNING]  (264774) : All workers exited. Exiting... (0)
Oct  2 08:29:05 np0005466031 systemd[1]: libpod-e5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d.scope: Deactivated successfully.
Oct  2 08:29:05 np0005466031 podman[264810]: 2025-10-02 12:29:05.40866777 +0000 UTC m=+0.040601121 container died e5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:29:05 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d-userdata-shm.mount: Deactivated successfully.
Oct  2 08:29:05 np0005466031 systemd[1]: var-lib-containers-storage-overlay-74d9b02a42dc0440644808487da2576f257a1b07312be4cb77d5396f10bd1e3b-merged.mount: Deactivated successfully.
Oct  2 08:29:05 np0005466031 podman[264810]: 2025-10-02 12:29:05.463358277 +0000 UTC m=+0.095291628 container cleanup e5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.464 2 INFO nova.virt.libvirt.driver [-] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Instance destroyed successfully.#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.466 2 DEBUG nova.objects.instance [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lazy-loading 'resources' on Instance uuid f03c0d03-67d1-4162-abbf-748eeda01512 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:05 np0005466031 systemd[1]: libpod-conmon-e5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d.scope: Deactivated successfully.
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.487 2 DEBUG nova.virt.libvirt.vif [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-953315499',display_name='tempest-InstanceActionsTestJSON-server-953315499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-953315499',id=65,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:28:55Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aee47e8927a149149f8b2da7f91e512d',ramdisk_id='',reservation_id='r-hen0wn0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-604575546',owner_user_name='tempest-InstanceActionsTestJSON-604575546-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:29:02Z,user_data=None,user_id='61e6149575e848ffb09a0adcb1fc0829',uuid=f03c0d03-67d1-4162-abbf-748eeda01512,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.488 2 DEBUG nova.network.os_vif_util [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Converting VIF {"id": "7f4de623-85e2-4c78-9cfe-664d6515907f", "address": "fa:16:3e:0d:90:29", "network": {"id": "08c4e8eb-0f89-48c7-931d-c8439eb97c4d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-681797021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee47e8927a149149f8b2da7f91e512d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f4de623-85", "ovs_interfaceid": "7f4de623-85e2-4c78-9cfe-664d6515907f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.488 2 DEBUG nova.network.os_vif_util [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.489 2 DEBUG os_vif [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.491 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f4de623-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.495 2 DEBUG nova.compute.manager [req-7e142fcc-71c9-45b5-b392-4d1bcf0f2dc1 req-0e44e9e4-9b4e-46df-9c9a-24dd34b99702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received event network-vif-unplugged-7f4de623-85e2-4c78-9cfe-664d6515907f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.495 2 DEBUG oslo_concurrency.lockutils [req-7e142fcc-71c9-45b5-b392-4d1bcf0f2dc1 req-0e44e9e4-9b4e-46df-9c9a-24dd34b99702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.496 2 DEBUG oslo_concurrency.lockutils [req-7e142fcc-71c9-45b5-b392-4d1bcf0f2dc1 req-0e44e9e4-9b4e-46df-9c9a-24dd34b99702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.496 2 DEBUG oslo_concurrency.lockutils [req-7e142fcc-71c9-45b5-b392-4d1bcf0f2dc1 req-0e44e9e4-9b4e-46df-9c9a-24dd34b99702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.496 2 DEBUG nova.compute.manager [req-7e142fcc-71c9-45b5-b392-4d1bcf0f2dc1 req-0e44e9e4-9b4e-46df-9c9a-24dd34b99702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] No waiting events found dispatching network-vif-unplugged-7f4de623-85e2-4c78-9cfe-664d6515907f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.497 2 DEBUG nova.compute.manager [req-7e142fcc-71c9-45b5-b392-4d1bcf0f2dc1 req-0e44e9e4-9b4e-46df-9c9a-24dd34b99702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received event network-vif-unplugged-7f4de623-85e2-4c78-9cfe-664d6515907f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.500 2 INFO os_vif [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:90:29,bridge_name='br-int',has_traffic_filtering=True,id=7f4de623-85e2-4c78-9cfe-664d6515907f,network=Network(08c4e8eb-0f89-48c7-931d-c8439eb97c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f4de623-85')
Oct  2 08:29:05 np0005466031 podman[264853]: 2025-10-02 12:29:05.529301847 +0000 UTC m=+0.039720135 container remove e5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.535 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[358e593b-e124-4388-af1a-935d7d9b313f]: (4, ('Thu Oct  2 12:29:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d (e5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d)\ne5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d\nThu Oct  2 12:29:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d (e5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d)\ne5f840a2c11aa7d42a42c5c40817a5a164647a40e65b62a9476c14b5b0cd785d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.537 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a807794e-0784-45ce-9737-5dc9c141d35f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.538 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08c4e8eb-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:29:05 np0005466031 kernel: tap08c4e8eb-00: left promiscuous mode
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:05 np0005466031 nova_compute[235803]: 2025-10-02 12:29:05.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.562 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ceec2e93-634c-4517-872c-985a76144ffe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.593 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[61c2c3de-f625-4118-8c35-4a1a7ba108a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.595 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e623a3da-bea6-41fb-8b35-b0aff4940684]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.611 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2e953e04-15a5-4648-929c-ae8a214858e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599732, 'reachable_time': 27726, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264884, 'error': None, 'target': 'ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.616 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08c4e8eb-0f89-48c7-931d-c8439eb97c4d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  2 08:29:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:05.616 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[3065d81d-d8c5-4d2b-bf0d-df6a5aa3ae3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:05 np0005466031 systemd[1]: run-netns-ovnmeta\x2d08c4e8eb\x2d0f89\x2d48c7\x2d931d\x2dc8439eb97c4d.mount: Deactivated successfully.
Oct  2 08:29:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:05.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:06.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:07.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:07 np0005466031 nova_compute[235803]: 2025-10-02 12:29:07.889 2 DEBUG nova.compute.manager [req-3eb432b4-558f-4bd2-b89f-29a41d2b334b req-54c4e04c-2093-46bf-aa06-210610418939 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received event network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:29:07 np0005466031 nova_compute[235803]: 2025-10-02 12:29:07.890 2 DEBUG oslo_concurrency.lockutils [req-3eb432b4-558f-4bd2-b89f-29a41d2b334b req-54c4e04c-2093-46bf-aa06-210610418939 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:29:07 np0005466031 nova_compute[235803]: 2025-10-02 12:29:07.890 2 DEBUG oslo_concurrency.lockutils [req-3eb432b4-558f-4bd2-b89f-29a41d2b334b req-54c4e04c-2093-46bf-aa06-210610418939 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:29:07 np0005466031 nova_compute[235803]: 2025-10-02 12:29:07.891 2 DEBUG oslo_concurrency.lockutils [req-3eb432b4-558f-4bd2-b89f-29a41d2b334b req-54c4e04c-2093-46bf-aa06-210610418939 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:29:07 np0005466031 nova_compute[235803]: 2025-10-02 12:29:07.891 2 DEBUG nova.compute.manager [req-3eb432b4-558f-4bd2-b89f-29a41d2b334b req-54c4e04c-2093-46bf-aa06-210610418939 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] No waiting events found dispatching network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:29:07 np0005466031 nova_compute[235803]: 2025-10-02 12:29:07.891 2 WARNING nova.compute.manager [req-3eb432b4-558f-4bd2-b89f-29a41d2b334b req-54c4e04c-2093-46bf-aa06-210610418939 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received unexpected event network-vif-plugged-7f4de623-85e2-4c78-9cfe-664d6515907f for instance with vm_state active and task_state deleting.
Oct  2 08:29:08 np0005466031 nova_compute[235803]: 2025-10-02 12:29:08.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:08.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:09 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:29:09 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:29:09 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:29:09 np0005466031 nova_compute[235803]: 2025-10-02 12:29:09.536 2 INFO nova.virt.libvirt.driver [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Deleting instance files /var/lib/nova/instances/f03c0d03-67d1-4162-abbf-748eeda01512_del
Oct  2 08:29:09 np0005466031 nova_compute[235803]: 2025-10-02 12:29:09.537 2 INFO nova.virt.libvirt.driver [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Deletion of /var/lib/nova/instances/f03c0d03-67d1-4162-abbf-748eeda01512_del complete
Oct  2 08:29:09 np0005466031 nova_compute[235803]: 2025-10-02 12:29:09.622 2 INFO nova.compute.manager [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Took 4.40 seconds to destroy the instance on the hypervisor.
Oct  2 08:29:09 np0005466031 nova_compute[235803]: 2025-10-02 12:29:09.622 2 DEBUG oslo.service.loopingcall [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:29:09 np0005466031 nova_compute[235803]: 2025-10-02 12:29:09.623 2 DEBUG nova.compute.manager [-] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:29:09 np0005466031 nova_compute[235803]: 2025-10-02 12:29:09.623 2 DEBUG nova.network.neutron [-] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:29:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:09.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:10 np0005466031 nova_compute[235803]: 2025-10-02 12:29:10.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:10.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:10 np0005466031 nova_compute[235803]: 2025-10-02 12:29:10.774 2 DEBUG nova.network.neutron [-] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:29:10 np0005466031 nova_compute[235803]: 2025-10-02 12:29:10.794 2 INFO nova.compute.manager [-] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Took 1.17 seconds to deallocate network for instance.
Oct  2 08:29:10 np0005466031 nova_compute[235803]: 2025-10-02 12:29:10.872 2 DEBUG oslo_concurrency.lockutils [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:29:10 np0005466031 nova_compute[235803]: 2025-10-02 12:29:10.872 2 DEBUG oslo_concurrency.lockutils [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:29:10 np0005466031 nova_compute[235803]: 2025-10-02 12:29:10.941 2 DEBUG nova.compute.manager [req-80453028-0d9d-4112-8cb2-00997cf66817 req-caa26615-b12b-4822-b2ad-52a0ead2a2fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Received event network-vif-deleted-7f4de623-85e2-4c78-9cfe-664d6515907f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:29:10 np0005466031 nova_compute[235803]: 2025-10-02 12:29:10.966 2 DEBUG oslo_concurrency.processutils [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:29:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:29:11 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1899312348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:29:11 np0005466031 nova_compute[235803]: 2025-10-02 12:29:11.400 2 DEBUG oslo_concurrency.processutils [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:29:11 np0005466031 nova_compute[235803]: 2025-10-02 12:29:11.409 2 DEBUG nova.compute.provider_tree [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:29:11 np0005466031 nova_compute[235803]: 2025-10-02 12:29:11.426 2 DEBUG nova.scheduler.client.report [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:29:11 np0005466031 nova_compute[235803]: 2025-10-02 12:29:11.468 2 DEBUG oslo_concurrency.lockutils [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:29:11 np0005466031 nova_compute[235803]: 2025-10-02 12:29:11.505 2 INFO nova.scheduler.client.report [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Deleted allocations for instance f03c0d03-67d1-4162-abbf-748eeda01512
Oct  2 08:29:11 np0005466031 nova_compute[235803]: 2025-10-02 12:29:11.590 2 DEBUG oslo_concurrency.lockutils [None req-4beba222-7072-437b-94ec-5a47b52cf401 61e6149575e848ffb09a0adcb1fc0829 aee47e8927a149149f8b2da7f91e512d - - default default] Lock "f03c0d03-67d1-4162-abbf-748eeda01512" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.371s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:29:11 np0005466031 podman[265044]: 2025-10-02 12:29:11.66839905 +0000 UTC m=+0.088938024 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct  2 08:29:11 np0005466031 podman[265045]: 2025-10-02 12:29:11.704350037 +0000 UTC m=+0.125096317 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct  2 08:29:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:11.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:12.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:13 np0005466031 nova_compute[235803]: 2025-10-02 12:29:13.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e220 e220: 3 total, 3 up, 3 in
Oct  2 08:29:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:13.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:14 np0005466031 nova_compute[235803]: 2025-10-02 12:29:14.015 2 DEBUG nova.compute.manager [None req-c5f8fc61-4eb4-442a-ad37-0c824f373bd5 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:29:14 np0005466031 nova_compute[235803]: 2025-10-02 12:29:14.062 2 INFO nova.compute.manager [None req-c5f8fc61-4eb4-442a-ad37-0c824f373bd5 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] instance snapshotting
Oct  2 08:29:14 np0005466031 nova_compute[235803]: 2025-10-02 12:29:14.329 2 INFO nova.virt.libvirt.driver [None req-c5f8fc61-4eb4-442a-ad37-0c824f373bd5 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Beginning live snapshot process
Oct  2 08:29:14 np0005466031 nova_compute[235803]: 2025-10-02 12:29:14.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:14 np0005466031 nova_compute[235803]: 2025-10-02 12:29:14.501 2 DEBUG nova.virt.libvirt.imagebackend [None req-c5f8fc61-4eb4-442a-ad37-0c824f373bd5 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct  2 08:29:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:14.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:14 np0005466031 nova_compute[235803]: 2025-10-02 12:29:14.789 2 DEBUG nova.storage.rbd_utils [None req-c5f8fc61-4eb4-442a-ad37-0c824f373bd5 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] creating snapshot(a166629b8d4f45bfaf5e4e0aa324b947) on rbd image(e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:29:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:15 np0005466031 nova_compute[235803]: 2025-10-02 12:29:15.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:15.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e221 e221: 3 total, 3 up, 3 in
Oct  2 08:29:15 np0005466031 nova_compute[235803]: 2025-10-02 12:29:15.965 2 DEBUG nova.storage.rbd_utils [None req-c5f8fc61-4eb4-442a-ad37-0c824f373bd5 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] cloning vms/e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk@a166629b8d4f45bfaf5e4e0aa324b947 to images/4038405e-7d51-444f-9567-0cdacfa79f42 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:29:16 np0005466031 nova_compute[235803]: 2025-10-02 12:29:16.290 2 DEBUG nova.storage.rbd_utils [None req-c5f8fc61-4eb4-442a-ad37-0c824f373bd5 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] flattening images/4038405e-7d51-444f-9567-0cdacfa79f42 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:29:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:16.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:17.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:18 np0005466031 nova_compute[235803]: 2025-10-02 12:29:18.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:18.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:19 np0005466031 nova_compute[235803]: 2025-10-02 12:29:19.393 2 DEBUG nova.storage.rbd_utils [None req-c5f8fc61-4eb4-442a-ad37-0c824f373bd5 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] removing snapshot(a166629b8d4f45bfaf5e4e0aa324b947) on rbd image(e985fe5c-e98d-4f5b-8985-61a156cde5f9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:29:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:19.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e222 e222: 3 total, 3 up, 3 in
Oct  2 08:29:19 np0005466031 nova_compute[235803]: 2025-10-02 12:29:19.963 2 DEBUG nova.storage.rbd_utils [None req-c5f8fc61-4eb4-442a-ad37-0c824f373bd5 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] creating snapshot(snap) on rbd image(4038405e-7d51-444f-9567-0cdacfa79f42) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:29:20 np0005466031 nova_compute[235803]: 2025-10-02 12:29:20.463 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408145.462231, f03c0d03-67d1-4162-abbf-748eeda01512 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:29:20 np0005466031 nova_compute[235803]: 2025-10-02 12:29:20.463 2 INFO nova.compute.manager [-] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:29:20 np0005466031 nova_compute[235803]: 2025-10-02 12:29:20.487 2 DEBUG nova.compute.manager [None req-d19d789d-313e-4362-8a9d-37722a53d229 - - - - - -] [instance: f03c0d03-67d1-4162-abbf-748eeda01512] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:20 np0005466031 nova_compute[235803]: 2025-10-02 12:29:20.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:20 np0005466031 podman[265282]: 2025-10-02 12:29:20.663761314 +0000 UTC m=+0.082714785 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:29:20 np0005466031 podman[265281]: 2025-10-02 12:29:20.665295108 +0000 UTC m=+0.079264206 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 08:29:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:20.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e223 e223: 3 total, 3 up, 3 in
Oct  2 08:29:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:21.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:22 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:29:22 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:29:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:22.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:23 np0005466031 nova_compute[235803]: 2025-10-02 12:29:23.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:23.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:24 np0005466031 nova_compute[235803]: 2025-10-02 12:29:24.287 2 INFO nova.virt.libvirt.driver [None req-c5f8fc61-4eb4-442a-ad37-0c824f373bd5 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Snapshot image upload complete#033[00m
Oct  2 08:29:24 np0005466031 nova_compute[235803]: 2025-10-02 12:29:24.288 2 INFO nova.compute.manager [None req-c5f8fc61-4eb4-442a-ad37-0c824f373bd5 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Took 10.22 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:29:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:24.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:25 np0005466031 nova_compute[235803]: 2025-10-02 12:29:25.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:25.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:25.834 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:25.835 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:25.835 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:26.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e224 e224: 3 total, 3 up, 3 in
Oct  2 08:29:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:27.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:28 np0005466031 nova_compute[235803]: 2025-10-02 12:29:28.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:28.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e225 e225: 3 total, 3 up, 3 in
Oct  2 08:29:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:29.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:30 np0005466031 nova_compute[235803]: 2025-10-02 12:29:30.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:30.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e226 e226: 3 total, 3 up, 3 in
Oct  2 08:29:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:31.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:32.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e227 e227: 3 total, 3 up, 3 in
Oct  2 08:29:33 np0005466031 nova_compute[235803]: 2025-10-02 12:29:33.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:33.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:34.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:35 np0005466031 nova_compute[235803]: 2025-10-02 12:29:35.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:35.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:36.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:37.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:38 np0005466031 nova_compute[235803]: 2025-10-02 12:29:38.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:38.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e228 e228: 3 total, 3 up, 3 in
Oct  2 08:29:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:39.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:40 np0005466031 nova_compute[235803]: 2025-10-02 12:29:40.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:40.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:41.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:42 np0005466031 podman[265432]: 2025-10-02 12:29:42.640755755 +0000 UTC m=+0.071096970 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:29:42 np0005466031 podman[265433]: 2025-10-02 12:29:42.684561578 +0000 UTC m=+0.103232547 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:29:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:42.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:43 np0005466031 nova_compute[235803]: 2025-10-02 12:29:43.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:43 np0005466031 nova_compute[235803]: 2025-10-02 12:29:43.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:43 np0005466031 nova_compute[235803]: 2025-10-02 12:29:43.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:43 np0005466031 nova_compute[235803]: 2025-10-02 12:29:43.656 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:43 np0005466031 nova_compute[235803]: 2025-10-02 12:29:43.657 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:43 np0005466031 nova_compute[235803]: 2025-10-02 12:29:43.657 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:43 np0005466031 nova_compute[235803]: 2025-10-02 12:29:43.657 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:29:43 np0005466031 nova_compute[235803]: 2025-10-02 12:29:43.658 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:43.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e229 e229: 3 total, 3 up, 3 in
Oct  2 08:29:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:29:44 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4200825112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.065 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.122 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.123 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.308 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.311 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4395MB free_disk=20.85568618774414GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.311 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.312 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.393 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance e985fe5c-e98d-4f5b-8985-61a156cde5f9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.394 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.395 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.431 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:44.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:29:44 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3910498834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.882 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.889 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.916 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.951 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:29:44 np0005466031 nova_compute[235803]: 2025-10-02 12:29:44.952 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:45 np0005466031 nova_compute[235803]: 2025-10-02 12:29:45.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:45.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:45 np0005466031 nova_compute[235803]: 2025-10-02 12:29:45.948 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:45 np0005466031 nova_compute[235803]: 2025-10-02 12:29:45.949 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:46.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:47 np0005466031 nova_compute[235803]: 2025-10-02 12:29:47.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:47.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:48 np0005466031 nova_compute[235803]: 2025-10-02 12:29:48.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:48 np0005466031 nova_compute[235803]: 2025-10-02 12:29:48.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:48 np0005466031 nova_compute[235803]: 2025-10-02 12:29:48.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:29:48 np0005466031 nova_compute[235803]: 2025-10-02 12:29:48.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:29:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e230 e230: 3 total, 3 up, 3 in
Oct  2 08:29:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:48.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:48 np0005466031 nova_compute[235803]: 2025-10-02 12:29:48.961 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-e985fe5c-e98d-4f5b-8985-61a156cde5f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:29:48 np0005466031 nova_compute[235803]: 2025-10-02 12:29:48.961 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-e985fe5c-e98d-4f5b-8985-61a156cde5f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:29:48 np0005466031 nova_compute[235803]: 2025-10-02 12:29:48.962 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:29:48 np0005466031 nova_compute[235803]: 2025-10-02 12:29:48.962 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e985fe5c-e98d-4f5b-8985-61a156cde5f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e231 e231: 3 total, 3 up, 3 in
Oct  2 08:29:49 np0005466031 nova_compute[235803]: 2025-10-02 12:29:49.430 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:29:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:49.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:50 np0005466031 nova_compute[235803]: 2025-10-02 12:29:50.082 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:29:50 np0005466031 nova_compute[235803]: 2025-10-02 12:29:50.095 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-e985fe5c-e98d-4f5b-8985-61a156cde5f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:29:50 np0005466031 nova_compute[235803]: 2025-10-02 12:29:50.095 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:29:50 np0005466031 nova_compute[235803]: 2025-10-02 12:29:50.095 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:50 np0005466031 nova_compute[235803]: 2025-10-02 12:29:50.096 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:50 np0005466031 nova_compute[235803]: 2025-10-02 12:29:50.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:50 np0005466031 nova_compute[235803]: 2025-10-02 12:29:50.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:50 np0005466031 nova_compute[235803]: 2025-10-02 12:29:50.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:29:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:50.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:50.837 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:50 np0005466031 nova_compute[235803]: 2025-10-02 12:29:50.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:50.839 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:29:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e232 e232: 3 total, 3 up, 3 in
Oct  2 08:29:51 np0005466031 podman[265527]: 2025-10-02 12:29:51.635564243 +0000 UTC m=+0.063449880 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:29:51 np0005466031 podman[265526]: 2025-10-02 12:29:51.651261826 +0000 UTC m=+0.082737316 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:29:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:51.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:52 np0005466031 nova_compute[235803]: 2025-10-02 12:29:52.777 2 DEBUG oslo_concurrency.lockutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "e985fe5c-e98d-4f5b-8985-61a156cde5f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:52 np0005466031 nova_compute[235803]: 2025-10-02 12:29:52.778 2 DEBUG oslo_concurrency.lockutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "e985fe5c-e98d-4f5b-8985-61a156cde5f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:52 np0005466031 nova_compute[235803]: 2025-10-02 12:29:52.778 2 DEBUG oslo_concurrency.lockutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "e985fe5c-e98d-4f5b-8985-61a156cde5f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:52 np0005466031 nova_compute[235803]: 2025-10-02 12:29:52.778 2 DEBUG oslo_concurrency.lockutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "e985fe5c-e98d-4f5b-8985-61a156cde5f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:52 np0005466031 nova_compute[235803]: 2025-10-02 12:29:52.779 2 DEBUG oslo_concurrency.lockutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "e985fe5c-e98d-4f5b-8985-61a156cde5f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:52 np0005466031 nova_compute[235803]: 2025-10-02 12:29:52.780 2 INFO nova.compute.manager [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Terminating instance#033[00m
Oct  2 08:29:52 np0005466031 nova_compute[235803]: 2025-10-02 12:29:52.781 2 DEBUG oslo_concurrency.lockutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "refresh_cache-e985fe5c-e98d-4f5b-8985-61a156cde5f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:29:52 np0005466031 nova_compute[235803]: 2025-10-02 12:29:52.781 2 DEBUG oslo_concurrency.lockutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquired lock "refresh_cache-e985fe5c-e98d-4f5b-8985-61a156cde5f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:29:52 np0005466031 nova_compute[235803]: 2025-10-02 12:29:52.781 2 DEBUG nova.network.neutron [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:29:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:52.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:53 np0005466031 nova_compute[235803]: 2025-10-02 12:29:53.006 2 DEBUG nova.network.neutron [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:29:53 np0005466031 nova_compute[235803]: 2025-10-02 12:29:53.459 2 DEBUG nova.network.neutron [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:29:53 np0005466031 nova_compute[235803]: 2025-10-02 12:29:53.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:53 np0005466031 nova_compute[235803]: 2025-10-02 12:29:53.620 2 DEBUG oslo_concurrency.lockutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Releasing lock "refresh_cache-e985fe5c-e98d-4f5b-8985-61a156cde5f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:29:53 np0005466031 nova_compute[235803]: 2025-10-02 12:29:53.620 2 DEBUG nova.compute.manager [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:29:53 np0005466031 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000042.scope: Deactivated successfully.
Oct  2 08:29:53 np0005466031 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000042.scope: Consumed 14.449s CPU time.
Oct  2 08:29:53 np0005466031 systemd-machined[192227]: Machine qemu-24-instance-00000042 terminated.
Oct  2 08:29:53 np0005466031 nova_compute[235803]: 2025-10-02 12:29:53.840 2 INFO nova.virt.libvirt.driver [-] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Instance destroyed successfully.#033[00m
Oct  2 08:29:53 np0005466031 nova_compute[235803]: 2025-10-02 12:29:53.841 2 DEBUG nova.objects.instance [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lazy-loading 'resources' on Instance uuid e985fe5c-e98d-4f5b-8985-61a156cde5f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:53.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e233 e233: 3 total, 3 up, 3 in
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.381 2 INFO nova.virt.libvirt.driver [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Deleting instance files /var/lib/nova/instances/e985fe5c-e98d-4f5b-8985-61a156cde5f9_del#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.381 2 INFO nova.virt.libvirt.driver [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Deletion of /var/lib/nova/instances/e985fe5c-e98d-4f5b-8985-61a156cde5f9_del complete#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.435 2 INFO nova.compute.manager [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.435 2 DEBUG oslo.service.loopingcall [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.436 2 DEBUG nova.compute.manager [-] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.436 2 DEBUG nova.network.neutron [-] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.553 2 DEBUG nova.network.neutron [-] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.589 2 DEBUG nova.network.neutron [-] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.604 2 INFO nova.compute.manager [-] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Took 0.17 seconds to deallocate network for instance.#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.630 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.664 2 DEBUG oslo_concurrency.lockutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.664 2 DEBUG oslo_concurrency.lockutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.691 2 DEBUG nova.scheduler.client.report [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.709 2 DEBUG nova.scheduler.client.report [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.709 2 DEBUG nova.compute.provider_tree [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.725 2 DEBUG nova.scheduler.client.report [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.758 2 DEBUG nova.scheduler.client.report [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:29:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:54.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:29:54.840 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:54 np0005466031 nova_compute[235803]: 2025-10-02 12:29:54.848 2 DEBUG oslo_concurrency.processutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:29:55 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2737000721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:29:55 np0005466031 nova_compute[235803]: 2025-10-02 12:29:55.279 2 DEBUG oslo_concurrency.processutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:55 np0005466031 nova_compute[235803]: 2025-10-02 12:29:55.285 2 DEBUG nova.compute.provider_tree [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:29:55 np0005466031 nova_compute[235803]: 2025-10-02 12:29:55.412 2 DEBUG nova.scheduler.client.report [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:29:55 np0005466031 nova_compute[235803]: 2025-10-02 12:29:55.537 2 DEBUG oslo_concurrency.lockutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:55 np0005466031 nova_compute[235803]: 2025-10-02 12:29:55.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:55 np0005466031 nova_compute[235803]: 2025-10-02 12:29:55.616 2 INFO nova.scheduler.client.report [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Deleted allocations for instance e985fe5c-e98d-4f5b-8985-61a156cde5f9#033[00m
Oct  2 08:29:55 np0005466031 nova_compute[235803]: 2025-10-02 12:29:55.747 2 DEBUG oslo_concurrency.lockutils [None req-c66f0e4e-255f-420c-aa68-cd9b458e0eba 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "e985fe5c-e98d-4f5b-8985-61a156cde5f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:55.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:56.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:57.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:58 np0005466031 nova_compute[235803]: 2025-10-02 12:29:58.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:58.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e234 e234: 3 total, 3 up, 3 in
Oct  2 08:29:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:29:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:29:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:59.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:29:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:00 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 08:30:00 np0005466031 nova_compute[235803]: 2025-10-02 12:30:00.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:00.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:01.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:02 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:02Z|00227|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct  2 08:30:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e235 e235: 3 total, 3 up, 3 in
Oct  2 08:30:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:02.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:03 np0005466031 nova_compute[235803]: 2025-10-02 12:30:03.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:03.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:30:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:04.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:30:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e236 e236: 3 total, 3 up, 3 in
Oct  2 08:30:05 np0005466031 nova_compute[235803]: 2025-10-02 12:30:05.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:05.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:06.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:07.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:08 np0005466031 nova_compute[235803]: 2025-10-02 12:30:08.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:08.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:08 np0005466031 nova_compute[235803]: 2025-10-02 12:30:08.839 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408193.837801, e985fe5c-e98d-4f5b-8985-61a156cde5f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:08 np0005466031 nova_compute[235803]: 2025-10-02 12:30:08.839 2 INFO nova.compute.manager [-] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:30:08 np0005466031 nova_compute[235803]: 2025-10-02 12:30:08.859 2 DEBUG nova.compute.manager [None req-238d5dff-ba4e-4ae4-a826-8ad761746cc4 - - - - - -] [instance: e985fe5c-e98d-4f5b-8985-61a156cde5f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e237 e237: 3 total, 3 up, 3 in
Oct  2 08:30:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:09.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:10 np0005466031 nova_compute[235803]: 2025-10-02 12:30:10.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:10.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:11.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e238 e238: 3 total, 3 up, 3 in
Oct  2 08:30:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:12.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:13 np0005466031 nova_compute[235803]: 2025-10-02 12:30:13.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:13 np0005466031 podman[265671]: 2025-10-02 12:30:13.661367361 +0000 UTC m=+0.079028379 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:30:13 np0005466031 podman[265672]: 2025-10-02 12:30:13.689137511 +0000 UTC m=+0.113633916 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 08:30:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:13.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:14.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:15 np0005466031 nova_compute[235803]: 2025-10-02 12:30:15.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:15.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:16 np0005466031 nova_compute[235803]: 2025-10-02 12:30:16.726 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "3a66c549-fa18-498a-93d5-ac91e746b002" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:16 np0005466031 nova_compute[235803]: 2025-10-02 12:30:16.727 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:16 np0005466031 nova_compute[235803]: 2025-10-02 12:30:16.745 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:30:16 np0005466031 nova_compute[235803]: 2025-10-02 12:30:16.806 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "660bced2-3ec3-45ab-bb6f-155853e3d658" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:16 np0005466031 nova_compute[235803]: 2025-10-02 12:30:16.807 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:16.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:16 np0005466031 nova_compute[235803]: 2025-10-02 12:30:16.849 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:30:16 np0005466031 nova_compute[235803]: 2025-10-02 12:30:16.852 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:16 np0005466031 nova_compute[235803]: 2025-10-02 12:30:16.853 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:16 np0005466031 nova_compute[235803]: 2025-10-02 12:30:16.864 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:30:16 np0005466031 nova_compute[235803]: 2025-10-02 12:30:16.865 2 INFO nova.compute.claims [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:30:16 np0005466031 nova_compute[235803]: 2025-10-02 12:30:16.952 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.038 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3677734845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.508 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.516 2 DEBUG nova.compute.provider_tree [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.542 2 DEBUG nova.scheduler.client.report [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.578 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.579 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.582 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.588 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.588 2 INFO nova.compute.claims [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.658 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.659 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.680 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.698 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.769 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.870 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.872 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.872 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Creating image(s)#033[00m
Oct  2 08:30:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:30:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:17.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.903 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 3a66c549-fa18-498a-93d5-ac91e746b002_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.928 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 3a66c549-fa18-498a-93d5-ac91e746b002_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.956 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 3a66c549-fa18-498a-93d5-ac91e746b002_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.959 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:17 np0005466031 nova_compute[235803]: 2025-10-02 12:30:17.981 2 DEBUG nova.policy [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '045de4bc70204ae8b6975513839061d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '546222ddef05450d9aeb91e721403b5b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.015 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.015 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.016 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.016 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.038 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 3a66c549-fa18-498a-93d5-ac91e746b002_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.043 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3a66c549-fa18-498a-93d5-ac91e746b002_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1106284102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.201 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.206 2 DEBUG nova.compute.provider_tree [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.227 2 DEBUG nova.scheduler.client.report [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.246 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.247 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.286 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.287 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.307 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.326 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.407 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.409 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.409 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Creating image(s)#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.434 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 660bced2-3ec3-45ab-bb6f-155853e3d658_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.464 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 660bced2-3ec3-45ab-bb6f-155853e3d658_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.484 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 660bced2-3ec3-45ab-bb6f-155853e3d658_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.487 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.510 2 DEBUG nova.policy [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '045de4bc70204ae8b6975513839061d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '546222ddef05450d9aeb91e721403b5b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.550 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.551 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.551 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.552 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.571 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 660bced2-3ec3-45ab-bb6f-155853e3d658_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.575 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 660bced2-3ec3-45ab-bb6f-155853e3d658_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:18 np0005466031 nova_compute[235803]: 2025-10-02 12:30:18.601 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Successfully created port: 12796a62-46f7-45b8-9915-c8ce909d615b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:30:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:18.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:19 np0005466031 nova_compute[235803]: 2025-10-02 12:30:19.261 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Successfully created port: 78202441-8869-4d28-b719-5732a733fe90 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:30:19 np0005466031 nova_compute[235803]: 2025-10-02 12:30:19.792 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Successfully updated port: 12796a62-46f7-45b8-9915-c8ce909d615b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:30:19 np0005466031 nova_compute[235803]: 2025-10-02 12:30:19.811 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "refresh_cache-3a66c549-fa18-498a-93d5-ac91e746b002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:19 np0005466031 nova_compute[235803]: 2025-10-02 12:30:19.812 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquired lock "refresh_cache-3a66c549-fa18-498a-93d5-ac91e746b002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:19 np0005466031 nova_compute[235803]: 2025-10-02 12:30:19.812 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:30:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:30:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:19.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:30:19 np0005466031 nova_compute[235803]: 2025-10-02 12:30:19.928 2 DEBUG nova.compute.manager [req-baf91988-bea4-454d-9abd-49f669fa5212 req-7e63ab99-28e2-4448-8577-6cad2f449bd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Received event network-changed-12796a62-46f7-45b8-9915-c8ce909d615b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:19 np0005466031 nova_compute[235803]: 2025-10-02 12:30:19.928 2 DEBUG nova.compute.manager [req-baf91988-bea4-454d-9abd-49f669fa5212 req-7e63ab99-28e2-4448-8577-6cad2f449bd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Refreshing instance network info cache due to event network-changed-12796a62-46f7-45b8-9915-c8ce909d615b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:30:19 np0005466031 nova_compute[235803]: 2025-10-02 12:30:19.929 2 DEBUG oslo_concurrency.lockutils [req-baf91988-bea4-454d-9abd-49f669fa5212 req-7e63ab99-28e2-4448-8577-6cad2f449bd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3a66c549-fa18-498a-93d5-ac91e746b002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.013 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.217 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Successfully updated port: 78202441-8869-4d28-b719-5732a733fe90 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.232 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "refresh_cache-660bced2-3ec3-45ab-bb6f-155853e3d658" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.233 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquired lock "refresh_cache-660bced2-3ec3-45ab-bb6f-155853e3d658" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.233 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.461 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.564 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3a66c549-fa18-498a-93d5-ac91e746b002_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.629 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] resizing rbd image 3a66c549-fa18-498a-93d5-ac91e746b002_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.746 2 DEBUG nova.objects.instance [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lazy-loading 'migration_context' on Instance uuid 3a66c549-fa18-498a-93d5-ac91e746b002 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.777 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.777 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Ensure instance console log exists: /var/lib/nova/instances/3a66c549-fa18-498a-93d5-ac91e746b002/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.778 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.778 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.778 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:20.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.911 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Updating instance_info_cache with network_info: [{"id": "12796a62-46f7-45b8-9915-c8ce909d615b", "address": "fa:16:3e:bf:8e:19", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12796a62-46", "ovs_interfaceid": "12796a62-46f7-45b8-9915-c8ce909d615b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.928 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Releasing lock "refresh_cache-3a66c549-fa18-498a-93d5-ac91e746b002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.928 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Instance network_info: |[{"id": "12796a62-46f7-45b8-9915-c8ce909d615b", "address": "fa:16:3e:bf:8e:19", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12796a62-46", "ovs_interfaceid": "12796a62-46f7-45b8-9915-c8ce909d615b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.928 2 DEBUG oslo_concurrency.lockutils [req-baf91988-bea4-454d-9abd-49f669fa5212 req-7e63ab99-28e2-4448-8577-6cad2f449bd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3a66c549-fa18-498a-93d5-ac91e746b002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.929 2 DEBUG nova.network.neutron [req-baf91988-bea4-454d-9abd-49f669fa5212 req-7e63ab99-28e2-4448-8577-6cad2f449bd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Refreshing network info cache for port 12796a62-46f7-45b8-9915-c8ce909d615b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.931 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Start _get_guest_xml network_info=[{"id": "12796a62-46f7-45b8-9915-c8ce909d615b", "address": "fa:16:3e:bf:8e:19", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12796a62-46", "ovs_interfaceid": "12796a62-46f7-45b8-9915-c8ce909d615b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.935 2 WARNING nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.939 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.940 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.944 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.945 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.946 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.946 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.946 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.947 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.947 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.947 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.947 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.947 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.948 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.948 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.948 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.948 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:30:20 np0005466031 nova_compute[235803]: 2025-10-02 12:30:20.950 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 e239: 3 total, 3 up, 3 in
Oct  2 08:30:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1752376986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:21 np0005466031 nova_compute[235803]: 2025-10-02 12:30:21.640 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Updating instance_info_cache with network_info: [{"id": "78202441-8869-4d28-b719-5732a733fe90", "address": "fa:16:3e:8f:ee:17", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78202441-88", "ovs_interfaceid": "78202441-8869-4d28-b719-5732a733fe90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:21 np0005466031 nova_compute[235803]: 2025-10-02 12:30:21.661 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Releasing lock "refresh_cache-660bced2-3ec3-45ab-bb6f-155853e3d658" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:21 np0005466031 nova_compute[235803]: 2025-10-02 12:30:21.661 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Instance network_info: |[{"id": "78202441-8869-4d28-b719-5732a733fe90", "address": "fa:16:3e:8f:ee:17", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78202441-88", "ovs_interfaceid": "78202441-8869-4d28-b719-5732a733fe90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:30:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:21.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:21 np0005466031 nova_compute[235803]: 2025-10-02 12:30:21.895 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.945s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:21 np0005466031 nova_compute[235803]: 2025-10-02 12:30:21.922 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 3a66c549-fa18-498a-93d5-ac91e746b002_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:21 np0005466031 nova_compute[235803]: 2025-10-02 12:30:21.925 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.029 2 DEBUG nova.compute.manager [req-58cf7055-7759-4fec-9213-5ae50f565963 req-68760a7d-ad2a-416c-b63c-e2e18b47c879 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Received event network-changed-78202441-8869-4d28-b719-5732a733fe90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.030 2 DEBUG nova.compute.manager [req-58cf7055-7759-4fec-9213-5ae50f565963 req-68760a7d-ad2a-416c-b63c-e2e18b47c879 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Refreshing instance network info cache due to event network-changed-78202441-8869-4d28-b719-5732a733fe90. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.030 2 DEBUG oslo_concurrency.lockutils [req-58cf7055-7759-4fec-9213-5ae50f565963 req-68760a7d-ad2a-416c-b63c-e2e18b47c879 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-660bced2-3ec3-45ab-bb6f-155853e3d658" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.030 2 DEBUG oslo_concurrency.lockutils [req-58cf7055-7759-4fec-9213-5ae50f565963 req-68760a7d-ad2a-416c-b63c-e2e18b47c879 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-660bced2-3ec3-45ab-bb6f-155853e3d658" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.031 2 DEBUG nova.network.neutron [req-58cf7055-7759-4fec-9213-5ae50f565963 req-68760a7d-ad2a-416c-b63c-e2e18b47c879 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Refreshing network info cache for port 78202441-8869-4d28-b719-5732a733fe90 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:30:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2545413803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:22 np0005466031 podman[266162]: 2025-10-02 12:30:22.550773542 +0000 UTC m=+0.065065046 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:30:22 np0005466031 podman[266163]: 2025-10-02 12:30:22.557829125 +0000 UTC m=+0.070056230 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.813 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.887s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.814 2 DEBUG nova.virt.libvirt.vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-978789831',display_name='tempest-ListServersNegativeTestJSON-server-978789831-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-978789831-1',id=71,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='546222ddef05450d9aeb91e721403b5b',ramdisk_id='',reservation_id='r-dhkt2a9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-400261674',owner_user_name='tempest-ListServersNegativeTestJSON-400261674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:17Z,user_data=None,user_id='045de4bc70204ae8b6975513839061d8',uuid=3a66c549-fa18-498a-93d5-ac91e746b002,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12796a62-46f7-45b8-9915-c8ce909d615b", "address": "fa:16:3e:bf:8e:19", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12796a62-46", "ovs_interfaceid": "12796a62-46f7-45b8-9915-c8ce909d615b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.815 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converting VIF {"id": "12796a62-46f7-45b8-9915-c8ce909d615b", "address": "fa:16:3e:bf:8e:19", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12796a62-46", "ovs_interfaceid": "12796a62-46f7-45b8-9915-c8ce909d615b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.816 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:8e:19,bridge_name='br-int',has_traffic_filtering=True,id=12796a62-46f7-45b8-9915-c8ce909d615b,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12796a62-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.817 2 DEBUG nova.objects.instance [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a66c549-fa18-498a-93d5-ac91e746b002 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.841 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  <uuid>3a66c549-fa18-498a-93d5-ac91e746b002</uuid>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  <name>instance-00000047</name>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <nova:name>tempest-ListServersNegativeTestJSON-server-978789831-1</nova:name>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:30:20</nova:creationTime>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <nova:user uuid="045de4bc70204ae8b6975513839061d8">tempest-ListServersNegativeTestJSON-400261674-project-member</nova:user>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <nova:project uuid="546222ddef05450d9aeb91e721403b5b">tempest-ListServersNegativeTestJSON-400261674</nova:project>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <nova:port uuid="12796a62-46f7-45b8-9915-c8ce909d615b">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <entry name="serial">3a66c549-fa18-498a-93d5-ac91e746b002</entry>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <entry name="uuid">3a66c549-fa18-498a-93d5-ac91e746b002</entry>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/3a66c549-fa18-498a-93d5-ac91e746b002_disk">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/3a66c549-fa18-498a-93d5-ac91e746b002_disk.config">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:bf:8e:19"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <target dev="tap12796a62-46"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/3a66c549-fa18-498a-93d5-ac91e746b002/console.log" append="off"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:30:22 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:30:22 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:30:22 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:30:22 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.842 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Preparing to wait for external event network-vif-plugged-12796a62-46f7-45b8-9915-c8ce909d615b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.842 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.842 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.842 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.843 2 DEBUG nova.virt.libvirt.vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-978789831',display_name='tempest-ListServersNegativeTestJSON-server-978789831-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-978789831-1',id=71,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='546222ddef05450d9aeb91e721403b5b',ramdisk_id='',reservation_id='r-dhkt2a9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-400261674',owner_user_name='tempest-ListServersNegativeTestJSON-400261674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:17Z,user_data=None,user_id='045de4bc70204ae8b6975513839061d8',uuid=3a66c549-fa18-498a-93d5-ac91e746b002,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12796a62-46f7-45b8-9915-c8ce909d615b", "address": "fa:16:3e:bf:8e:19", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12796a62-46", "ovs_interfaceid": "12796a62-46f7-45b8-9915-c8ce909d615b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.843 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converting VIF {"id": "12796a62-46f7-45b8-9915-c8ce909d615b", "address": "fa:16:3e:bf:8e:19", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12796a62-46", "ovs_interfaceid": "12796a62-46f7-45b8-9915-c8ce909d615b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.844 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:8e:19,bridge_name='br-int',has_traffic_filtering=True,id=12796a62-46f7-45b8-9915-c8ce909d615b,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12796a62-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.845 2 DEBUG os_vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:8e:19,bridge_name='br-int',has_traffic_filtering=True,id=12796a62-46f7-45b8-9915-c8ce909d615b,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12796a62-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.846 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.846 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.851 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12796a62-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.851 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12796a62-46, col_values=(('external_ids', {'iface-id': '12796a62-46f7-45b8-9915-c8ce909d615b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:8e:19', 'vm-uuid': '3a66c549-fa18-498a-93d5-ac91e746b002'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:22.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:22 np0005466031 NetworkManager[44907]: <info>  [1759408222.8918] manager: (tap12796a62-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.897 2 INFO os_vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:8e:19,bridge_name='br-int',has_traffic_filtering=True,id=12796a62-46f7-45b8-9915-c8ce909d615b,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12796a62-46')#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.948 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.949 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.949 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] No VIF found with MAC fa:16:3e:bf:8e:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.949 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Using config drive#033[00m
Oct  2 08:30:22 np0005466031 nova_compute[235803]: 2025-10-02 12:30:22.973 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 3a66c549-fa18-498a-93d5-ac91e746b002_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.147 2 DEBUG nova.network.neutron [req-baf91988-bea4-454d-9abd-49f669fa5212 req-7e63ab99-28e2-4448-8577-6cad2f449bd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Updated VIF entry in instance network info cache for port 12796a62-46f7-45b8-9915-c8ce909d615b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.148 2 DEBUG nova.network.neutron [req-baf91988-bea4-454d-9abd-49f669fa5212 req-7e63ab99-28e2-4448-8577-6cad2f449bd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Updating instance_info_cache with network_info: [{"id": "12796a62-46f7-45b8-9915-c8ce909d615b", "address": "fa:16:3e:bf:8e:19", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12796a62-46", "ovs_interfaceid": "12796a62-46f7-45b8-9915-c8ce909d615b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.165 2 DEBUG oslo_concurrency.lockutils [req-baf91988-bea4-454d-9abd-49f669fa5212 req-7e63ab99-28e2-4448-8577-6cad2f449bd1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3a66c549-fa18-498a-93d5-ac91e746b002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.381 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Creating config drive at /var/lib/nova/instances/3a66c549-fa18-498a-93d5-ac91e746b002/disk.config#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.389 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a66c549-fa18-498a-93d5-ac91e746b002/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg49r8vhc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.521 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a66c549-fa18-498a-93d5-ac91e746b002/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg49r8vhc" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.557 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 3a66c549-fa18-498a-93d5-ac91e746b002_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.560 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a66c549-fa18-498a-93d5-ac91e746b002/disk.config 3a66c549-fa18-498a-93d5-ac91e746b002_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.721 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a66c549-fa18-498a-93d5-ac91e746b002/disk.config 3a66c549-fa18-498a-93d5-ac91e746b002_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.722 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Deleting local config drive /var/lib/nova/instances/3a66c549-fa18-498a-93d5-ac91e746b002/disk.config because it was imported into RBD.#033[00m
Oct  2 08:30:23 np0005466031 kernel: tap12796a62-46: entered promiscuous mode
Oct  2 08:30:23 np0005466031 NetworkManager[44907]: <info>  [1759408223.7697] manager: (tap12796a62-46): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Oct  2 08:30:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:23Z|00228|binding|INFO|Claiming lport 12796a62-46f7-45b8-9915-c8ce909d615b for this chassis.
Oct  2 08:30:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:23Z|00229|binding|INFO|12796a62-46f7-45b8-9915-c8ce909d615b: Claiming fa:16:3e:bf:8e:19 10.100.0.10
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.795 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:8e:19 10.100.0.10'], port_security=['fa:16:3e:bf:8e:19 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '3a66c549-fa18-498a-93d5-ac91e746b002', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '546222ddef05450d9aeb91e721403b5b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0fed574f-e419-4ee1-a1ca-0cdfc77deb52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9e589ca-e402-46cd-b1b2-28eed346077b, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=12796a62-46f7-45b8-9915-c8ce909d615b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.797 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 12796a62-46f7-45b8-9915-c8ce909d615b in datapath f0f24c0d-e50a-47b1-8faa-15e38342da63 bound to our chassis#033[00m
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.798 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0f24c0d-e50a-47b1-8faa-15e38342da63#033[00m
Oct  2 08:30:23 np0005466031 systemd-udevd[266377]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.811 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee9e74a-5beb-45b7-b75f-f2b055529dfa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.812 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf0f24c0d-e1 in ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:30:23 np0005466031 systemd-machined[192227]: New machine qemu-27-instance-00000047.
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.814 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf0f24c0d-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.814 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ff756703-8155-467c-84dc-4951c3db2846]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.815 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9ade6421-b31b-4114-9738-3226c297b55e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:23 np0005466031 NetworkManager[44907]: <info>  [1759408223.8184] device (tap12796a62-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:30:23 np0005466031 NetworkManager[44907]: <info>  [1759408223.8193] device (tap12796a62-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.825 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[8fad0ce2-fffd-4c9a-8bf8-d66df48e1442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:23 np0005466031 systemd[1]: Started Virtual Machine qemu-27-instance-00000047.
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.839 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b5adf2-c769-4fd0-b17a-739b2307a764]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:23Z|00230|binding|INFO|Setting lport 12796a62-46f7-45b8-9915-c8ce909d615b ovn-installed in OVS
Oct  2 08:30:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:23Z|00231|binding|INFO|Setting lport 12796a62-46f7-45b8-9915-c8ce909d615b up in Southbound
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.870 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[57bc1e56-a56e-4272-9fdf-5b046ad74b69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:23 np0005466031 systemd-udevd[266384]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:30:23 np0005466031 NetworkManager[44907]: <info>  [1759408223.8764] manager: (tapf0f24c0d-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/120)
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.876 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7df4641c-00a0-4ffa-984e-89e627155099]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:23.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.906 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[216441df-f2d6-4843-b123-8f75c4125ca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.909 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[8215c576-7587-4713-abf2-18739d76b5bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:23 np0005466031 NetworkManager[44907]: <info>  [1759408223.9356] device (tapf0f24c0d-e0): carrier: link connected
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.940 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d228d892-9729-40e8-b40a-d3d39f732eda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.956 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[97d2ae31-6c3c-4ff9-86a3-e9dfa7ba5031]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0f24c0d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:5e:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607952, 'reachable_time': 26395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266413, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.968 2 DEBUG nova.network.neutron [req-58cf7055-7759-4fec-9213-5ae50f565963 req-68760a7d-ad2a-416c-b63c-e2e18b47c879 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Updated VIF entry in instance network info cache for port 78202441-8869-4d28-b719-5732a733fe90. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.969 2 DEBUG nova.network.neutron [req-58cf7055-7759-4fec-9213-5ae50f565963 req-68760a7d-ad2a-416c-b63c-e2e18b47c879 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Updating instance_info_cache with network_info: [{"id": "78202441-8869-4d28-b719-5732a733fe90", "address": "fa:16:3e:8f:ee:17", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78202441-88", "ovs_interfaceid": "78202441-8869-4d28-b719-5732a733fe90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.976 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f2c149-01fe-4f8f-954e-0c14dc5e933b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe85:5ed7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607952, 'tstamp': 607952}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266414, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:23 np0005466031 nova_compute[235803]: 2025-10-02 12:30:23.992 2 DEBUG oslo_concurrency.lockutils [req-58cf7055-7759-4fec-9213-5ae50f565963 req-68760a7d-ad2a-416c-b63c-e2e18b47c879 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-660bced2-3ec3-45ab-bb6f-155853e3d658" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:23.994 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fb92d3a1-a3a2-4c41-b329-46a54b9d37ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0f24c0d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:5e:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607952, 'reachable_time': 26395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266415, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:24.022 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e32a4a90-8dc3-46f5-9f3e-4ffee6c79618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:24.075 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf41050-c18f-4942-aaee-979c5f06b146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:24.077 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0f24c0d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:24.077 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:24.077 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0f24c0d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:24 np0005466031 NetworkManager[44907]: <info>  [1759408224.0799] manager: (tapf0f24c0d-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Oct  2 08:30:24 np0005466031 kernel: tapf0f24c0d-e0: entered promiscuous mode
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:24.082 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0f24c0d-e0, col_values=(('external_ids', {'iface-id': 'aa017360-5737-4ad9-a150-2ba1122b7ea5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:24Z|00232|binding|INFO|Releasing lport aa017360-5737-4ad9-a150-2ba1122b7ea5 from this chassis (sb_readonly=0)
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:24.087 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f0f24c0d-e50a-47b1-8faa-15e38342da63.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f0f24c0d-e50a-47b1-8faa-15e38342da63.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:24.088 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[51bb146e-ea0f-4e18-884e-e0577f93819f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:24.089 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-f0f24c0d-e50a-47b1-8faa-15e38342da63
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/f0f24c0d-e50a-47b1-8faa-15e38342da63.pid.haproxy
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID f0f24c0d-e50a-47b1-8faa-15e38342da63
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:30:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:24.091 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'env', 'PROCESS_TAG=haproxy-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f0f24c0d-e50a-47b1-8faa-15e38342da63.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.136 2 DEBUG nova.compute.manager [req-c4bcbd65-89cc-4db6-8c09-f44293beee66 req-cddddc55-4ace-4170-88e3-1eef1151e101 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Received event network-vif-plugged-12796a62-46f7-45b8-9915-c8ce909d615b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.137 2 DEBUG oslo_concurrency.lockutils [req-c4bcbd65-89cc-4db6-8c09-f44293beee66 req-cddddc55-4ace-4170-88e3-1eef1151e101 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.137 2 DEBUG oslo_concurrency.lockutils [req-c4bcbd65-89cc-4db6-8c09-f44293beee66 req-cddddc55-4ace-4170-88e3-1eef1151e101 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.137 2 DEBUG oslo_concurrency.lockutils [req-c4bcbd65-89cc-4db6-8c09-f44293beee66 req-cddddc55-4ace-4170-88e3-1eef1151e101 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.137 2 DEBUG nova.compute.manager [req-c4bcbd65-89cc-4db6-8c09-f44293beee66 req-cddddc55-4ace-4170-88e3-1eef1151e101 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Processing event network-vif-plugged-12796a62-46f7-45b8-9915-c8ce909d615b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:30:24 np0005466031 podman[266489]: 2025-10-02 12:30:24.452469842 +0000 UTC m=+0.049304682 container create 12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 08:30:24 np0005466031 systemd[1]: Started libpod-conmon-12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82.scope.
Oct  2 08:30:24 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:30:24 np0005466031 podman[266489]: 2025-10-02 12:30:24.423902609 +0000 UTC m=+0.020737469 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:30:24 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc7bda3cc79d380a51d6caf3ffe81f7635d004d0f7dd9b778362e4b000a19f12/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:30:24 np0005466031 podman[266489]: 2025-10-02 12:30:24.532835488 +0000 UTC m=+0.129670348 container init 12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:30:24 np0005466031 podman[266489]: 2025-10-02 12:30:24.537716819 +0000 UTC m=+0.134551659 container start 12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 08:30:24 np0005466031 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[266504]: [NOTICE]   (266508) : New worker (266510) forked
Oct  2 08:30:24 np0005466031 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[266504]: [NOTICE]   (266508) : Loading success.
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.665 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.666 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408224.6647239, 3a66c549-fa18-498a-93d5-ac91e746b002 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.666 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] VM Started (Lifecycle Event)#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.668 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.670 2 INFO nova.virt.libvirt.driver [-] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Instance spawned successfully.#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.671 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.691 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.695 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.696 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.696 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.697 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.697 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.697 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.701 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.754 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.754 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408224.6657906, 3a66c549-fa18-498a-93d5-ac91e746b002 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.754 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.785 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.787 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408224.6678832, 3a66c549-fa18-498a-93d5-ac91e746b002 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.788 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.793 2 INFO nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Took 6.92 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.793 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.806 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.808 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.851 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:30:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.871 2 INFO nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Took 8.06 seconds to build instance.#033[00m
Oct  2 08:30:24 np0005466031 nova_compute[235803]: 2025-10-02 12:30:24.887 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:24.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:25.836 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:25.837 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:25.837 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:25.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:26 np0005466031 nova_compute[235803]: 2025-10-02 12:30:26.216 2 DEBUG nova.compute.manager [req-2ad855f2-39db-4388-b5a9-125e6512f2c0 req-17fe15c7-5e51-4966-bdcf-6c95b496a171 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Received event network-vif-plugged-12796a62-46f7-45b8-9915-c8ce909d615b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:26 np0005466031 nova_compute[235803]: 2025-10-02 12:30:26.216 2 DEBUG oslo_concurrency.lockutils [req-2ad855f2-39db-4388-b5a9-125e6512f2c0 req-17fe15c7-5e51-4966-bdcf-6c95b496a171 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:26 np0005466031 nova_compute[235803]: 2025-10-02 12:30:26.217 2 DEBUG oslo_concurrency.lockutils [req-2ad855f2-39db-4388-b5a9-125e6512f2c0 req-17fe15c7-5e51-4966-bdcf-6c95b496a171 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:26 np0005466031 nova_compute[235803]: 2025-10-02 12:30:26.217 2 DEBUG oslo_concurrency.lockutils [req-2ad855f2-39db-4388-b5a9-125e6512f2c0 req-17fe15c7-5e51-4966-bdcf-6c95b496a171 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:26 np0005466031 nova_compute[235803]: 2025-10-02 12:30:26.217 2 DEBUG nova.compute.manager [req-2ad855f2-39db-4388-b5a9-125e6512f2c0 req-17fe15c7-5e51-4966-bdcf-6c95b496a171 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] No waiting events found dispatching network-vif-plugged-12796a62-46f7-45b8-9915-c8ce909d615b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:26 np0005466031 nova_compute[235803]: 2025-10-02 12:30:26.217 2 WARNING nova.compute.manager [req-2ad855f2-39db-4388-b5a9-125e6512f2c0 req-17fe15c7-5e51-4966-bdcf-6c95b496a171 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Received unexpected event network-vif-plugged-12796a62-46f7-45b8-9915-c8ce909d615b for instance with vm_state active and task_state None.#033[00m
Oct  2 08:30:26 np0005466031 nova_compute[235803]: 2025-10-02 12:30:26.492 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 660bced2-3ec3-45ab-bb6f-155853e3d658_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.918s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:26 np0005466031 nova_compute[235803]: 2025-10-02 12:30:26.562 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] resizing rbd image 660bced2-3ec3-45ab-bb6f-155853e3d658_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:30:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:26.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:27.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:27 np0005466031 nova_compute[235803]: 2025-10-02 12:30:27.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:28 np0005466031 nova_compute[235803]: 2025-10-02 12:30:28.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:28.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.607 2 DEBUG nova.objects.instance [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lazy-loading 'migration_context' on Instance uuid 660bced2-3ec3-45ab-bb6f-155853e3d658 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.627 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.628 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Ensure instance console log exists: /var/lib/nova/instances/660bced2-3ec3-45ab-bb6f-155853e3d658/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.628 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.628 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.628 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.630 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Start _get_guest_xml network_info=[{"id": "78202441-8869-4d28-b719-5732a733fe90", "address": "fa:16:3e:8f:ee:17", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78202441-88", "ovs_interfaceid": "78202441-8869-4d28-b719-5732a733fe90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.634 2 WARNING nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.638 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.639 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.641 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.641 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.642 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.642 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.642 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.643 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.643 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.643 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.643 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.643 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.644 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.644 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.644 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.644 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:30:29 np0005466031 nova_compute[235803]: 2025-10-02 12:30:29.647 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:29.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1719555084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.062 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.084 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 660bced2-3ec3-45ab-bb6f-155853e3d658_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.087 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2808893995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.499 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.501 2 DEBUG nova.virt.libvirt.vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-978789831',display_name='tempest-ListServersNegativeTestJSON-server-978789831-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-978789831-3',id=73,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='546222ddef05450d9aeb91e721403b5b',ramdisk_id='',reservation_id='r-dhkt2a9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-400261674',owner_user_name='tempest-
ListServersNegativeTestJSON-400261674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:18Z,user_data=None,user_id='045de4bc70204ae8b6975513839061d8',uuid=660bced2-3ec3-45ab-bb6f-155853e3d658,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78202441-8869-4d28-b719-5732a733fe90", "address": "fa:16:3e:8f:ee:17", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78202441-88", "ovs_interfaceid": "78202441-8869-4d28-b719-5732a733fe90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.501 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converting VIF {"id": "78202441-8869-4d28-b719-5732a733fe90", "address": "fa:16:3e:8f:ee:17", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78202441-88", "ovs_interfaceid": "78202441-8869-4d28-b719-5732a733fe90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.502 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:ee:17,bridge_name='br-int',has_traffic_filtering=True,id=78202441-8869-4d28-b719-5732a733fe90,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78202441-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.503 2 DEBUG nova.objects.instance [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lazy-loading 'pci_devices' on Instance uuid 660bced2-3ec3-45ab-bb6f-155853e3d658 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.525 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  <uuid>660bced2-3ec3-45ab-bb6f-155853e3d658</uuid>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  <name>instance-00000049</name>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <nova:name>tempest-ListServersNegativeTestJSON-server-978789831-3</nova:name>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:30:29</nova:creationTime>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <nova:user uuid="045de4bc70204ae8b6975513839061d8">tempest-ListServersNegativeTestJSON-400261674-project-member</nova:user>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <nova:project uuid="546222ddef05450d9aeb91e721403b5b">tempest-ListServersNegativeTestJSON-400261674</nova:project>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <nova:port uuid="78202441-8869-4d28-b719-5732a733fe90">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <entry name="serial">660bced2-3ec3-45ab-bb6f-155853e3d658</entry>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <entry name="uuid">660bced2-3ec3-45ab-bb6f-155853e3d658</entry>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/660bced2-3ec3-45ab-bb6f-155853e3d658_disk">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/660bced2-3ec3-45ab-bb6f-155853e3d658_disk.config">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:8f:ee:17"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <target dev="tap78202441-88"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/660bced2-3ec3-45ab-bb6f-155853e3d658/console.log" append="off"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:30:30 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:30:30 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:30:30 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:30:30 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.526 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Preparing to wait for external event network-vif-plugged-78202441-8869-4d28-b719-5732a733fe90 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.526 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.526 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.526 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.527 2 DEBUG nova.virt.libvirt.vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-978789831',display_name='tempest-ListServersNegativeTestJSON-server-978789831-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-978789831-3',id=73,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='546222ddef05450d9aeb91e721403b5b',ramdisk_id='',reservation_id='r-dhkt2a9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-400261674',owner_user_name
='tempest-ListServersNegativeTestJSON-400261674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:18Z,user_data=None,user_id='045de4bc70204ae8b6975513839061d8',uuid=660bced2-3ec3-45ab-bb6f-155853e3d658,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78202441-8869-4d28-b719-5732a733fe90", "address": "fa:16:3e:8f:ee:17", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78202441-88", "ovs_interfaceid": "78202441-8869-4d28-b719-5732a733fe90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.527 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converting VIF {"id": "78202441-8869-4d28-b719-5732a733fe90", "address": "fa:16:3e:8f:ee:17", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78202441-88", "ovs_interfaceid": "78202441-8869-4d28-b719-5732a733fe90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.528 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:ee:17,bridge_name='br-int',has_traffic_filtering=True,id=78202441-8869-4d28-b719-5732a733fe90,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78202441-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.528 2 DEBUG os_vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:ee:17,bridge_name='br-int',has_traffic_filtering=True,id=78202441-8869-4d28-b719-5732a733fe90,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78202441-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.529 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.529 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.531 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78202441-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.532 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78202441-88, col_values=(('external_ids', {'iface-id': '78202441-8869-4d28-b719-5732a733fe90', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:ee:17', 'vm-uuid': '660bced2-3ec3-45ab-bb6f-155853e3d658'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:30 np0005466031 NetworkManager[44907]: <info>  [1759408230.5344] manager: (tap78202441-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.541 2 INFO os_vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:ee:17,bridge_name='br-int',has_traffic_filtering=True,id=78202441-8869-4d28-b719-5732a733fe90,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78202441-88')#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.587 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.587 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.588 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] No VIF found with MAC fa:16:3e:8f:ee:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.588 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Using config drive#033[00m
Oct  2 08:30:30 np0005466031 nova_compute[235803]: 2025-10-02 12:30:30.609 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 660bced2-3ec3-45ab-bb6f-155853e3d658_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:30.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:31 np0005466031 nova_compute[235803]: 2025-10-02 12:30:31.183 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Creating config drive at /var/lib/nova/instances/660bced2-3ec3-45ab-bb6f-155853e3d658/disk.config#033[00m
Oct  2 08:30:31 np0005466031 nova_compute[235803]: 2025-10-02 12:30:31.187 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/660bced2-3ec3-45ab-bb6f-155853e3d658/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5olzt1ut execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:31 np0005466031 nova_compute[235803]: 2025-10-02 12:30:31.322 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/660bced2-3ec3-45ab-bb6f-155853e3d658/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5olzt1ut" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:31 np0005466031 nova_compute[235803]: 2025-10-02 12:30:31.347 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 660bced2-3ec3-45ab-bb6f-155853e3d658_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:31 np0005466031 nova_compute[235803]: 2025-10-02 12:30:31.350 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/660bced2-3ec3-45ab-bb6f-155853e3d658/disk.config 660bced2-3ec3-45ab-bb6f-155853e3d658_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:31.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:32.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:33 np0005466031 nova_compute[235803]: 2025-10-02 12:30:33.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:33.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:34 np0005466031 nova_compute[235803]: 2025-10-02 12:30:34.624 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/660bced2-3ec3-45ab-bb6f-155853e3d658/disk.config 660bced2-3ec3-45ab-bb6f-155853e3d658_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:34 np0005466031 nova_compute[235803]: 2025-10-02 12:30:34.625 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Deleting local config drive /var/lib/nova/instances/660bced2-3ec3-45ab-bb6f-155853e3d658/disk.config because it was imported into RBD.#033[00m
Oct  2 08:30:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:30:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:34 np0005466031 kernel: tap78202441-88: entered promiscuous mode
Oct  2 08:30:34 np0005466031 NetworkManager[44907]: <info>  [1759408234.6767] manager: (tap78202441-88): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Oct  2 08:30:34 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:34Z|00233|binding|INFO|Claiming lport 78202441-8869-4d28-b719-5732a733fe90 for this chassis.
Oct  2 08:30:34 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:34Z|00234|binding|INFO|78202441-8869-4d28-b719-5732a733fe90: Claiming fa:16:3e:8f:ee:17 10.100.0.14
Oct  2 08:30:34 np0005466031 nova_compute[235803]: 2025-10-02 12:30:34.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.726 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:ee:17 10.100.0.14'], port_security=['fa:16:3e:8f:ee:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '660bced2-3ec3-45ab-bb6f-155853e3d658', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '546222ddef05450d9aeb91e721403b5b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0fed574f-e419-4ee1-a1ca-0cdfc77deb52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9e589ca-e402-46cd-b1b2-28eed346077b, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=78202441-8869-4d28-b719-5732a733fe90) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.727 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 78202441-8869-4d28-b719-5732a733fe90 in datapath f0f24c0d-e50a-47b1-8faa-15e38342da63 bound to our chassis#033[00m
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.729 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0f24c0d-e50a-47b1-8faa-15e38342da63#033[00m
Oct  2 08:30:34 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:34Z|00235|binding|INFO|Setting lport 78202441-8869-4d28-b719-5732a733fe90 ovn-installed in OVS
Oct  2 08:30:34 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:34Z|00236|binding|INFO|Setting lport 78202441-8869-4d28-b719-5732a733fe90 up in Southbound
Oct  2 08:30:34 np0005466031 systemd-machined[192227]: New machine qemu-28-instance-00000049.
Oct  2 08:30:34 np0005466031 nova_compute[235803]: 2025-10-02 12:30:34.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.744 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5c150708-213c-4361-8d6d-a298d29c11c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:34 np0005466031 systemd[1]: Started Virtual Machine qemu-28-instance-00000049.
Oct  2 08:30:34 np0005466031 systemd-udevd[266734]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:30:34 np0005466031 NetworkManager[44907]: <info>  [1759408234.7739] device (tap78202441-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.773 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[01aae5c0-d392-4f75-ba8f-b42de1e8c6e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:34 np0005466031 NetworkManager[44907]: <info>  [1759408234.7749] device (tap78202441-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.776 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7424bf-a1f4-4bd1-8d4a-3b0c61e5e571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.808 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb19e6e-0924-4422-a2cd-88e09ff2ed8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.826 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[770dd6f2-fe5b-41ff-b47d-4bfb5b757ace]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0f24c0d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:5e:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607952, 'reachable_time': 26395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266745, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.845 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5dde7322-6a1d-4ce7-b736-8368e57e319c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf0f24c0d-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607963, 'tstamp': 607963}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266747, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf0f24c0d-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607966, 'tstamp': 607966}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266747, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.847 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0f24c0d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:34 np0005466031 nova_compute[235803]: 2025-10-02 12:30:34.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.850 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0f24c0d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.851 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.852 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0f24c0d-e0, col_values=(('external_ids', {'iface-id': 'aa017360-5737-4ad9-a150-2ba1122b7ea5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:34.852 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:34.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:35 np0005466031 nova_compute[235803]: 2025-10-02 12:30:35.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:35 np0005466031 nova_compute[235803]: 2025-10-02 12:30:35.858 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408235.8581114, 660bced2-3ec3-45ab-bb6f-155853e3d658 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:35 np0005466031 nova_compute[235803]: 2025-10-02 12:30:35.860 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] VM Started (Lifecycle Event)#033[00m
Oct  2 08:30:35 np0005466031 nova_compute[235803]: 2025-10-02 12:30:35.883 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:35 np0005466031 nova_compute[235803]: 2025-10-02 12:30:35.887 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408235.858268, 660bced2-3ec3-45ab-bb6f-155853e3d658 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:35 np0005466031 nova_compute[235803]: 2025-10-02 12:30:35.887 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:30:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:35.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:35 np0005466031 nova_compute[235803]: 2025-10-02 12:30:35.905 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:35 np0005466031 nova_compute[235803]: 2025-10-02 12:30:35.908 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:30:35 np0005466031 nova_compute[235803]: 2025-10-02 12:30:35.930 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:30:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.698 2 DEBUG nova.compute.manager [req-f3d70663-14f6-4139-8e95-52bf319b1c91 req-946ffc95-3a37-4b3f-955f-2b370e4dfea2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Received event network-vif-plugged-78202441-8869-4d28-b719-5732a733fe90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.699 2 DEBUG oslo_concurrency.lockutils [req-f3d70663-14f6-4139-8e95-52bf319b1c91 req-946ffc95-3a37-4b3f-955f-2b370e4dfea2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.700 2 DEBUG oslo_concurrency.lockutils [req-f3d70663-14f6-4139-8e95-52bf319b1c91 req-946ffc95-3a37-4b3f-955f-2b370e4dfea2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.700 2 DEBUG oslo_concurrency.lockutils [req-f3d70663-14f6-4139-8e95-52bf319b1c91 req-946ffc95-3a37-4b3f-955f-2b370e4dfea2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.700 2 DEBUG nova.compute.manager [req-f3d70663-14f6-4139-8e95-52bf319b1c91 req-946ffc95-3a37-4b3f-955f-2b370e4dfea2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Processing event network-vif-plugged-78202441-8869-4d28-b719-5732a733fe90 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.701 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.704 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408236.7041488, 660bced2-3ec3-45ab-bb6f-155853e3d658 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.704 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.705 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.708 2 INFO nova.virt.libvirt.driver [-] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Instance spawned successfully.#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.709 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.735 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.739 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.739 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.740 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.740 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.741 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.741 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.744 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.775 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.812 2 INFO nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Took 18.40 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.812 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.875 2 INFO nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Took 19.94 seconds to build instance.#033[00m
Oct  2 08:30:36 np0005466031 nova_compute[235803]: 2025-10-02 12:30:36.906 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:30:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:36.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:30:37 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:37Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:8e:19 10.100.0.10
Oct  2 08:30:37 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:37Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:8e:19 10.100.0.10
Oct  2 08:30:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:37.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:38 np0005466031 nova_compute[235803]: 2025-10-02 12:30:38.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:38 np0005466031 nova_compute[235803]: 2025-10-02 12:30:38.854 2 DEBUG nova.compute.manager [req-e5e0f9b7-78ac-4dc8-bc14-1dcd9f3d2a83 req-0f6de4c2-dc07-4aaf-b091-7bde5b99ce11 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Received event network-vif-plugged-78202441-8869-4d28-b719-5732a733fe90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:38 np0005466031 nova_compute[235803]: 2025-10-02 12:30:38.855 2 DEBUG oslo_concurrency.lockutils [req-e5e0f9b7-78ac-4dc8-bc14-1dcd9f3d2a83 req-0f6de4c2-dc07-4aaf-b091-7bde5b99ce11 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:38 np0005466031 nova_compute[235803]: 2025-10-02 12:30:38.855 2 DEBUG oslo_concurrency.lockutils [req-e5e0f9b7-78ac-4dc8-bc14-1dcd9f3d2a83 req-0f6de4c2-dc07-4aaf-b091-7bde5b99ce11 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:38 np0005466031 nova_compute[235803]: 2025-10-02 12:30:38.855 2 DEBUG oslo_concurrency.lockutils [req-e5e0f9b7-78ac-4dc8-bc14-1dcd9f3d2a83 req-0f6de4c2-dc07-4aaf-b091-7bde5b99ce11 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:38 np0005466031 nova_compute[235803]: 2025-10-02 12:30:38.855 2 DEBUG nova.compute.manager [req-e5e0f9b7-78ac-4dc8-bc14-1dcd9f3d2a83 req-0f6de4c2-dc07-4aaf-b091-7bde5b99ce11 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] No waiting events found dispatching network-vif-plugged-78202441-8869-4d28-b719-5732a733fe90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:38 np0005466031 nova_compute[235803]: 2025-10-02 12:30:38.856 2 WARNING nova.compute.manager [req-e5e0f9b7-78ac-4dc8-bc14-1dcd9f3d2a83 req-0f6de4c2-dc07-4aaf-b091-7bde5b99ce11 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Received unexpected event network-vif-plugged-78202441-8869-4d28-b719-5732a733fe90 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:30:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:38.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.506 2 DEBUG oslo_concurrency.lockutils [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "3a66c549-fa18-498a-93d5-ac91e746b002" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.506 2 DEBUG oslo_concurrency.lockutils [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.506 2 DEBUG oslo_concurrency.lockutils [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.506 2 DEBUG oslo_concurrency.lockutils [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.507 2 DEBUG oslo_concurrency.lockutils [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.507 2 INFO nova.compute.manager [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Terminating instance#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.508 2 DEBUG nova.compute.manager [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:30:39 np0005466031 kernel: tap12796a62-46 (unregistering): left promiscuous mode
Oct  2 08:30:39 np0005466031 NetworkManager[44907]: <info>  [1759408239.6011] device (tap12796a62-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:30:39 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:39Z|00237|binding|INFO|Releasing lport 12796a62-46f7-45b8-9915-c8ce909d615b from this chassis (sb_readonly=0)
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:39 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:39Z|00238|binding|INFO|Setting lport 12796a62-46f7-45b8-9915-c8ce909d615b down in Southbound
Oct  2 08:30:39 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:39Z|00239|binding|INFO|Removing iface tap12796a62-46 ovn-installed in OVS
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.618 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:8e:19 10.100.0.10'], port_security=['fa:16:3e:bf:8e:19 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '3a66c549-fa18-498a-93d5-ac91e746b002', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '546222ddef05450d9aeb91e721403b5b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0fed574f-e419-4ee1-a1ca-0cdfc77deb52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9e589ca-e402-46cd-b1b2-28eed346077b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=12796a62-46f7-45b8-9915-c8ce909d615b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.621 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 12796a62-46f7-45b8-9915-c8ce909d615b in datapath f0f24c0d-e50a-47b1-8faa-15e38342da63 unbound from our chassis#033[00m
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.624 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0f24c0d-e50a-47b1-8faa-15e38342da63#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.642 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7705986c-a30c-4765-9bb9-a4b4ed947ca8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:39 np0005466031 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000047.scope: Deactivated successfully.
Oct  2 08:30:39 np0005466031 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000047.scope: Consumed 12.629s CPU time.
Oct  2 08:30:39 np0005466031 systemd-machined[192227]: Machine qemu-27-instance-00000047 terminated.
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.683 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e190f3ce-4e0a-4825-acbe-cd7922cb8c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.687 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e85ab058-3a5b-497e-8b50-707d5075470f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.721 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[040cceeb-81e1-4bb9-a2a9-2972a6c91ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.742 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c5e22f-eda5-4cdd-b606-187cbd6dca81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0f24c0d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:5e:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607952, 'reachable_time': 26395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266855, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.749 2 INFO nova.virt.libvirt.driver [-] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Instance destroyed successfully.#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.750 2 DEBUG nova.objects.instance [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lazy-loading 'resources' on Instance uuid 3a66c549-fa18-498a-93d5-ac91e746b002 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.765 2 DEBUG nova.virt.libvirt.vif [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-978789831',display_name='tempest-ListServersNegativeTestJSON-server-978789831-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-978789831-1',id=71,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='546222ddef05450d9aeb91e721403b5b',ramdisk_id='',reservation_id='r-dhkt2a9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-400261674',owner_user_name='tempest-ListServersNegativeTestJSON-400261674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:24Z,user_data=None,user_id='045de4bc70204ae8b6975513839061d8',uuid=3a66c549-fa18-498a-93d5-ac91e746b002,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12796a62-46f7-45b8-9915-c8ce909d615b", "address": "fa:16:3e:bf:8e:19", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12796a62-46", "ovs_interfaceid": "12796a62-46f7-45b8-9915-c8ce909d615b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.766 2 DEBUG nova.network.os_vif_util [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converting VIF {"id": "12796a62-46f7-45b8-9915-c8ce909d615b", "address": "fa:16:3e:bf:8e:19", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12796a62-46", "ovs_interfaceid": "12796a62-46f7-45b8-9915-c8ce909d615b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.766 2 DEBUG nova.network.os_vif_util [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:8e:19,bridge_name='br-int',has_traffic_filtering=True,id=12796a62-46f7-45b8-9915-c8ce909d615b,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12796a62-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.767 2 DEBUG os_vif [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:8e:19,bridge_name='br-int',has_traffic_filtering=True,id=12796a62-46f7-45b8-9915-c8ce909d615b,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12796a62-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.769 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12796a62-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.764 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[79966dd3-a938-4447-9104-d5f9c5686695]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf0f24c0d-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607963, 'tstamp': 607963}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266862, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf0f24c0d-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607966, 'tstamp': 607966}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266862, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.772 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0f24c0d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.774 2 INFO os_vif [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:8e:19,bridge_name='br-int',has_traffic_filtering=True,id=12796a62-46f7-45b8-9915-c8ce909d615b,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12796a62-46')#033[00m
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.775 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0f24c0d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.775 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.775 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0f24c0d-e0, col_values=(('external_ids', {'iface-id': 'aa017360-5737-4ad9-a150-2ba1122b7ea5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:39.776 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:39 np0005466031 nova_compute[235803]: 2025-10-02 12:30:39.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:39.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.264 2 DEBUG nova.compute.manager [req-9ce2b4bd-2015-4584-b53d-b7df7a99fa6b req-253e639e-9cc8-4b9d-adc2-e444286fb920 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Received event network-vif-unplugged-12796a62-46f7-45b8-9915-c8ce909d615b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.264 2 DEBUG oslo_concurrency.lockutils [req-9ce2b4bd-2015-4584-b53d-b7df7a99fa6b req-253e639e-9cc8-4b9d-adc2-e444286fb920 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.265 2 DEBUG oslo_concurrency.lockutils [req-9ce2b4bd-2015-4584-b53d-b7df7a99fa6b req-253e639e-9cc8-4b9d-adc2-e444286fb920 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.265 2 DEBUG oslo_concurrency.lockutils [req-9ce2b4bd-2015-4584-b53d-b7df7a99fa6b req-253e639e-9cc8-4b9d-adc2-e444286fb920 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.265 2 DEBUG nova.compute.manager [req-9ce2b4bd-2015-4584-b53d-b7df7a99fa6b req-253e639e-9cc8-4b9d-adc2-e444286fb920 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] No waiting events found dispatching network-vif-unplugged-12796a62-46f7-45b8-9915-c8ce909d615b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.266 2 DEBUG nova.compute.manager [req-9ce2b4bd-2015-4584-b53d-b7df7a99fa6b req-253e639e-9cc8-4b9d-adc2-e444286fb920 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Received event network-vif-unplugged-12796a62-46f7-45b8-9915-c8ce909d615b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.266 2 DEBUG nova.compute.manager [req-9ce2b4bd-2015-4584-b53d-b7df7a99fa6b req-253e639e-9cc8-4b9d-adc2-e444286fb920 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Received event network-vif-plugged-12796a62-46f7-45b8-9915-c8ce909d615b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.266 2 DEBUG oslo_concurrency.lockutils [req-9ce2b4bd-2015-4584-b53d-b7df7a99fa6b req-253e639e-9cc8-4b9d-adc2-e444286fb920 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.266 2 DEBUG oslo_concurrency.lockutils [req-9ce2b4bd-2015-4584-b53d-b7df7a99fa6b req-253e639e-9cc8-4b9d-adc2-e444286fb920 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.266 2 DEBUG oslo_concurrency.lockutils [req-9ce2b4bd-2015-4584-b53d-b7df7a99fa6b req-253e639e-9cc8-4b9d-adc2-e444286fb920 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.267 2 DEBUG nova.compute.manager [req-9ce2b4bd-2015-4584-b53d-b7df7a99fa6b req-253e639e-9cc8-4b9d-adc2-e444286fb920 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] No waiting events found dispatching network-vif-plugged-12796a62-46f7-45b8-9915-c8ce909d615b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.267 2 WARNING nova.compute.manager [req-9ce2b4bd-2015-4584-b53d-b7df7a99fa6b req-253e639e-9cc8-4b9d-adc2-e444286fb920 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Received unexpected event network-vif-plugged-12796a62-46f7-45b8-9915-c8ce909d615b for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.763 2 INFO nova.virt.libvirt.driver [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Deleting instance files /var/lib/nova/instances/3a66c549-fa18-498a-93d5-ac91e746b002_del#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.764 2 INFO nova.virt.libvirt.driver [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Deletion of /var/lib/nova/instances/3a66c549-fa18-498a-93d5-ac91e746b002_del complete#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.857 2 INFO nova.compute.manager [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Took 1.35 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.857 2 DEBUG oslo.service.loopingcall [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.858 2 DEBUG nova.compute.manager [-] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:30:40 np0005466031 nova_compute[235803]: 2025-10-02 12:30:40.858 2 DEBUG nova.network.neutron [-] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:30:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:40.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:41.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:42 np0005466031 nova_compute[235803]: 2025-10-02 12:30:42.165 2 DEBUG nova.network.neutron [-] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:42 np0005466031 nova_compute[235803]: 2025-10-02 12:30:42.188 2 INFO nova.compute.manager [-] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Took 1.33 seconds to deallocate network for instance.#033[00m
Oct  2 08:30:42 np0005466031 nova_compute[235803]: 2025-10-02 12:30:42.279 2 DEBUG nova.compute.manager [req-be037cb5-4c1a-4472-9c55-b81a0ea26816 req-722b40c3-d68c-4dd4-a60a-b2c2821f5d38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Received event network-vif-deleted-12796a62-46f7-45b8-9915-c8ce909d615b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:42 np0005466031 nova_compute[235803]: 2025-10-02 12:30:42.364 2 DEBUG oslo_concurrency.lockutils [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:42 np0005466031 nova_compute[235803]: 2025-10-02 12:30:42.365 2 DEBUG oslo_concurrency.lockutils [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:42 np0005466031 nova_compute[235803]: 2025-10-02 12:30:42.676 2 DEBUG oslo_concurrency.processutils [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:42.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3250871919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:43 np0005466031 nova_compute[235803]: 2025-10-02 12:30:43.106 2 DEBUG oslo_concurrency.processutils [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:43 np0005466031 nova_compute[235803]: 2025-10-02 12:30:43.112 2 DEBUG nova.compute.provider_tree [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:43 np0005466031 nova_compute[235803]: 2025-10-02 12:30:43.145 2 DEBUG nova.scheduler.client.report [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:43 np0005466031 nova_compute[235803]: 2025-10-02 12:30:43.179 2 DEBUG oslo_concurrency.lockutils [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:43 np0005466031 nova_compute[235803]: 2025-10-02 12:30:43.236 2 INFO nova.scheduler.client.report [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Deleted allocations for instance 3a66c549-fa18-498a-93d5-ac91e746b002#033[00m
Oct  2 08:30:43 np0005466031 nova_compute[235803]: 2025-10-02 12:30:43.387 2 DEBUG oslo_concurrency.lockutils [None req-4ab50279-f647-46f1-8744-1611e671d43b 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "3a66c549-fa18-498a-93d5-ac91e746b002" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:43 np0005466031 nova_compute[235803]: 2025-10-02 12:30:43.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:43 np0005466031 nova_compute[235803]: 2025-10-02 12:30:43.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:43 np0005466031 nova_compute[235803]: 2025-10-02 12:30:43.741 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:43 np0005466031 nova_compute[235803]: 2025-10-02 12:30:43.742 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:43 np0005466031 nova_compute[235803]: 2025-10-02 12:30:43.742 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:43 np0005466031 nova_compute[235803]: 2025-10-02 12:30:43.742 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:30:43 np0005466031 nova_compute[235803]: 2025-10-02 12:30:43.742 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:43.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:44 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/744951610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:44 np0005466031 nova_compute[235803]: 2025-10-02 12:30:44.175 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:44 np0005466031 podman[266929]: 2025-10-02 12:30:44.276007165 +0000 UTC m=+0.050244139 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:30:44 np0005466031 podman[266930]: 2025-10-02 12:30:44.308386509 +0000 UTC m=+0.082604042 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:30:44 np0005466031 nova_compute[235803]: 2025-10-02 12:30:44.315 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:30:44 np0005466031 nova_compute[235803]: 2025-10-02 12:30:44.316 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:30:44 np0005466031 nova_compute[235803]: 2025-10-02 12:30:44.461 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:30:44 np0005466031 nova_compute[235803]: 2025-10-02 12:30:44.463 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4431MB free_disk=20.835590362548828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:30:44 np0005466031 nova_compute[235803]: 2025-10-02 12:30:44.463 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:44 np0005466031 nova_compute[235803]: 2025-10-02 12:30:44.464 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:44 np0005466031 nova_compute[235803]: 2025-10-02 12:30:44.682 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 660bced2-3ec3-45ab-bb6f-155853e3d658 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:30:44 np0005466031 nova_compute[235803]: 2025-10-02 12:30:44.683 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:30:44 np0005466031 nova_compute[235803]: 2025-10-02 12:30:44.684 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:30:44 np0005466031 nova_compute[235803]: 2025-10-02 12:30:44.725 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:44 np0005466031 nova_compute[235803]: 2025-10-02 12:30:44.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:44.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/97662532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:45 np0005466031 nova_compute[235803]: 2025-10-02 12:30:45.145 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:45 np0005466031 nova_compute[235803]: 2025-10-02 12:30:45.150 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:45 np0005466031 nova_compute[235803]: 2025-10-02 12:30:45.167 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:45 np0005466031 nova_compute[235803]: 2025-10-02 12:30:45.197 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:30:45 np0005466031 nova_compute[235803]: 2025-10-02 12:30:45.197 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.296351) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245296390, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2482, "num_deletes": 264, "total_data_size": 5657044, "memory_usage": 5743880, "flush_reason": "Manual Compaction"}
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245312978, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 3707008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36446, "largest_seqno": 38923, "table_properties": {"data_size": 3696471, "index_size": 6775, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22145, "raw_average_key_size": 21, "raw_value_size": 3675363, "raw_average_value_size": 3487, "num_data_blocks": 291, "num_entries": 1054, "num_filter_entries": 1054, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408059, "oldest_key_time": 1759408059, "file_creation_time": 1759408245, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 16656 microseconds, and 7308 cpu microseconds.
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.313011) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 3707008 bytes OK
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.313027) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.315007) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.315020) EVENT_LOG_v1 {"time_micros": 1759408245315016, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.315034) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 5645886, prev total WAL file size 5645886, number of live WAL files 2.
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.316276) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(3620KB)], [69(8322KB)]
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245316303, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 12228831, "oldest_snapshot_seqno": -1}
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6409 keys, 12064146 bytes, temperature: kUnknown
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245366722, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 12064146, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12018229, "index_size": 28783, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 163806, "raw_average_key_size": 25, "raw_value_size": 11900332, "raw_average_value_size": 1856, "num_data_blocks": 1163, "num_entries": 6409, "num_filter_entries": 6409, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759408245, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.366967) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12064146 bytes
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.368412) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.2 rd, 239.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(6.6) write-amplify(3.3) OK, records in: 6951, records dropped: 542 output_compression: NoCompression
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.368437) EVENT_LOG_v1 {"time_micros": 1759408245368420, "job": 42, "event": "compaction_finished", "compaction_time_micros": 50487, "compaction_time_cpu_micros": 22726, "output_level": 6, "num_output_files": 1, "total_output_size": 12064146, "num_input_records": 6951, "num_output_records": 6409, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245369126, "job": 42, "event": "table_file_deletion", "file_number": 71}
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245370322, "job": 42, "event": "table_file_deletion", "file_number": 69}
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.316168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.370374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.370379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.370381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.370383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:45.370385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:45.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:46 np0005466031 nova_compute[235803]: 2025-10-02 12:30:46.197 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:46 np0005466031 nova_compute[235803]: 2025-10-02 12:30:46.530 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:46 np0005466031 nova_compute[235803]: 2025-10-02 12:30:46.530 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:46 np0005466031 nova_compute[235803]: 2025-10-02 12:30:46.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:46 np0005466031 nova_compute[235803]: 2025-10-02 12:30:46.638 2 DEBUG nova.compute.manager [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:30:46 np0005466031 nova_compute[235803]: 2025-10-02 12:30:46.781 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:46 np0005466031 nova_compute[235803]: 2025-10-02 12:30:46.782 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:46 np0005466031 nova_compute[235803]: 2025-10-02 12:30:46.789 2 DEBUG nova.virt.hardware [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:30:46 np0005466031 nova_compute[235803]: 2025-10-02 12:30:46.789 2 INFO nova.compute.claims [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:30:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:46.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.015 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:47 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:47 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1935961188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.463 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.471 2 DEBUG nova.compute.provider_tree [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.496 2 DEBUG nova.scheduler.client.report [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.614 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.615 2 DEBUG nova.compute.manager [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.638 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.738 2 DEBUG nova.compute.manager [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.738 2 DEBUG nova.network.neutron [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.804 2 INFO nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:30:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:47.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.912 2 DEBUG nova.compute.manager [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.924 2 DEBUG oslo_concurrency.lockutils [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "660bced2-3ec3-45ab-bb6f-155853e3d658" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.924 2 DEBUG oslo_concurrency.lockutils [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.925 2 DEBUG oslo_concurrency.lockutils [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.925 2 DEBUG oslo_concurrency.lockutils [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.925 2 DEBUG oslo_concurrency.lockutils [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.926 2 INFO nova.compute.manager [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Terminating instance#033[00m
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.927 2 DEBUG nova.compute.manager [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:30:47 np0005466031 kernel: tap78202441-88 (unregistering): left promiscuous mode
Oct  2 08:30:47 np0005466031 NetworkManager[44907]: <info>  [1759408247.9806] device (tap78202441-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:30:47 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:47Z|00240|binding|INFO|Releasing lport 78202441-8869-4d28-b719-5732a733fe90 from this chassis (sb_readonly=0)
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:47 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:47Z|00241|binding|INFO|Setting lport 78202441-8869-4d28-b719-5732a733fe90 down in Southbound
Oct  2 08:30:47 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:47Z|00242|binding|INFO|Removing iface tap78202441-88 ovn-installed in OVS
Oct  2 08:30:47 np0005466031 nova_compute[235803]: 2025-10-02 12:30:47.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.001 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:ee:17 10.100.0.14'], port_security=['fa:16:3e:8f:ee:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '660bced2-3ec3-45ab-bb6f-155853e3d658', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '546222ddef05450d9aeb91e721403b5b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0fed574f-e419-4ee1-a1ca-0cdfc77deb52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9e589ca-e402-46cd-b1b2-28eed346077b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=78202441-8869-4d28-b719-5732a733fe90) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.002 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 78202441-8869-4d28-b719-5732a733fe90 in datapath f0f24c0d-e50a-47b1-8faa-15e38342da63 unbound from our chassis#033[00m
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.004 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0f24c0d-e50a-47b1-8faa-15e38342da63, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.005 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2facf6-2114-4bc3-94d8-2531682fc990]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.006 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 namespace which is not needed anymore#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:48 np0005466031 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000049.scope: Deactivated successfully.
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.039 2 DEBUG nova.policy [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '28d5425714b04888ba9e6112879fae33', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:30:48 np0005466031 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000049.scope: Consumed 12.240s CPU time.
Oct  2 08:30:48 np0005466031 systemd-machined[192227]: Machine qemu-28-instance-00000049 terminated.
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.126 2 DEBUG nova.compute.manager [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.127 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.128 2 INFO nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Creating image(s)#033[00m
Oct  2 08:30:48 np0005466031 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[266504]: [NOTICE]   (266508) : haproxy version is 2.8.14-c23fe91
Oct  2 08:30:48 np0005466031 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[266504]: [NOTICE]   (266508) : path to executable is /usr/sbin/haproxy
Oct  2 08:30:48 np0005466031 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[266504]: [WARNING]  (266508) : Exiting Master process...
Oct  2 08:30:48 np0005466031 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[266504]: [WARNING]  (266508) : Exiting Master process...
Oct  2 08:30:48 np0005466031 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[266504]: [ALERT]    (266508) : Current worker (266510) exited with code 143 (Terminated)
Oct  2 08:30:48 np0005466031 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[266504]: [WARNING]  (266508) : All workers exited. Exiting... (0)
Oct  2 08:30:48 np0005466031 systemd[1]: libpod-12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82.scope: Deactivated successfully.
Oct  2 08:30:48 np0005466031 podman[267092]: 2025-10-02 12:30:48.141083305 +0000 UTC m=+0.044953477 container died 12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.159 2 DEBUG nova.storage.rbd_utils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:48 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82-userdata-shm.mount: Deactivated successfully.
Oct  2 08:30:48 np0005466031 systemd[1]: var-lib-containers-storage-overlay-fc7bda3cc79d380a51d6caf3ffe81f7635d004d0f7dd9b778362e4b000a19f12-merged.mount: Deactivated successfully.
Oct  2 08:30:48 np0005466031 podman[267092]: 2025-10-02 12:30:48.19157176 +0000 UTC m=+0.095441932 container cleanup 12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:30:48 np0005466031 systemd[1]: libpod-conmon-12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82.scope: Deactivated successfully.
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.201 2 DEBUG nova.storage.rbd_utils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.226 2 DEBUG nova.storage.rbd_utils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.230 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:48 np0005466031 podman[267164]: 2025-10-02 12:30:48.255311077 +0000 UTC m=+0.041083865 container remove 12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.256 2 INFO nova.virt.libvirt.driver [-] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Instance destroyed successfully.#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.257 2 DEBUG nova.objects.instance [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lazy-loading 'resources' on Instance uuid 660bced2-3ec3-45ab-bb6f-155853e3d658 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.260 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7b4574-95b8-499b-b08d-f8134c8ade4c]: (4, ('Thu Oct  2 12:30:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 (12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82)\n12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82\nThu Oct  2 12:30:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 (12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82)\n12a6d6f4a6bd06d57dea4d0389823088a8acb1ed25d2659b6e2159ddd7fc7a82\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.262 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3d94834a-0f33-47f3-9e62-6b1a46208551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.263 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0f24c0d-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:48 np0005466031 kernel: tapf0f24c0d-e0: left promiscuous mode
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.287 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e79ab1fb-2c54-4c40-b879-4271f8e22223]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.287 2 DEBUG nova.virt.libvirt.vif [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-978789831',display_name='tempest-ListServersNegativeTestJSON-server-978789831-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-978789831-3',id=73,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-10-02T12:30:36Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='546222ddef05450d9aeb91e721403b5b',ramdisk_id='',reservation_id='r-dhkt2a9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-400261674',owner_user_name='tempest-ListServersNegativeTestJSON-400261674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:36Z,user_data=None,user_id='045de4bc70204ae8b6975513839061d8',uuid=660bced2-3ec3-45ab-bb6f-155853e3d658,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "78202441-8869-4d28-b719-5732a733fe90", "address": "fa:16:3e:8f:ee:17", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78202441-88", "ovs_interfaceid": "78202441-8869-4d28-b719-5732a733fe90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.287 2 DEBUG nova.network.os_vif_util [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converting VIF {"id": "78202441-8869-4d28-b719-5732a733fe90", "address": "fa:16:3e:8f:ee:17", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78202441-88", "ovs_interfaceid": "78202441-8869-4d28-b719-5732a733fe90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.289 2 DEBUG nova.network.os_vif_util [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:ee:17,bridge_name='br-int',has_traffic_filtering=True,id=78202441-8869-4d28-b719-5732a733fe90,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78202441-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.290 2 DEBUG os_vif [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:ee:17,bridge_name='br-int',has_traffic_filtering=True,id=78202441-8869-4d28-b719-5732a733fe90,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78202441-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.295 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78202441-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.300 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.302 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.303 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.304 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.315 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2d326b63-e090-4f6d-a4df-4c2658694ce5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.316 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2324dff5-e60d-47f8-aee5-bb9fe53ef51d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.330 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ee8c4b-b297-4774-80a4-9673accd410f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607945, 'reachable_time': 24928, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267221, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466031 systemd[1]: run-netns-ovnmeta\x2df0f24c0d\x2de50a\x2d47b1\x2d8faa\x2d15e38342da63.mount: Deactivated successfully.
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.335 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:30:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:48.335 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3c4f26-c10f-423a-9a4e-2a406b4e2ba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.335 2 DEBUG nova.storage.rbd_utils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.339 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.369 2 INFO os_vif [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:ee:17,bridge_name='br-int',has_traffic_filtering=True,id=78202441-8869-4d28-b719-5732a733fe90,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78202441-88')#033[00m
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.466203) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248466234, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 319, "num_deletes": 251, "total_data_size": 197299, "memory_usage": 204584, "flush_reason": "Manual Compaction"}
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248469033, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 129947, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38928, "largest_seqno": 39242, "table_properties": {"data_size": 127837, "index_size": 274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5344, "raw_average_key_size": 18, "raw_value_size": 123706, "raw_average_value_size": 434, "num_data_blocks": 11, "num_entries": 285, "num_filter_entries": 285, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408246, "oldest_key_time": 1759408246, "file_creation_time": 1759408248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 2881 microseconds, and 1057 cpu microseconds.
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.469083) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 129947 bytes OK
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.469101) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470169) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470187) EVENT_LOG_v1 {"time_micros": 1759408248470180, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470204) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 195007, prev total WAL file size 195007, number of live WAL files 2.
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470625) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(126KB)], [72(11MB)]
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248470677, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 12194093, "oldest_snapshot_seqno": -1}
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6182 keys, 10187714 bytes, temperature: kUnknown
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248530984, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 10187714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10145149, "index_size": 26007, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 159798, "raw_average_key_size": 25, "raw_value_size": 10032915, "raw_average_value_size": 1622, "num_data_blocks": 1038, "num_entries": 6182, "num_filter_entries": 6182, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759408248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.531290) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 10187714 bytes
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.536073) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.9 rd, 168.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(172.2) write-amplify(78.4) OK, records in: 6694, records dropped: 512 output_compression: NoCompression
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.536091) EVENT_LOG_v1 {"time_micros": 1759408248536082, "job": 44, "event": "compaction_finished", "compaction_time_micros": 60403, "compaction_time_cpu_micros": 20674, "output_level": 6, "num_output_files": 1, "total_output_size": 10187714, "num_input_records": 6694, "num_output_records": 6182, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248536243, "job": 44, "event": "table_file_deletion", "file_number": 74}
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248538045, "job": 44, "event": "table_file_deletion", "file_number": 72}
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.538128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.538133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.538135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.538137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:30:48.538139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.543 2 DEBUG nova.compute.manager [req-bd80b217-ffe2-44f0-a6fe-d3e108787257 req-59cbbefc-71e4-4d52-a320-bcc13666966f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Received event network-vif-unplugged-78202441-8869-4d28-b719-5732a733fe90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.543 2 DEBUG oslo_concurrency.lockutils [req-bd80b217-ffe2-44f0-a6fe-d3e108787257 req-59cbbefc-71e4-4d52-a320-bcc13666966f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.543 2 DEBUG oslo_concurrency.lockutils [req-bd80b217-ffe2-44f0-a6fe-d3e108787257 req-59cbbefc-71e4-4d52-a320-bcc13666966f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.544 2 DEBUG oslo_concurrency.lockutils [req-bd80b217-ffe2-44f0-a6fe-d3e108787257 req-59cbbefc-71e4-4d52-a320-bcc13666966f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.544 2 DEBUG nova.compute.manager [req-bd80b217-ffe2-44f0-a6fe-d3e108787257 req-59cbbefc-71e4-4d52-a320-bcc13666966f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] No waiting events found dispatching network-vif-unplugged-78202441-8869-4d28-b719-5732a733fe90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.544 2 DEBUG nova.compute.manager [req-bd80b217-ffe2-44f0-a6fe-d3e108787257 req-59cbbefc-71e4-4d52-a320-bcc13666966f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Received event network-vif-unplugged-78202441-8869-4d28-b719-5732a733fe90 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.709 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.709 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.709 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.710 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.728 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.817 2 DEBUG nova.storage.rbd_utils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] resizing rbd image b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:30:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:48.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:48 np0005466031 nova_compute[235803]: 2025-10-02 12:30:48.984 2 DEBUG nova.network.neutron [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Successfully created port: baa241e6-fa7d-4fea-9e14-0af61693406b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:30:49 np0005466031 nova_compute[235803]: 2025-10-02 12:30:49.317 2 DEBUG nova.objects.instance [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'migration_context' on Instance uuid b20c27bc-0af3-4e54-a5ab-51d9d5afce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:49 np0005466031 nova_compute[235803]: 2025-10-02 12:30:49.339 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:30:49 np0005466031 nova_compute[235803]: 2025-10-02 12:30:49.339 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Ensure instance console log exists: /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:30:49 np0005466031 nova_compute[235803]: 2025-10-02 12:30:49.340 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:49 np0005466031 nova_compute[235803]: 2025-10-02 12:30:49.340 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:49 np0005466031 nova_compute[235803]: 2025-10-02 12:30:49.340 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:49 np0005466031 nova_compute[235803]: 2025-10-02 12:30:49.738 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:49 np0005466031 nova_compute[235803]: 2025-10-02 12:30:49.788 2 INFO nova.virt.libvirt.driver [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Deleting instance files /var/lib/nova/instances/660bced2-3ec3-45ab-bb6f-155853e3d658_del#033[00m
Oct  2 08:30:49 np0005466031 nova_compute[235803]: 2025-10-02 12:30:49.789 2 INFO nova.virt.libvirt.driver [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Deletion of /var/lib/nova/instances/660bced2-3ec3-45ab-bb6f-155853e3d658_del complete#033[00m
Oct  2 08:30:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:49.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:49 np0005466031 nova_compute[235803]: 2025-10-02 12:30:49.937 2 INFO nova.compute.manager [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Took 2.01 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:30:49 np0005466031 nova_compute[235803]: 2025-10-02 12:30:49.938 2 DEBUG oslo.service.loopingcall [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:30:49 np0005466031 nova_compute[235803]: 2025-10-02 12:30:49.938 2 DEBUG nova.compute.manager [-] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:30:49 np0005466031 nova_compute[235803]: 2025-10-02 12:30:49.938 2 DEBUG nova.network.neutron [-] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:30:50 np0005466031 nova_compute[235803]: 2025-10-02 12:30:50.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:50 np0005466031 nova_compute[235803]: 2025-10-02 12:30:50.871 2 DEBUG nova.network.neutron [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Successfully updated port: baa241e6-fa7d-4fea-9e14-0af61693406b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:30:50 np0005466031 nova_compute[235803]: 2025-10-02 12:30:50.903 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:50 np0005466031 nova_compute[235803]: 2025-10-02 12:30:50.904 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquired lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:50 np0005466031 nova_compute[235803]: 2025-10-02 12:30:50.904 2 DEBUG nova.network.neutron [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:30:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:50.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.003 2 DEBUG nova.compute.manager [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Received event network-vif-plugged-78202441-8869-4d28-b719-5732a733fe90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.003 2 DEBUG oslo_concurrency.lockutils [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.004 2 DEBUG oslo_concurrency.lockutils [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.004 2 DEBUG oslo_concurrency.lockutils [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.004 2 DEBUG nova.compute.manager [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] No waiting events found dispatching network-vif-plugged-78202441-8869-4d28-b719-5732a733fe90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.004 2 WARNING nova.compute.manager [req-fe561889-6ce2-4a1a-bc26-c031a75731fd req-cc15d3f1-9b2b-4323-989a-9557c4ebc132 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Received unexpected event network-vif-plugged-78202441-8869-4d28-b719-5732a733fe90 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.170 2 DEBUG nova.compute.manager [req-8d81855e-0694-404d-bc28-ebcf6fa2c575 req-0672e80b-b945-4ad6-9fe3-aff67fab6c57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-changed-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.171 2 DEBUG nova.compute.manager [req-8d81855e-0694-404d-bc28-ebcf6fa2c575 req-0672e80b-b945-4ad6-9fe3-aff67fab6c57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Refreshing instance network info cache due to event network-changed-baa241e6-fa7d-4fea-9e14-0af61693406b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.171 2 DEBUG oslo_concurrency.lockutils [req-8d81855e-0694-404d-bc28-ebcf6fa2c575 req-0672e80b-b945-4ad6-9fe3-aff67fab6c57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.226 2 DEBUG nova.network.neutron [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:30:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:51.285 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:51.286 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.324 2 DEBUG nova.network.neutron [-] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.358 2 INFO nova.compute.manager [-] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Took 1.42 seconds to deallocate network for instance.#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.418 2 DEBUG oslo_concurrency.lockutils [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.419 2 DEBUG oslo_concurrency.lockutils [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.491 2 DEBUG oslo_concurrency.processutils [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4223918663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:51.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.919 2 DEBUG oslo_concurrency.processutils [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.924 2 DEBUG nova.compute.provider_tree [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.945 2 DEBUG nova.scheduler.client.report [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:51 np0005466031 nova_compute[235803]: 2025-10-02 12:30:51.978 2 DEBUG oslo_concurrency.lockutils [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.036 2 INFO nova.scheduler.client.report [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Deleted allocations for instance 660bced2-3ec3-45ab-bb6f-155853e3d658#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.118 2 DEBUG oslo_concurrency.lockutils [None req-b4138979-582e-4736-9c94-b8e8470facac 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "660bced2-3ec3-45ab-bb6f-155853e3d658" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.234 2 DEBUG nova.network.neutron [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updating instance_info_cache with network_info: [{"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.253 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Releasing lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.253 2 DEBUG nova.compute.manager [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Instance network_info: |[{"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.254 2 DEBUG oslo_concurrency.lockutils [req-8d81855e-0694-404d-bc28-ebcf6fa2c575 req-0672e80b-b945-4ad6-9fe3-aff67fab6c57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.254 2 DEBUG nova.network.neutron [req-8d81855e-0694-404d-bc28-ebcf6fa2c575 req-0672e80b-b945-4ad6-9fe3-aff67fab6c57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Refreshing network info cache for port baa241e6-fa7d-4fea-9e14-0af61693406b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.257 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Start _get_guest_xml network_info=[{"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.261 2 WARNING nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.266 2 DEBUG nova.virt.libvirt.host [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.266 2 DEBUG nova.virt.libvirt.host [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.270 2 DEBUG nova.virt.libvirt.host [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.271 2 DEBUG nova.virt.libvirt.host [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.272 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.272 2 DEBUG nova.virt.hardware [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.273 2 DEBUG nova.virt.hardware [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.273 2 DEBUG nova.virt.hardware [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.273 2 DEBUG nova.virt.hardware [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.274 2 DEBUG nova.virt.hardware [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.274 2 DEBUG nova.virt.hardware [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.274 2 DEBUG nova.virt.hardware [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.275 2 DEBUG nova.virt.hardware [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.275 2 DEBUG nova.virt.hardware [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.275 2 DEBUG nova.virt.hardware [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.275 2 DEBUG nova.virt.hardware [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.278 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:30:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3813447493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.700 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.729 2 DEBUG nova.storage.rbd_utils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:52 np0005466031 nova_compute[235803]: 2025-10-02 12:30:52.733 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:52.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.121 2 DEBUG nova.compute.manager [req-c26f1d6c-a1d1-4f0c-9783-a634d57e498e req-250a9236-7cc5-442a-9c19-be33ed74cd12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Received event network-vif-deleted-78202441-8869-4d28-b719-5732a733fe90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:53 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2493142074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.156 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.158 2 DEBUG nova.virt.libvirt.vif [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1329955703',display_name='tempest-ServerDiskConfigTestJSON-server-1329955703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1329955703',id=75,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-9ne0uubk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:47Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b20c27bc-0af3-4e54-a5ab-51d9d5afce82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.158 2 DEBUG nova.network.os_vif_util [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.159 2 DEBUG nova.network.os_vif_util [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.160 2 DEBUG nova.objects.instance [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_devices' on Instance uuid b20c27bc-0af3-4e54-a5ab-51d9d5afce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.178 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  <uuid>b20c27bc-0af3-4e54-a5ab-51d9d5afce82</uuid>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  <name>instance-0000004b</name>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1329955703</nova:name>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:30:52</nova:creationTime>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <nova:user uuid="28d5425714b04888ba9e6112879fae33">tempest-ServerDiskConfigTestJSON-1782236021-project-member</nova:user>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <nova:project uuid="6b5045a3aa3e42e6b66e2ec8c6bb5810">tempest-ServerDiskConfigTestJSON-1782236021</nova:project>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <nova:port uuid="baa241e6-fa7d-4fea-9e14-0af61693406b">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <entry name="serial">b20c27bc-0af3-4e54-a5ab-51d9d5afce82</entry>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <entry name="uuid">b20c27bc-0af3-4e54-a5ab-51d9d5afce82</entry>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk.config">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:f4:93:64"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <target dev="tapbaa241e6-fa"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/console.log" append="off"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:30:53 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:30:53 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:30:53 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:30:53 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.179 2 DEBUG nova.compute.manager [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Preparing to wait for external event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.179 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.180 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.180 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.180 2 DEBUG nova.virt.libvirt.vif [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1329955703',display_name='tempest-ServerDiskConfigTestJSON-server-1329955703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1329955703',id=75,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-9ne0uubk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-Se
rverDiskConfigTestJSON-1782236021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:47Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b20c27bc-0af3-4e54-a5ab-51d9d5afce82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.181 2 DEBUG nova.network.os_vif_util [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.181 2 DEBUG nova.network.os_vif_util [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.181 2 DEBUG os_vif [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.182 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.183 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbaa241e6-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbaa241e6-fa, col_values=(('external_ids', {'iface-id': 'baa241e6-fa7d-4fea-9e14-0af61693406b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:93:64', 'vm-uuid': 'b20c27bc-0af3-4e54-a5ab-51d9d5afce82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:53 np0005466031 NetworkManager[44907]: <info>  [1759408253.1875] manager: (tapbaa241e6-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.191 2 INFO os_vif [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa')#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.238 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.238 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.238 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No VIF found with MAC fa:16:3e:f4:93:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.239 2 INFO nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Using config drive#033[00m
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.269 2 DEBUG nova.storage.rbd_utils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:53 np0005466031 podman[267424]: 2025-10-02 12:30:53.280939475 +0000 UTC m=+0.054853452 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:30:53 np0005466031 podman[267425]: 2025-10-02 12:30:53.309590201 +0000 UTC m=+0.083172769 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:30:53 np0005466031 nova_compute[235803]: 2025-10-02 12:30:53.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:53.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:30:54 np0005466031 nova_compute[235803]: 2025-10-02 12:30:54.476 2 INFO nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Creating config drive at /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/disk.config#033[00m
Oct  2 08:30:54 np0005466031 nova_compute[235803]: 2025-10-02 12:30:54.481 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_72jssy6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:54 np0005466031 nova_compute[235803]: 2025-10-02 12:30:54.613 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_72jssy6" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:54 np0005466031 nova_compute[235803]: 2025-10-02 12:30:54.644 2 DEBUG nova.storage.rbd_utils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:54 np0005466031 nova_compute[235803]: 2025-10-02 12:30:54.649 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/disk.config b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:54 np0005466031 nova_compute[235803]: 2025-10-02 12:30:54.747 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408239.7470543, 3a66c549-fa18-498a-93d5-ac91e746b002 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:54 np0005466031 nova_compute[235803]: 2025-10-02 12:30:54.748 2 INFO nova.compute.manager [-] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:30:54 np0005466031 nova_compute[235803]: 2025-10-02 12:30:54.774 2 DEBUG nova.compute.manager [None req-44bd0494-9062-474c-8659-8e5bd57f598c - - - - - -] [instance: 3a66c549-fa18-498a-93d5-ac91e746b002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:30:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:54.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.180 2 DEBUG oslo_concurrency.processutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/disk.config b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.180 2 INFO nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Deleting local config drive /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/disk.config because it was imported into RBD.#033[00m
Oct  2 08:30:55 np0005466031 kernel: tapbaa241e6-fa: entered promiscuous mode
Oct  2 08:30:55 np0005466031 NetworkManager[44907]: <info>  [1759408255.2267] manager: (tapbaa241e6-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Oct  2 08:30:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:55Z|00243|binding|INFO|Claiming lport baa241e6-fa7d-4fea-9e14-0af61693406b for this chassis.
Oct  2 08:30:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:55Z|00244|binding|INFO|baa241e6-fa7d-4fea-9e14-0af61693406b: Claiming fa:16:3e:f4:93:64 10.100.0.14
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.254 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:93:64 10.100.0.14'], port_security=['fa:16:3e:f4:93:64 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b20c27bc-0af3-4e54-a5ab-51d9d5afce82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=baa241e6-fa7d-4fea-9e14-0af61693406b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:30:55 np0005466031 systemd-udevd[267532]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.256 141898 INFO neutron.agent.ovn.metadata.agent [-] Port baa241e6-fa7d-4fea-9e14-0af61693406b in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 bound to our chassis#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.257 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711#033[00m
Oct  2 08:30:55 np0005466031 systemd-machined[192227]: New machine qemu-29-instance-0000004b.
Oct  2 08:30:55 np0005466031 NetworkManager[44907]: <info>  [1759408255.2715] device (tapbaa241e6-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:30:55 np0005466031 NetworkManager[44907]: <info>  [1759408255.2725] device (tapbaa241e6-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.272 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[edc64bef-64b1-4b19-9102-6f83e93b6fc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.273 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape21cd6a6-f1 in ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.275 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape21cd6a6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.275 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c42d0a-d6dc-46f4-8e87-f24c1c6c8e05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.276 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5cba7a-6fd2-41ec-a560-7c69b3e54a27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 systemd[1]: Started Virtual Machine qemu-29-instance-0000004b.
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.286 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[1d14fc63-3f01-42bf-9935-254e9c29b66d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.310 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe542ec-dfd1-4403-96cb-06171a1367ba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:55Z|00245|binding|INFO|Setting lport baa241e6-fa7d-4fea-9e14-0af61693406b ovn-installed in OVS
Oct  2 08:30:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:55Z|00246|binding|INFO|Setting lport baa241e6-fa7d-4fea-9e14-0af61693406b up in Southbound
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.339 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[170542ce-164e-4979-b03a-1347fc6d950d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.345 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2e46ac-06f4-4ec4-994d-2324f725e527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 NetworkManager[44907]: <info>  [1759408255.3472] manager: (tape21cd6a6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/126)
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.376 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[dfefb6be-1050-4b6e-a8a9-a33c70659283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.379 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[394858c8-bc98-45a6-9d5a-7ca766c9bb5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 NetworkManager[44907]: <info>  [1759408255.4021] device (tape21cd6a6-f0): carrier: link connected
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.408 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[3eea427d-180f-4071-aa47-c73c47f6d1b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.425 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[555edf44-2bb0-4cae-9118-89582f88d373]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611099, 'reachable_time': 44919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267615, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.440 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cf72d97e-f055-45e6-9235-3f92aeeed835]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:30ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611099, 'tstamp': 611099}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267617, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.456 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3acdbaa2-8a29-4ab8-8b6f-bb70a2160bc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611099, 'reachable_time': 44919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267618, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.480 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[51b6a29e-5d93-44bb-877b-d4b3e265d47d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.533 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fd47221c-9cfc-40e3-ac37-666bf8b464d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.535 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.535 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.535 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape21cd6a6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:55 np0005466031 kernel: tape21cd6a6-f0: entered promiscuous mode
Oct  2 08:30:55 np0005466031 NetworkManager[44907]: <info>  [1759408255.5391] manager: (tape21cd6a6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.542 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape21cd6a6-f0, col_values=(('external_ids', {'iface-id': '155c8aeb-2b8a-439c-8558-741aa183fa54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:55Z|00247|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.547 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.548 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[719ba264-bf98-4022-88c4-2d3aa2ca7f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.548 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:30:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:55.549 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'env', 'PROCESS_TAG=haproxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.661 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.801 2 DEBUG nova.network.neutron [req-8d81855e-0694-404d-bc28-ebcf6fa2c575 req-0672e80b-b945-4ad6-9fe3-aff67fab6c57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updated VIF entry in instance network info cache for port baa241e6-fa7d-4fea-9e14-0af61693406b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.802 2 DEBUG nova.network.neutron [req-8d81855e-0694-404d-bc28-ebcf6fa2c575 req-0672e80b-b945-4ad6-9fe3-aff67fab6c57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updating instance_info_cache with network_info: [{"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:55 np0005466031 nova_compute[235803]: 2025-10-02 12:30:55.827 2 DEBUG oslo_concurrency.lockutils [req-8d81855e-0694-404d-bc28-ebcf6fa2c575 req-0672e80b-b945-4ad6-9fe3-aff67fab6c57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:55.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:55 np0005466031 podman[267688]: 2025-10-02 12:30:55.937183473 +0000 UTC m=+0.048211381 container create 355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:30:55 np0005466031 systemd[1]: Started libpod-conmon-355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289.scope.
Oct  2 08:30:56 np0005466031 podman[267688]: 2025-10-02 12:30:55.911634506 +0000 UTC m=+0.022662434 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:30:56 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:30:56 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a64c02ed2bc144f73325e1d49e906d95c916b84b865a2533bd85964dd9f6b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:30:56 np0005466031 podman[267688]: 2025-10-02 12:30:56.03457225 +0000 UTC m=+0.145600178 container init 355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:30:56 np0005466031 podman[267688]: 2025-10-02 12:30:56.0456915 +0000 UTC m=+0.156719408 container start 355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:30:56 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[267708]: [NOTICE]   (267712) : New worker (267714) forked
Oct  2 08:30:56 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[267708]: [NOTICE]   (267712) : Loading success.
Oct  2 08:30:56 np0005466031 nova_compute[235803]: 2025-10-02 12:30:56.397 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408256.396807, b20c27bc-0af3-4e54-a5ab-51d9d5afce82 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:56 np0005466031 nova_compute[235803]: 2025-10-02 12:30:56.398 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] VM Started (Lifecycle Event)#033[00m
Oct  2 08:30:56 np0005466031 nova_compute[235803]: 2025-10-02 12:30:56.416 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:56 np0005466031 nova_compute[235803]: 2025-10-02 12:30:56.419 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408256.3976464, b20c27bc-0af3-4e54-a5ab-51d9d5afce82 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:56 np0005466031 nova_compute[235803]: 2025-10-02 12:30:56.419 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:30:56 np0005466031 nova_compute[235803]: 2025-10-02 12:30:56.446 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:56 np0005466031 nova_compute[235803]: 2025-10-02 12:30:56.450 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:30:56 np0005466031 nova_compute[235803]: 2025-10-02 12:30:56.477 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:30:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:56.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:30:57.288 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.710 2 DEBUG nova.compute.manager [req-273cf6eb-0afd-4447-b089-fac56a2e2214 req-f649a94e-14fe-433c-b43d-d82578cf357f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.711 2 DEBUG oslo_concurrency.lockutils [req-273cf6eb-0afd-4447-b089-fac56a2e2214 req-f649a94e-14fe-433c-b43d-d82578cf357f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.711 2 DEBUG oslo_concurrency.lockutils [req-273cf6eb-0afd-4447-b089-fac56a2e2214 req-f649a94e-14fe-433c-b43d-d82578cf357f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.712 2 DEBUG oslo_concurrency.lockutils [req-273cf6eb-0afd-4447-b089-fac56a2e2214 req-f649a94e-14fe-433c-b43d-d82578cf357f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.712 2 DEBUG nova.compute.manager [req-273cf6eb-0afd-4447-b089-fac56a2e2214 req-f649a94e-14fe-433c-b43d-d82578cf357f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Processing event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.712 2 DEBUG nova.compute.manager [req-273cf6eb-0afd-4447-b089-fac56a2e2214 req-f649a94e-14fe-433c-b43d-d82578cf357f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.713 2 DEBUG oslo_concurrency.lockutils [req-273cf6eb-0afd-4447-b089-fac56a2e2214 req-f649a94e-14fe-433c-b43d-d82578cf357f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.713 2 DEBUG oslo_concurrency.lockutils [req-273cf6eb-0afd-4447-b089-fac56a2e2214 req-f649a94e-14fe-433c-b43d-d82578cf357f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.713 2 DEBUG oslo_concurrency.lockutils [req-273cf6eb-0afd-4447-b089-fac56a2e2214 req-f649a94e-14fe-433c-b43d-d82578cf357f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.714 2 DEBUG nova.compute.manager [req-273cf6eb-0afd-4447-b089-fac56a2e2214 req-f649a94e-14fe-433c-b43d-d82578cf357f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] No waiting events found dispatching network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.714 2 WARNING nova.compute.manager [req-273cf6eb-0afd-4447-b089-fac56a2e2214 req-f649a94e-14fe-433c-b43d-d82578cf357f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received unexpected event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.715 2 DEBUG nova.compute.manager [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.718 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408257.7181668, b20c27bc-0af3-4e54-a5ab-51d9d5afce82 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.718 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.720 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.723 2 INFO nova.virt.libvirt.driver [-] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Instance spawned successfully.#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.723 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.750 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.755 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.758 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.759 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.760 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.760 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.761 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.761 2 DEBUG nova.virt.libvirt.driver [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.794 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.828 2 INFO nova.compute.manager [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Took 9.70 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.829 2 DEBUG nova.compute.manager [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.898 2 INFO nova.compute.manager [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Took 11.16 seconds to build instance.#033[00m
Oct  2 08:30:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:57.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:57 np0005466031 nova_compute[235803]: 2025-10-02 12:30:57.920 2 DEBUG oslo_concurrency.lockutils [None req-e6cea5f1-f432-4025-9e8b-32be66ff4ba1 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:58 np0005466031 nova_compute[235803]: 2025-10-02 12:30:58.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:58 np0005466031 nova_compute[235803]: 2025-10-02 12:30:58.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:58.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:59 np0005466031 ovn_controller[132413]: 2025-10-02T12:30:59Z|00248|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct  2 08:30:59 np0005466031 nova_compute[235803]: 2025-10-02 12:30:59.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:30:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:30:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:59.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:00.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:01.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:02.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:03 np0005466031 nova_compute[235803]: 2025-10-02 12:31:03.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:03 np0005466031 nova_compute[235803]: 2025-10-02 12:31:03.252 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408248.1572232, 660bced2-3ec3-45ab-bb6f-155853e3d658 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:03 np0005466031 nova_compute[235803]: 2025-10-02 12:31:03.252 2 INFO nova.compute.manager [-] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:31:03 np0005466031 nova_compute[235803]: 2025-10-02 12:31:03.283 2 DEBUG nova.compute.manager [None req-aba6a78d-014a-4dae-9cbe-4564a5a62a3f - - - - - -] [instance: 660bced2-3ec3-45ab-bb6f-155853e3d658] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:03 np0005466031 nova_compute[235803]: 2025-10-02 12:31:03.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:03.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:04 np0005466031 nova_compute[235803]: 2025-10-02 12:31:04.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:04 np0005466031 nova_compute[235803]: 2025-10-02 12:31:04.638 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:31:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:04.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:31:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/633304687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:31:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:31:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/633304687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:31:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:05.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:06.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:07.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:08 np0005466031 nova_compute[235803]: 2025-10-02 12:31:08.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:08 np0005466031 nova_compute[235803]: 2025-10-02 12:31:08.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:08.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:09.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:10 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct  2 08:31:10 np0005466031 nova_compute[235803]: 2025-10-02 12:31:10.394 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:31:10 np0005466031 nova_compute[235803]: 2025-10-02 12:31:10.395 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquired lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:31:10 np0005466031 nova_compute[235803]: 2025-10-02 12:31:10.395 2 DEBUG nova.network.neutron [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:31:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:10.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:11.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:12.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:13 np0005466031 nova_compute[235803]: 2025-10-02 12:31:13.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:13 np0005466031 nova_compute[235803]: 2025-10-02 12:31:13.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:13.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:31:14Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:93:64 10.100.0.14
Oct  2 08:31:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:31:14Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:93:64 10.100.0.14
Oct  2 08:31:14 np0005466031 podman[267733]: 2025-10-02 12:31:14.626357652 +0000 UTC m=+0.054457681 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 08:31:14 np0005466031 podman[267734]: 2025-10-02 12:31:14.660376652 +0000 UTC m=+0.087743960 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:31:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:14.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:15.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:17.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:17.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:18 np0005466031 nova_compute[235803]: 2025-10-02 12:31:18.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:18 np0005466031 nova_compute[235803]: 2025-10-02 12:31:18.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:31:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:19.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:19.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:31:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:21.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:21.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:22 np0005466031 nova_compute[235803]: 2025-10-02 12:31:22.734 2 DEBUG nova.network.neutron [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updating instance_info_cache with network_info: [{"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:31:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:31:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:23.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:23 np0005466031 nova_compute[235803]: 2025-10-02 12:31:23.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:23 np0005466031 nova_compute[235803]: 2025-10-02 12:31:23.376 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Releasing lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:31:23 np0005466031 podman[267829]: 2025-10-02 12:31:23.633446743 +0000 UTC m=+0.062142592 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:31:23 np0005466031 podman[267830]: 2025-10-02 12:31:23.642326759 +0000 UTC m=+0.063531102 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:31:23 np0005466031 nova_compute[235803]: 2025-10-02 12:31:23.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:23 np0005466031 nova_compute[235803]: 2025-10-02 12:31:23.873 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Oct  2 08:31:23 np0005466031 nova_compute[235803]: 2025-10-02 12:31:23.873 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Creating file /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/151181dc48454a31916814437c1c2663.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Oct  2 08:31:23 np0005466031 nova_compute[235803]: 2025-10-02 12:31:23.874 2 DEBUG oslo_concurrency.processutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/151181dc48454a31916814437c1c2663.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:23.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:24 np0005466031 nova_compute[235803]: 2025-10-02 12:31:24.400 2 DEBUG oslo_concurrency.processutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/151181dc48454a31916814437c1c2663.tmp" returned: 1 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:24 np0005466031 nova_compute[235803]: 2025-10-02 12:31:24.401 2 DEBUG oslo_concurrency.processutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82/151181dc48454a31916814437c1c2663.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Oct  2 08:31:24 np0005466031 nova_compute[235803]: 2025-10-02 12:31:24.401 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Creating directory /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Oct  2 08:31:24 np0005466031 nova_compute[235803]: 2025-10-02 12:31:24.402 2 DEBUG oslo_concurrency.processutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:24 np0005466031 nova_compute[235803]: 2025-10-02 12:31:24.631 2 DEBUG oslo_concurrency.processutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/b20c27bc-0af3-4e54-a5ab-51d9d5afce82" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:24 np0005466031 nova_compute[235803]: 2025-10-02 12:31:24.635 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:31:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:31:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:25.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:25.838 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:25.838 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:25.839 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:25.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:27.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000057s ======
Oct  2 08:31:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:27.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.660 2 INFO nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Instance shutdown successfully after 4 seconds.#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:28 np0005466031 kernel: tapbaa241e6-fa (unregistering): left promiscuous mode
Oct  2 08:31:28 np0005466031 NetworkManager[44907]: <info>  [1759408288.7336] device (tapbaa241e6-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:28 np0005466031 ovn_controller[132413]: 2025-10-02T12:31:28Z|00249|binding|INFO|Releasing lport baa241e6-fa7d-4fea-9e14-0af61693406b from this chassis (sb_readonly=0)
Oct  2 08:31:28 np0005466031 ovn_controller[132413]: 2025-10-02T12:31:28Z|00250|binding|INFO|Setting lport baa241e6-fa7d-4fea-9e14-0af61693406b down in Southbound
Oct  2 08:31:28 np0005466031 ovn_controller[132413]: 2025-10-02T12:31:28Z|00251|binding|INFO|Removing iface tapbaa241e6-fa ovn-installed in OVS
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:28 np0005466031 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Oct  2 08:31:28 np0005466031 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000004b.scope: Consumed 14.261s CPU time.
Oct  2 08:31:28 np0005466031 systemd-machined[192227]: Machine qemu-29-instance-0000004b terminated.
Oct  2 08:31:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:28.865 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:93:64 10.100.0.14'], port_security=['fa:16:3e:f4:93:64 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b20c27bc-0af3-4e54-a5ab-51d9d5afce82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=baa241e6-fa7d-4fea-9e14-0af61693406b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:31:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:28.867 141898 INFO neutron.agent.ovn.metadata.agent [-] Port baa241e6-fa7d-4fea-9e14-0af61693406b in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 unbound from our chassis#033[00m
Oct  2 08:31:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:28.868 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:31:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:28.870 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6a65f7-568f-499c-a7c8-48294c0795b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:28.871 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace which is not needed anymore#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.903 2 INFO nova.virt.libvirt.driver [-] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Instance destroyed successfully.#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.905 2 DEBUG nova.virt.libvirt.vif [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1329955703',display_name='tempest-ServerDiskConfigTestJSON-server-1329955703',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1329955703',id=75,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-9ne0uubk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:07Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b20c27bc-0af3-4e54-a5ab-51d9d5afce82,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:f4:93:64"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.905 2 DEBUG nova.network.os_vif_util [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:f4:93:64"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.906 2 DEBUG nova.network.os_vif_util [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.906 2 DEBUG os_vif [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbaa241e6-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.914 2 INFO os_vif [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa')#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.919 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:31:28 np0005466031 nova_compute[235803]: 2025-10-02 12:31:28.919 2 DEBUG nova.virt.libvirt.driver [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:31:28 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[267708]: [NOTICE]   (267712) : haproxy version is 2.8.14-c23fe91
Oct  2 08:31:28 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[267708]: [NOTICE]   (267712) : path to executable is /usr/sbin/haproxy
Oct  2 08:31:28 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[267708]: [WARNING]  (267712) : Exiting Master process...
Oct  2 08:31:28 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[267708]: [WARNING]  (267712) : Exiting Master process...
Oct  2 08:31:28 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[267708]: [ALERT]    (267712) : Current worker (267714) exited with code 143 (Terminated)
Oct  2 08:31:28 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[267708]: [WARNING]  (267712) : All workers exited. Exiting... (0)
Oct  2 08:31:28 np0005466031 systemd[1]: libpod-355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289.scope: Deactivated successfully.
Oct  2 08:31:28 np0005466031 podman[267907]: 2025-10-02 12:31:28.997868806 +0000 UTC m=+0.044513894 container died 355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:31:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:31:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:29.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:31:29 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289-userdata-shm.mount: Deactivated successfully.
Oct  2 08:31:29 np0005466031 systemd[1]: var-lib-containers-storage-overlay-25a64c02ed2bc144f73325e1d49e906d95c916b84b865a2533bd85964dd9f6b0-merged.mount: Deactivated successfully.
Oct  2 08:31:29 np0005466031 podman[267907]: 2025-10-02 12:31:29.047750083 +0000 UTC m=+0.094395171 container cleanup 355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:31:29 np0005466031 systemd[1]: libpod-conmon-355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289.scope: Deactivated successfully.
Oct  2 08:31:29 np0005466031 podman[267937]: 2025-10-02 12:31:29.122944891 +0000 UTC m=+0.050973101 container remove 355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:31:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:29.131 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[67cc3917-b93f-4dbe-bb00-86b52a42faec]: (4, ('Thu Oct  2 12:31:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289)\n355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289\nThu Oct  2 12:31:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289)\n355e3562c9d9cc17d19ec60187dbb33e232ecbb42eddc3b59d5210c6c6f4d289\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:29.134 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[be652b64-41b2-4d09-80b1-8a8b5a067e3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:29.135 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:29 np0005466031 nova_compute[235803]: 2025-10-02 12:31:29.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:29 np0005466031 kernel: tape21cd6a6-f0: left promiscuous mode
Oct  2 08:31:29 np0005466031 nova_compute[235803]: 2025-10-02 12:31:29.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:29.142 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fdaf624a-b2b3-44fe-9005-823d740499c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:29 np0005466031 nova_compute[235803]: 2025-10-02 12:31:29.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:29.169 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[34c97a88-4082-47cf-949c-5d2d9301832c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:29.170 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9a9689-9c54-4027-af2f-62927cfc608b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:29.187 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7e239652-6ef7-46c2-a796-74eb4cc64439]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611092, 'reachable_time': 21196, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267952, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:29 np0005466031 systemd[1]: run-netns-ovnmeta\x2de21cd6a6\x2df7fd\x2d48ec\x2d8f87\x2dbbcc167f5711.mount: Deactivated successfully.
Oct  2 08:31:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:29.191 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:31:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:29.192 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[d869ebcb-ed12-4df0-af89-a84f9a21dbc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:29 np0005466031 nova_compute[235803]: 2025-10-02 12:31:29.191 2 DEBUG neutronclient.v2_0.client [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port baa241e6-fa7d-4fea-9e14-0af61693406b for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:31:29 np0005466031 nova_compute[235803]: 2025-10-02 12:31:29.768 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:29 np0005466031 nova_compute[235803]: 2025-10-02 12:31:29.769 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:29 np0005466031 nova_compute[235803]: 2025-10-02 12:31:29.769 2 DEBUG oslo_concurrency.lockutils [None req-d0d36fbe-ae98-4f75-a98e-e33fb0e627e0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:29 np0005466031 nova_compute[235803]: 2025-10-02 12:31:29.915 2 DEBUG nova.compute.manager [req-c5fd6bdc-3d1a-4217-8b3d-56a2c88388d7 req-8f3416a3-27a7-4d08-893a-e2658e572bd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-unplugged-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:29 np0005466031 nova_compute[235803]: 2025-10-02 12:31:29.916 2 DEBUG oslo_concurrency.lockutils [req-c5fd6bdc-3d1a-4217-8b3d-56a2c88388d7 req-8f3416a3-27a7-4d08-893a-e2658e572bd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:29 np0005466031 nova_compute[235803]: 2025-10-02 12:31:29.916 2 DEBUG oslo_concurrency.lockutils [req-c5fd6bdc-3d1a-4217-8b3d-56a2c88388d7 req-8f3416a3-27a7-4d08-893a-e2658e572bd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:29 np0005466031 nova_compute[235803]: 2025-10-02 12:31:29.916 2 DEBUG oslo_concurrency.lockutils [req-c5fd6bdc-3d1a-4217-8b3d-56a2c88388d7 req-8f3416a3-27a7-4d08-893a-e2658e572bd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:29 np0005466031 nova_compute[235803]: 2025-10-02 12:31:29.917 2 DEBUG nova.compute.manager [req-c5fd6bdc-3d1a-4217-8b3d-56a2c88388d7 req-8f3416a3-27a7-4d08-893a-e2658e572bd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] No waiting events found dispatching network-vif-unplugged-baa241e6-fa7d-4fea-9e14-0af61693406b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:29 np0005466031 nova_compute[235803]: 2025-10-02 12:31:29.917 2 WARNING nova.compute.manager [req-c5fd6bdc-3d1a-4217-8b3d-56a2c88388d7 req-8f3416a3-27a7-4d08-893a-e2658e572bd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received unexpected event network-vif-unplugged-baa241e6-fa7d-4fea-9e14-0af61693406b for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:31:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:29.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:31.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:31.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:32 np0005466031 nova_compute[235803]: 2025-10-02 12:31:32.822 2 DEBUG nova.compute.manager [req-75309f44-b47b-44f6-9617-47c72bcc64d9 req-8739b5e1-4d4d-460c-882e-e5738746757b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:32 np0005466031 nova_compute[235803]: 2025-10-02 12:31:32.822 2 DEBUG oslo_concurrency.lockutils [req-75309f44-b47b-44f6-9617-47c72bcc64d9 req-8739b5e1-4d4d-460c-882e-e5738746757b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:32 np0005466031 nova_compute[235803]: 2025-10-02 12:31:32.823 2 DEBUG oslo_concurrency.lockutils [req-75309f44-b47b-44f6-9617-47c72bcc64d9 req-8739b5e1-4d4d-460c-882e-e5738746757b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:32 np0005466031 nova_compute[235803]: 2025-10-02 12:31:32.823 2 DEBUG oslo_concurrency.lockutils [req-75309f44-b47b-44f6-9617-47c72bcc64d9 req-8739b5e1-4d4d-460c-882e-e5738746757b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:32 np0005466031 nova_compute[235803]: 2025-10-02 12:31:32.823 2 DEBUG nova.compute.manager [req-75309f44-b47b-44f6-9617-47c72bcc64d9 req-8739b5e1-4d4d-460c-882e-e5738746757b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] No waiting events found dispatching network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:32 np0005466031 nova_compute[235803]: 2025-10-02 12:31:32.823 2 WARNING nova.compute.manager [req-75309f44-b47b-44f6-9617-47c72bcc64d9 req-8739b5e1-4d4d-460c-882e-e5738746757b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received unexpected event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:31:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:31:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:33.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:31:33 np0005466031 nova_compute[235803]: 2025-10-02 12:31:33.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:33 np0005466031 nova_compute[235803]: 2025-10-02 12:31:33.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:31:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:33.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:34 np0005466031 nova_compute[235803]: 2025-10-02 12:31:34.986 2 DEBUG nova.compute.manager [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-changed-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:34 np0005466031 nova_compute[235803]: 2025-10-02 12:31:34.986 2 DEBUG nova.compute.manager [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Refreshing instance network info cache due to event network-changed-baa241e6-fa7d-4fea-9e14-0af61693406b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:31:34 np0005466031 nova_compute[235803]: 2025-10-02 12:31:34.986 2 DEBUG oslo_concurrency.lockutils [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:31:34 np0005466031 nova_compute[235803]: 2025-10-02 12:31:34.987 2 DEBUG oslo_concurrency.lockutils [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:31:34 np0005466031 nova_compute[235803]: 2025-10-02 12:31:34.987 2 DEBUG nova.network.neutron [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Refreshing network info cache for port baa241e6-fa7d-4fea-9e14-0af61693406b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:31:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:35.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:35.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:36 np0005466031 nova_compute[235803]: 2025-10-02 12:31:36.418 2 DEBUG nova.network.neutron [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updated VIF entry in instance network info cache for port baa241e6-fa7d-4fea-9e14-0af61693406b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:31:36 np0005466031 nova_compute[235803]: 2025-10-02 12:31:36.418 2 DEBUG nova.network.neutron [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updating instance_info_cache with network_info: [{"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:31:36 np0005466031 nova_compute[235803]: 2025-10-02 12:31:36.440 2 DEBUG oslo_concurrency.lockutils [req-d1ddb3ed-f1e2-42c5-ad21-e16fa6241dfa req-76a19dc3-d2e7-47e8-b118-89c810b1af87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:31:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:31:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:37.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e240 e240: 3 total, 3 up, 3 in
Oct  2 08:31:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:37.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:38 np0005466031 nova_compute[235803]: 2025-10-02 12:31:38.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:38 np0005466031 nova_compute[235803]: 2025-10-02 12:31:38.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:39.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:39.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:40 np0005466031 nova_compute[235803]: 2025-10-02 12:31:40.687 2 DEBUG nova.compute.manager [req-ae03bd73-b5f4-4a72-8b21-6cc6161cf0b7 req-6bba2d6d-fcb6-48b1-906d-6dfb07d25208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:40 np0005466031 nova_compute[235803]: 2025-10-02 12:31:40.688 2 DEBUG oslo_concurrency.lockutils [req-ae03bd73-b5f4-4a72-8b21-6cc6161cf0b7 req-6bba2d6d-fcb6-48b1-906d-6dfb07d25208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:40 np0005466031 nova_compute[235803]: 2025-10-02 12:31:40.688 2 DEBUG oslo_concurrency.lockutils [req-ae03bd73-b5f4-4a72-8b21-6cc6161cf0b7 req-6bba2d6d-fcb6-48b1-906d-6dfb07d25208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:40 np0005466031 nova_compute[235803]: 2025-10-02 12:31:40.688 2 DEBUG oslo_concurrency.lockutils [req-ae03bd73-b5f4-4a72-8b21-6cc6161cf0b7 req-6bba2d6d-fcb6-48b1-906d-6dfb07d25208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:40 np0005466031 nova_compute[235803]: 2025-10-02 12:31:40.689 2 DEBUG nova.compute.manager [req-ae03bd73-b5f4-4a72-8b21-6cc6161cf0b7 req-6bba2d6d-fcb6-48b1-906d-6dfb07d25208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] No waiting events found dispatching network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:40 np0005466031 nova_compute[235803]: 2025-10-02 12:31:40.689 2 WARNING nova.compute.manager [req-ae03bd73-b5f4-4a72-8b21-6cc6161cf0b7 req-6bba2d6d-fcb6-48b1-906d-6dfb07d25208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received unexpected event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b for instance with vm_state active and task_state resize_finish.#033[00m
Oct  2 08:31:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:31:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:41.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:41.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:42 np0005466031 nova_compute[235803]: 2025-10-02 12:31:42.875 2 DEBUG nova.compute.manager [req-789e8015-a629-4416-a72e-dc1f8bf74b7a req-6d47c621-773a-4917-ac6d-bfae0252b5f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:42 np0005466031 nova_compute[235803]: 2025-10-02 12:31:42.875 2 DEBUG oslo_concurrency.lockutils [req-789e8015-a629-4416-a72e-dc1f8bf74b7a req-6d47c621-773a-4917-ac6d-bfae0252b5f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:42 np0005466031 nova_compute[235803]: 2025-10-02 12:31:42.876 2 DEBUG oslo_concurrency.lockutils [req-789e8015-a629-4416-a72e-dc1f8bf74b7a req-6d47c621-773a-4917-ac6d-bfae0252b5f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:42 np0005466031 nova_compute[235803]: 2025-10-02 12:31:42.876 2 DEBUG oslo_concurrency.lockutils [req-789e8015-a629-4416-a72e-dc1f8bf74b7a req-6d47c621-773a-4917-ac6d-bfae0252b5f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:42 np0005466031 nova_compute[235803]: 2025-10-02 12:31:42.876 2 DEBUG nova.compute.manager [req-789e8015-a629-4416-a72e-dc1f8bf74b7a req-6d47c621-773a-4917-ac6d-bfae0252b5f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] No waiting events found dispatching network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:42 np0005466031 nova_compute[235803]: 2025-10-02 12:31:42.876 2 WARNING nova.compute.manager [req-789e8015-a629-4416-a72e-dc1f8bf74b7a req-6d47c621-773a-4917-ac6d-bfae0252b5f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Received unexpected event network-vif-plugged-baa241e6-fa7d-4fea-9e14-0af61693406b for instance with vm_state resized and task_state None.#033[00m
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.030 2 DEBUG oslo_concurrency.lockutils [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.031 2 DEBUG oslo_concurrency.lockutils [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.031 2 DEBUG nova.compute.manager [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Going to confirm migration 11 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Oct  2 08:31:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:31:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:43.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:43 np0005466031 ceph-mgr[76697]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.728 2 DEBUG neutronclient.v2_0.client [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port baa241e6-fa7d-4fea-9e14-0af61693406b for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.729 2 DEBUG oslo_concurrency.lockutils [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.730 2 DEBUG oslo_concurrency.lockutils [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquired lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.730 2 DEBUG nova.network.neutron [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.731 2 DEBUG nova.objects.instance [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'info_cache' on Instance uuid b20c27bc-0af3-4e54-a5ab-51d9d5afce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.902 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408288.9009469, b20c27bc-0af3-4e54-a5ab-51d9d5afce82 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.903 2 INFO nova.compute.manager [-] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.921 2 DEBUG nova.compute.manager [None req-bdb26c7a-4b20-452d-a9e9-47a585170768 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.954 2 DEBUG nova.compute.manager [None req-bdb26c7a-4b20-452d-a9e9-47a585170768 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:31:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:43.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:43 np0005466031 nova_compute[235803]: 2025-10-02 12:31:43.996 2 INFO nova.compute.manager [None req-bdb26c7a-4b20-452d-a9e9-47a585170768 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Oct  2 08:31:44 np0005466031 nova_compute[235803]: 2025-10-02 12:31:44.653 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:44 np0005466031 nova_compute[235803]: 2025-10-02 12:31:44.671 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:44 np0005466031 nova_compute[235803]: 2025-10-02 12:31:44.672 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:44 np0005466031 nova_compute[235803]: 2025-10-02 12:31:44.672 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:44 np0005466031 nova_compute[235803]: 2025-10-02 12:31:44.672 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:31:44 np0005466031 nova_compute[235803]: 2025-10-02 12:31:44.672 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:45.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:31:45 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/231615588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.093 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.153 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.153 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:31:45 np0005466031 podman[268034]: 2025-10-02 12:31:45.190792427 +0000 UTC m=+0.054258815 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:31:45 np0005466031 podman[268035]: 2025-10-02 12:31:45.224231871 +0000 UTC m=+0.088790730 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.305 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.306 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4592MB free_disk=20.92181396484375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.307 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.307 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.351 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Migration for instance b20c27bc-0af3-4e54-a5ab-51d9d5afce82 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.371 2 INFO nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updating resource usage from migration 2151dc52-824f-4888-8c0d-1661bb797446#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.371 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Starting to track outgoing migration 2151dc52-824f-4888-8c0d-1661bb797446 with flavor 99c52872-4e37-4be3-86cc-757b8f375aa8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.394 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Migration 2151dc52-824f-4888-8c0d-1661bb797446 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.394 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.394 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.434 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:31:45 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3674039429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.882 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.889 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.909 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.934 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:31:45 np0005466031 nova_compute[235803]: 2025-10-02 12:31:45.935 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:45.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:46 np0005466031 nova_compute[235803]: 2025-10-02 12:31:46.072 2 DEBUG nova.network.neutron [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b20c27bc-0af3-4e54-a5ab-51d9d5afce82] Updating instance_info_cache with network_info: [{"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:31:46 np0005466031 nova_compute[235803]: 2025-10-02 12:31:46.089 2 DEBUG oslo_concurrency.lockutils [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Releasing lock "refresh_cache-b20c27bc-0af3-4e54-a5ab-51d9d5afce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:31:46 np0005466031 nova_compute[235803]: 2025-10-02 12:31:46.090 2 DEBUG nova.objects.instance [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'migration_context' on Instance uuid b20c27bc-0af3-4e54-a5ab-51d9d5afce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:46 np0005466031 nova_compute[235803]: 2025-10-02 12:31:46.207 2 DEBUG nova.storage.rbd_utils [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] removing snapshot(nova-resize) on rbd image(b20c27bc-0af3-4e54-a5ab-51d9d5afce82_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:31:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e241 e241: 3 total, 3 up, 3 in
Oct  2 08:31:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:47.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.347 2 DEBUG nova.virt.libvirt.vif [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1329955703',display_name='tempest-ServerDiskConfigTestJSON-server-1329955703',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1329955703',id=75,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-9ne0uubk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:41Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b20c27bc-0af3-4e54-a5ab-51d9d5afce82,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.348 2 DEBUG nova.network.os_vif_util [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "baa241e6-fa7d-4fea-9e14-0af61693406b", "address": "fa:16:3e:f4:93:64", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbaa241e6-fa", "ovs_interfaceid": "baa241e6-fa7d-4fea-9e14-0af61693406b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.349 2 DEBUG nova.network.os_vif_util [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.350 2 DEBUG os_vif [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.353 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbaa241e6-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.353 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.356 2 INFO os_vif [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:93:64,bridge_name='br-int',has_traffic_filtering=True,id=baa241e6-fa7d-4fea-9e14-0af61693406b,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbaa241e6-fa')#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.356 2 DEBUG oslo_concurrency.lockutils [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.357 2 DEBUG oslo_concurrency.lockutils [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.435 2 DEBUG oslo_concurrency.processutils [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:31:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3148228124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.893 2 DEBUG oslo_concurrency.processutils [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.899 2 DEBUG nova.compute.provider_tree [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.912 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.913 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.913 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.917 2 DEBUG nova.scheduler.client.report [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:31:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:31:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:47.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:47 np0005466031 nova_compute[235803]: 2025-10-02 12:31:47.984 2 DEBUG oslo_concurrency.lockutils [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:48 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:31:48 np0005466031 nova_compute[235803]: 2025-10-02 12:31:48.094 2 INFO nova.scheduler.client.report [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Deleted allocation for migration 2151dc52-824f-4888-8c0d-1661bb797446#033[00m
Oct  2 08:31:48 np0005466031 nova_compute[235803]: 2025-10-02 12:31:48.143 2 DEBUG oslo_concurrency.lockutils [None req-e6839cbb-ae5e-4e31-8e51-93918286becc 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b20c27bc-0af3-4e54-a5ab-51d9d5afce82" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:48 np0005466031 nova_compute[235803]: 2025-10-02 12:31:48.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:48 np0005466031 nova_compute[235803]: 2025-10-02 12:31:48.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:31:48 np0005466031 nova_compute[235803]: 2025-10-02 12:31:48.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:31:48 np0005466031 nova_compute[235803]: 2025-10-02 12:31:48.652 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:31:48 np0005466031 nova_compute[235803]: 2025-10-02 12:31:48.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:48 np0005466031 nova_compute[235803]: 2025-10-02 12:31:48.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:31:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:31:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:31:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:49.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:49 np0005466031 nova_compute[235803]: 2025-10-02 12:31:49.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:49 np0005466031 nova_compute[235803]: 2025-10-02 12:31:49.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:49.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:51.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:51.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:52 np0005466031 nova_compute[235803]: 2025-10-02 12:31:52.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:52 np0005466031 nova_compute[235803]: 2025-10-02 12:31:52.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:52 np0005466031 nova_compute[235803]: 2025-10-02 12:31:52.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:31:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:53.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:53 np0005466031 nova_compute[235803]: 2025-10-02 12:31:53.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:53 np0005466031 nova_compute[235803]: 2025-10-02 12:31:53.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:31:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:53.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:54 np0005466031 podman[268294]: 2025-10-02 12:31:54.616563304 +0000 UTC m=+0.043769993 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:31:54 np0005466031 podman[268293]: 2025-10-02 12:31:54.623092232 +0000 UTC m=+0.052277488 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 08:31:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:55.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e242 e242: 3 total, 3 up, 3 in
Oct  2 08:31:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:55.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:56 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:31:56 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:31:57 np0005466031 nova_compute[235803]: 2025-10-02 12:31:57.029 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:57 np0005466031 nova_compute[235803]: 2025-10-02 12:31:57.029 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:57.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:57 np0005466031 nova_compute[235803]: 2025-10-02 12:31:57.075 2 DEBUG nova.compute.manager [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:31:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:57.076 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:31:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:31:57.077 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:31:57 np0005466031 nova_compute[235803]: 2025-10-02 12:31:57.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:57 np0005466031 nova_compute[235803]: 2025-10-02 12:31:57.539 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:57 np0005466031 nova_compute[235803]: 2025-10-02 12:31:57.540 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:57 np0005466031 nova_compute[235803]: 2025-10-02 12:31:57.555 2 DEBUG nova.virt.hardware [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:31:57 np0005466031 nova_compute[235803]: 2025-10-02 12:31:57.555 2 INFO nova.compute.claims [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:31:57 np0005466031 nova_compute[235803]: 2025-10-02 12:31:57.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:57 np0005466031 nova_compute[235803]: 2025-10-02 12:31:57.716 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:57.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:31:58 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1408219479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:31:58 np0005466031 nova_compute[235803]: 2025-10-02 12:31:58.135 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:31:58 np0005466031 nova_compute[235803]: 2025-10-02 12:31:58.140 2 DEBUG nova.compute.provider_tree [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:31:58 np0005466031 nova_compute[235803]: 2025-10-02 12:31:58.471 2 DEBUG nova.scheduler.client.report [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:31:58 np0005466031 nova_compute[235803]: 2025-10-02 12:31:58.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:31:58 np0005466031 nova_compute[235803]: 2025-10-02 12:31:58.961 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:31:58 np0005466031 nova_compute[235803]: 2025-10-02 12:31:58.961 2 DEBUG nova.compute.manager [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:31:58 np0005466031 nova_compute[235803]: 2025-10-02 12:31:58.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.056 2 DEBUG nova.compute.manager [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.056 2 DEBUG nova.network.neutron [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:31:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:31:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:59.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.093 2 INFO nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.129 2 DEBUG nova.compute.manager [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.250 2 DEBUG nova.compute.manager [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.251 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.251 2 INFO nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Creating image(s)
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.275 2 DEBUG nova.storage.rbd_utils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b6c2a016-125f-4f83-a284-5a2d50805121_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.301 2 DEBUG nova.storage.rbd_utils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b6c2a016-125f-4f83-a284-5a2d50805121_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.329 2 DEBUG nova.storage.rbd_utils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b6c2a016-125f-4f83-a284-5a2d50805121_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.333 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.406 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.407 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.407 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.408 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.433 2 DEBUG nova.storage.rbd_utils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b6c2a016-125f-4f83-a284-5a2d50805121_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.436 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b6c2a016-125f-4f83-a284-5a2d50805121_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.464 2 DEBUG nova.policy [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '28d5425714b04888ba9e6112879fae33', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.697 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b6c2a016-125f-4f83-a284-5a2d50805121_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.756 2 DEBUG nova.storage.rbd_utils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] resizing rbd image b6c2a016-125f-4f83-a284-5a2d50805121_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.866 2 DEBUG nova.objects.instance [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'migration_context' on Instance uuid b6c2a016-125f-4f83-a284-5a2d50805121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:31:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.886 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.886 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Ensure instance console log exists: /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.887 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.887 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:31:59 np0005466031 nova_compute[235803]: 2025-10-02 12:31:59.887 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:31:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:31:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:59.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:01.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:01 np0005466031 nova_compute[235803]: 2025-10-02 12:32:01.127 2 DEBUG nova.network.neutron [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Successfully created port: 2ce66710-95c3-4fa0-999b-b7cf0b722cac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:32:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:32:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:01.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:32:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:03.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:03 np0005466031 nova_compute[235803]: 2025-10-02 12:32:03.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:32:03 np0005466031 nova_compute[235803]: 2025-10-02 12:32:03.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:32:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:03.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:05.078 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:32:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:05.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:05.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:07.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:07 np0005466031 nova_compute[235803]: 2025-10-02 12:32:07.306 2 DEBUG nova.network.neutron [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Successfully updated port: 2ce66710-95c3-4fa0-999b-b7cf0b722cac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:32:07 np0005466031 nova_compute[235803]: 2025-10-02 12:32:07.498 2 DEBUG nova.compute.manager [req-53638be2-c308-49d0-892a-da8ca9468a09 req-a91b657f-cfaf-4ee5-965f-d2a9615984ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-changed-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:32:07 np0005466031 nova_compute[235803]: 2025-10-02 12:32:07.499 2 DEBUG nova.compute.manager [req-53638be2-c308-49d0-892a-da8ca9468a09 req-a91b657f-cfaf-4ee5-965f-d2a9615984ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Refreshing instance network info cache due to event network-changed-2ce66710-95c3-4fa0-999b-b7cf0b722cac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:32:07 np0005466031 nova_compute[235803]: 2025-10-02 12:32:07.499 2 DEBUG oslo_concurrency.lockutils [req-53638be2-c308-49d0-892a-da8ca9468a09 req-a91b657f-cfaf-4ee5-965f-d2a9615984ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:32:07 np0005466031 nova_compute[235803]: 2025-10-02 12:32:07.499 2 DEBUG oslo_concurrency.lockutils [req-53638be2-c308-49d0-892a-da8ca9468a09 req-a91b657f-cfaf-4ee5-965f-d2a9615984ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:32:07 np0005466031 nova_compute[235803]: 2025-10-02 12:32:07.499 2 DEBUG nova.network.neutron [req-53638be2-c308-49d0-892a-da8ca9468a09 req-a91b657f-cfaf-4ee5-965f-d2a9615984ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Refreshing network info cache for port 2ce66710-95c3-4fa0-999b-b7cf0b722cac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:32:07 np0005466031 nova_compute[235803]: 2025-10-02 12:32:07.589 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:32:07 np0005466031 nova_compute[235803]: 2025-10-02 12:32:07.969 2 DEBUG nova.network.neutron [req-53638be2-c308-49d0-892a-da8ca9468a09 req-a91b657f-cfaf-4ee5-965f-d2a9615984ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:32:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:07.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:08 np0005466031 nova_compute[235803]: 2025-10-02 12:32:08.429 2 DEBUG nova.network.neutron [req-53638be2-c308-49d0-892a-da8ca9468a09 req-a91b657f-cfaf-4ee5-965f-d2a9615984ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:32:08 np0005466031 nova_compute[235803]: 2025-10-02 12:32:08.468 2 DEBUG oslo_concurrency.lockutils [req-53638be2-c308-49d0-892a-da8ca9468a09 req-a91b657f-cfaf-4ee5-965f-d2a9615984ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:32:08 np0005466031 nova_compute[235803]: 2025-10-02 12:32:08.470 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquired lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:32:08 np0005466031 nova_compute[235803]: 2025-10-02 12:32:08.470 2 DEBUG nova.network.neutron [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:32:08 np0005466031 nova_compute[235803]: 2025-10-02 12:32:08.654 2 DEBUG nova.network.neutron [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:32:08 np0005466031 nova_compute[235803]: 2025-10-02 12:32:08.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:32:08 np0005466031 nova_compute[235803]: 2025-10-02 12:32:08.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:32:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:09.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:09.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.020 2 DEBUG nova.network.neutron [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Updating instance_info_cache with network_info: [{"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.077 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Releasing lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.077 2 DEBUG nova.compute.manager [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Instance network_info: |[{"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.080 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Start _get_guest_xml network_info=[{"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.086 2 WARNING nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.094 2 DEBUG nova.virt.libvirt.host [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.095 2 DEBUG nova.virt.libvirt.host [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.098 2 DEBUG nova.virt.libvirt.host [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.099 2 DEBUG nova.virt.libvirt.host [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.100 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.101 2 DEBUG nova.virt.hardware [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.101 2 DEBUG nova.virt.hardware [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.101 2 DEBUG nova.virt.hardware [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.102 2 DEBUG nova.virt.hardware [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.102 2 DEBUG nova.virt.hardware [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.102 2 DEBUG nova.virt.hardware [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.103 2 DEBUG nova.virt.hardware [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.103 2 DEBUG nova.virt.hardware [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.103 2 DEBUG nova.virt.hardware [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.103 2 DEBUG nova.virt.hardware [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.104 2 DEBUG nova.virt.hardware [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.107 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:32:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2211173841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.544 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.572 2 DEBUG nova.storage.rbd_utils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b6c2a016-125f-4f83-a284-5a2d50805121_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:10 np0005466031 nova_compute[235803]: 2025-10-02 12:32:10.575 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:32:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3524789732' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.072 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.074 2 DEBUG nova.virt.libvirt.vif [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:31:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1932213760',display_name='tempest-ServerDiskConfigTestJSON-server-1932213760',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1932213760',id=78,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-3bos98gj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:59Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b6c2a016-125f-4f83-a284-5a2d50805121,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.075 2 DEBUG nova.network.os_vif_util [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.075 2 DEBUG nova.network.os_vif_util [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.076 2 DEBUG nova.objects.instance [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6c2a016-125f-4f83-a284-5a2d50805121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:11.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.097 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  <uuid>b6c2a016-125f-4f83-a284-5a2d50805121</uuid>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  <name>instance-0000004e</name>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1932213760</nova:name>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:32:10</nova:creationTime>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <nova:user uuid="28d5425714b04888ba9e6112879fae33">tempest-ServerDiskConfigTestJSON-1782236021-project-member</nova:user>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <nova:project uuid="6b5045a3aa3e42e6b66e2ec8c6bb5810">tempest-ServerDiskConfigTestJSON-1782236021</nova:project>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <nova:port uuid="2ce66710-95c3-4fa0-999b-b7cf0b722cac">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <entry name="serial">b6c2a016-125f-4f83-a284-5a2d50805121</entry>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <entry name="uuid">b6c2a016-125f-4f83-a284-5a2d50805121</entry>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/b6c2a016-125f-4f83-a284-5a2d50805121_disk">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/b6c2a016-125f-4f83-a284-5a2d50805121_disk.config">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:35:b2:fa"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <target dev="tap2ce66710-95"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/console.log" append="off"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:32:11 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:32:11 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:32:11 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:32:11 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.098 2 DEBUG nova.compute.manager [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Preparing to wait for external event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.098 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.099 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.099 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.099 2 DEBUG nova.virt.libvirt.vif [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:31:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1932213760',display_name='tempest-ServerDiskConfigTestJSON-server-1932213760',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1932213760',id=78,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-3bos98gj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:59Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b6c2a016-125f-4f83-a284-5a2d50805121,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.100 2 DEBUG nova.network.os_vif_util [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.100 2 DEBUG nova.network.os_vif_util [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.101 2 DEBUG os_vif [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.102 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.102 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ce66710-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ce66710-95, col_values=(('external_ids', {'iface-id': '2ce66710-95c3-4fa0-999b-b7cf0b722cac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:b2:fa', 'vm-uuid': 'b6c2a016-125f-4f83-a284-5a2d50805121'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:11 np0005466031 NetworkManager[44907]: <info>  [1759408331.1089] manager: (tap2ce66710-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.121 2 INFO os_vif [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95')#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.183 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.184 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.185 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No VIF found with MAC fa:16:3e:35:b2:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.185 2 INFO nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Using config drive#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.203 2 DEBUG nova.storage.rbd_utils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b6c2a016-125f-4f83-a284-5a2d50805121_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.574 2 INFO nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Creating config drive at /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/disk.config#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.583 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplx6m394e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.720 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplx6m394e" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.761 2 DEBUG nova.storage.rbd_utils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image b6c2a016-125f-4f83-a284-5a2d50805121_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:11 np0005466031 nova_compute[235803]: 2025-10-02 12:32:11.764 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/disk.config b6c2a016-125f-4f83-a284-5a2d50805121_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:11.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:13.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.273 2 DEBUG oslo_concurrency.processutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/disk.config b6c2a016-125f-4f83-a284-5a2d50805121_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.274 2 INFO nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Deleting local config drive /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/disk.config because it was imported into RBD.#033[00m
Oct  2 08:32:13 np0005466031 kernel: tap2ce66710-95: entered promiscuous mode
Oct  2 08:32:13 np0005466031 NetworkManager[44907]: <info>  [1759408333.3285] manager: (tap2ce66710-95): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Oct  2 08:32:13 np0005466031 systemd-udevd[268758]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:32:13 np0005466031 ovn_controller[132413]: 2025-10-02T12:32:13Z|00252|binding|INFO|Claiming lport 2ce66710-95c3-4fa0-999b-b7cf0b722cac for this chassis.
Oct  2 08:32:13 np0005466031 ovn_controller[132413]: 2025-10-02T12:32:13Z|00253|binding|INFO|2ce66710-95c3-4fa0-999b-b7cf0b722cac: Claiming fa:16:3e:35:b2:fa 10.100.0.14
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.395 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:b2:fa 10.100.0.14'], port_security=['fa:16:3e:35:b2:fa 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b6c2a016-125f-4f83-a284-5a2d50805121', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=2ce66710-95c3-4fa0-999b-b7cf0b722cac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.396 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 2ce66710-95c3-4fa0-999b-b7cf0b722cac in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 bound to our chassis#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.397 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711#033[00m
Oct  2 08:32:13 np0005466031 ovn_controller[132413]: 2025-10-02T12:32:13Z|00254|binding|INFO|Setting lport 2ce66710-95c3-4fa0-999b-b7cf0b722cac ovn-installed in OVS
Oct  2 08:32:13 np0005466031 ovn_controller[132413]: 2025-10-02T12:32:13Z|00255|binding|INFO|Setting lport 2ce66710-95c3-4fa0-999b-b7cf0b722cac up in Southbound
Oct  2 08:32:13 np0005466031 NetworkManager[44907]: <info>  [1759408333.4042] device (tap2ce66710-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:32:13 np0005466031 NetworkManager[44907]: <info>  [1759408333.4052] device (tap2ce66710-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.411 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[15d79fe4-5085-4864-98bb-d5d5ec2cc6d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.412 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape21cd6a6-f1 in ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.414 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape21cd6a6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.414 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4ee5b0-db79-4502-8acf-0911a902b9d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 systemd-machined[192227]: New machine qemu-30-instance-0000004e.
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.415 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b30d5280-630e-4e92-ab90-1783a7821cbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 systemd[1]: Started Virtual Machine qemu-30-instance-0000004e.
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.429 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[19aa5962-1edf-4160-9ac2-315908c45070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.443 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5b59b2fa-34d1-4729-9d60-8e9364db9a9b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.470 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a37af98a-b912-40a8-84e8-a181693d6236]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.474 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[751d1142-dbad-43a0-ab14-ad2dfe8f39d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 NetworkManager[44907]: <info>  [1759408333.4763] manager: (tape21cd6a6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/130)
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.503 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[402f901b-ec1e-41ab-b66a-d13136b876b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.506 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c520cc5c-ce51-4dc8-9def-7ad39c612bd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 NetworkManager[44907]: <info>  [1759408333.5279] device (tape21cd6a6-f0): carrier: link connected
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.533 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e266c58d-f934-4bcc-9c23-b0bb3a0c2df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.549 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[be2c7bdc-b735-4957-ba8e-b47bffb01ac9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618911, 'reachable_time': 42130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268794, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.565 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[80002754-d9e6-47ad-8cc3-8ea7ab9dffcb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:30ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618911, 'tstamp': 618911}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268795, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.586 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c00615bb-ddf3-4a6a-8786-677640adeed8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618911, 'reachable_time': 42130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268811, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.617 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[96ad1625-4343-4450-83be-1e743d4857d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.689 2 DEBUG nova.compute.manager [req-ee3b33ab-185f-4340-b437-5b7ea5d6fe8e req-233012a5-e5f0-4e2e-9ea1-221345068bb0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.690 2 DEBUG oslo_concurrency.lockutils [req-ee3b33ab-185f-4340-b437-5b7ea5d6fe8e req-233012a5-e5f0-4e2e-9ea1-221345068bb0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.690 2 DEBUG oslo_concurrency.lockutils [req-ee3b33ab-185f-4340-b437-5b7ea5d6fe8e req-233012a5-e5f0-4e2e-9ea1-221345068bb0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.690 2 DEBUG oslo_concurrency.lockutils [req-ee3b33ab-185f-4340-b437-5b7ea5d6fe8e req-233012a5-e5f0-4e2e-9ea1-221345068bb0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.690 2 DEBUG nova.compute.manager [req-ee3b33ab-185f-4340-b437-5b7ea5d6fe8e req-233012a5-e5f0-4e2e-9ea1-221345068bb0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Processing event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.690 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d695cf71-2a00-4b73-b475-847b25d5f217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.692 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.692 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.692 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape21cd6a6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:13 np0005466031 NetworkManager[44907]: <info>  [1759408333.6948] manager: (tape21cd6a6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Oct  2 08:32:13 np0005466031 kernel: tape21cd6a6-f0: entered promiscuous mode
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.699 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape21cd6a6-f0, col_values=(('external_ids', {'iface-id': '155c8aeb-2b8a-439c-8558-741aa183fa54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:13 np0005466031 ovn_controller[132413]: 2025-10-02T12:32:13Z|00256|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.721 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.722 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[590eb711-1592-47cb-bf7b-9b6c084b43b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.723 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:32:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:13.725 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'env', 'PROCESS_TAG=haproxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:32:13 np0005466031 nova_compute[235803]: 2025-10-02 12:32:13.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:13.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:14 np0005466031 podman[268869]: 2025-10-02 12:32:14.070879504 +0000 UTC m=+0.045641727 container create 371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:32:14 np0005466031 systemd[1]: Started libpod-conmon-371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc.scope.
Oct  2 08:32:14 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:32:14 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79074c81211ea521a8652092408e4c37d0723a20bbebc2b97f0d589ac31e3727/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.125 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408334.1189, b6c2a016-125f-4f83-a284-5a2d50805121 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.125 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] VM Started (Lifecycle Event)#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.127 2 DEBUG nova.compute.manager [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.131 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.134 2 INFO nova.virt.libvirt.driver [-] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Instance spawned successfully.#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.134 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:32:14 np0005466031 podman[268869]: 2025-10-02 12:32:14.136255898 +0000 UTC m=+0.111018171 container init 371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:32:14 np0005466031 podman[268869]: 2025-10-02 12:32:14.045678078 +0000 UTC m=+0.020440321 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:32:14 np0005466031 podman[268869]: 2025-10-02 12:32:14.143244989 +0000 UTC m=+0.118007252 container start 371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.156 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.163 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:32:14 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[268884]: [NOTICE]   (268888) : New worker (268890) forked
Oct  2 08:32:14 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[268884]: [NOTICE]   (268888) : Loading success.
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.166 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.166 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.167 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.167 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.168 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.168 2 DEBUG nova.virt.libvirt.driver [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.182 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.182 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408334.1189606, b6c2a016-125f-4f83-a284-5a2d50805121 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.183 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.203 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.206 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408334.1292741, b6c2a016-125f-4f83-a284-5a2d50805121 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.206 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.238 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.241 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.268 2 INFO nova.compute.manager [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Took 15.02 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.268 2 DEBUG nova.compute.manager [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.269 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.319 2 INFO nova.compute.manager [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Took 16.83 seconds to build instance.#033[00m
Oct  2 08:32:14 np0005466031 nova_compute[235803]: 2025-10-02 12:32:14.338 2 DEBUG oslo_concurrency.lockutils [None req-a1be4cad-babe-4da5-b383-11c6cf31f328 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000057s ======
Oct  2 08:32:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:15.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Oct  2 08:32:15 np0005466031 podman[268899]: 2025-10-02 12:32:15.629361222 +0000 UTC m=+0.055510481 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 08:32:15 np0005466031 podman[268900]: 2025-10-02 12:32:15.651299704 +0000 UTC m=+0.076020742 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 08:32:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:32:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:15.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:32:16 np0005466031 nova_compute[235803]: 2025-10-02 12:32:16.052 2 DEBUG nova.compute.manager [req-2f4da16e-c42f-43df-b932-0a01c7a2aa01 req-37fe3e3d-5f36-458f-b5d0-625ef886b2d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:16 np0005466031 nova_compute[235803]: 2025-10-02 12:32:16.052 2 DEBUG oslo_concurrency.lockutils [req-2f4da16e-c42f-43df-b932-0a01c7a2aa01 req-37fe3e3d-5f36-458f-b5d0-625ef886b2d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:16 np0005466031 nova_compute[235803]: 2025-10-02 12:32:16.052 2 DEBUG oslo_concurrency.lockutils [req-2f4da16e-c42f-43df-b932-0a01c7a2aa01 req-37fe3e3d-5f36-458f-b5d0-625ef886b2d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:16 np0005466031 nova_compute[235803]: 2025-10-02 12:32:16.053 2 DEBUG oslo_concurrency.lockutils [req-2f4da16e-c42f-43df-b932-0a01c7a2aa01 req-37fe3e3d-5f36-458f-b5d0-625ef886b2d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:16 np0005466031 nova_compute[235803]: 2025-10-02 12:32:16.053 2 DEBUG nova.compute.manager [req-2f4da16e-c42f-43df-b932-0a01c7a2aa01 req-37fe3e3d-5f36-458f-b5d0-625ef886b2d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] No waiting events found dispatching network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:16 np0005466031 nova_compute[235803]: 2025-10-02 12:32:16.053 2 WARNING nova.compute.manager [req-2f4da16e-c42f-43df-b932-0a01c7a2aa01 req-37fe3e3d-5f36-458f-b5d0-625ef886b2d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received unexpected event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac for instance with vm_state active and task_state None.#033[00m
Oct  2 08:32:16 np0005466031 nova_compute[235803]: 2025-10-02 12:32:16.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:17.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:17.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:18 np0005466031 nova_compute[235803]: 2025-10-02 12:32:18.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:19.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:19.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:20 np0005466031 nova_compute[235803]: 2025-10-02 12:32:20.045 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:32:20 np0005466031 nova_compute[235803]: 2025-10-02 12:32:20.046 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquired lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:32:20 np0005466031 nova_compute[235803]: 2025-10-02 12:32:20.047 2 DEBUG nova.network.neutron [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:32:21 np0005466031 nova_compute[235803]: 2025-10-02 12:32:21.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:21.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:21 np0005466031 nova_compute[235803]: 2025-10-02 12:32:21.996 2 DEBUG nova.network.neutron [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Updating instance_info_cache with network_info: [{"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:32:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:21.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:22 np0005466031 nova_compute[235803]: 2025-10-02 12:32:22.026 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Releasing lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:32:22 np0005466031 nova_compute[235803]: 2025-10-02 12:32:22.145 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Oct  2 08:32:22 np0005466031 nova_compute[235803]: 2025-10-02 12:32:22.146 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Creating file /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/c8ed48d5d39e4d01b6e0171ac61d005a.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Oct  2 08:32:22 np0005466031 nova_compute[235803]: 2025-10-02 12:32:22.146 2 DEBUG oslo_concurrency.processutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/c8ed48d5d39e4d01b6e0171ac61d005a.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:22 np0005466031 nova_compute[235803]: 2025-10-02 12:32:22.727 2 DEBUG oslo_concurrency.processutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/c8ed48d5d39e4d01b6e0171ac61d005a.tmp" returned: 1 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:22 np0005466031 nova_compute[235803]: 2025-10-02 12:32:22.728 2 DEBUG oslo_concurrency.processutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121/c8ed48d5d39e4d01b6e0171ac61d005a.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Oct  2 08:32:22 np0005466031 nova_compute[235803]: 2025-10-02 12:32:22.728 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Creating directory /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Oct  2 08:32:22 np0005466031 nova_compute[235803]: 2025-10-02 12:32:22.729 2 DEBUG oslo_concurrency.processutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:23 np0005466031 nova_compute[235803]: 2025-10-02 12:32:23.000 2 DEBUG oslo_concurrency.processutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/b6c2a016-125f-4f83-a284-5a2d50805121" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:23 np0005466031 nova_compute[235803]: 2025-10-02 12:32:23.005 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:32:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:23.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:23 np0005466031 nova_compute[235803]: 2025-10-02 12:32:23.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:23.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:25.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:25 np0005466031 podman[269001]: 2025-10-02 12:32:25.625629953 +0000 UTC m=+0.054740769 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:32:25 np0005466031 podman[269000]: 2025-10-02 12:32:25.641467689 +0000 UTC m=+0.072242033 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:32:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:25.840 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:25.841 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:25.841 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:26.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:26 np0005466031 nova_compute[235803]: 2025-10-02 12:32:26.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:27.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.004000114s ======
Oct  2 08:32:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:28.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000114s
Oct  2 08:32:28 np0005466031 nova_compute[235803]: 2025-10-02 12:32:28.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:29.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:29 np0005466031 ovn_controller[132413]: 2025-10-02T12:32:29Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:35:b2:fa 10.100.0.14
Oct  2 08:32:29 np0005466031 ovn_controller[132413]: 2025-10-02T12:32:29Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:35:b2:fa 10.100.0.14
Oct  2 08:32:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:30.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:31 np0005466031 nova_compute[235803]: 2025-10-02 12:32:31.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:31.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:32.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:33 np0005466031 nova_compute[235803]: 2025-10-02 12:32:33.051 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:32:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:33.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:33 np0005466031 nova_compute[235803]: 2025-10-02 12:32:33.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:34.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:35.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:35 np0005466031 kernel: tap2ce66710-95 (unregistering): left promiscuous mode
Oct  2 08:32:35 np0005466031 NetworkManager[44907]: <info>  [1759408355.8526] device (tap2ce66710-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:32:35 np0005466031 nova_compute[235803]: 2025-10-02 12:32:35.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:35 np0005466031 ovn_controller[132413]: 2025-10-02T12:32:35Z|00257|binding|INFO|Releasing lport 2ce66710-95c3-4fa0-999b-b7cf0b722cac from this chassis (sb_readonly=0)
Oct  2 08:32:35 np0005466031 ovn_controller[132413]: 2025-10-02T12:32:35Z|00258|binding|INFO|Setting lport 2ce66710-95c3-4fa0-999b-b7cf0b722cac down in Southbound
Oct  2 08:32:35 np0005466031 ovn_controller[132413]: 2025-10-02T12:32:35Z|00259|binding|INFO|Removing iface tap2ce66710-95 ovn-installed in OVS
Oct  2 08:32:35 np0005466031 nova_compute[235803]: 2025-10-02 12:32:35.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:35.871 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:b2:fa 10.100.0.14'], port_security=['fa:16:3e:35:b2:fa 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b6c2a016-125f-4f83-a284-5a2d50805121', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=2ce66710-95c3-4fa0-999b-b7cf0b722cac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:32:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:35.872 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 2ce66710-95c3-4fa0-999b-b7cf0b722cac in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 unbound from our chassis#033[00m
Oct  2 08:32:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:35.873 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:32:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:35.874 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b180b0c9-6964-4e38-bcc7-3d8f98f13587]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:35.875 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace which is not needed anymore#033[00m
Oct  2 08:32:35 np0005466031 nova_compute[235803]: 2025-10-02 12:32:35.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:35 np0005466031 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Oct  2 08:32:35 np0005466031 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000004e.scope: Consumed 13.231s CPU time.
Oct  2 08:32:35 np0005466031 systemd-machined[192227]: Machine qemu-30-instance-0000004e terminated.
Oct  2 08:32:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:32:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:32:36 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[268884]: [NOTICE]   (268888) : haproxy version is 2.8.14-c23fe91
Oct  2 08:32:36 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[268884]: [NOTICE]   (268888) : path to executable is /usr/sbin/haproxy
Oct  2 08:32:36 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[268884]: [WARNING]  (268888) : Exiting Master process...
Oct  2 08:32:36 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[268884]: [ALERT]    (268888) : Current worker (268890) exited with code 143 (Terminated)
Oct  2 08:32:36 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[268884]: [WARNING]  (268888) : All workers exited. Exiting... (0)
Oct  2 08:32:36 np0005466031 systemd[1]: libpod-371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc.scope: Deactivated successfully.
Oct  2 08:32:36 np0005466031 podman[269068]: 2025-10-02 12:32:36.052317949 +0000 UTC m=+0.093871276 container died 371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.099 2 INFO nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.111 2 INFO nova.virt.libvirt.driver [-] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Instance destroyed successfully.#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.113 2 DEBUG nova.virt.libvirt.vif [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1932213760',display_name='tempest-ServerDiskConfigTestJSON-server-1932213760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1932213760',id=78,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-3bos98gj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:19Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b6c2a016-125f-4f83-a284-5a2d50805121,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:35:b2:fa"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.113 2 DEBUG nova.network.os_vif_util [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "vif_mac": "fa:16:3e:35:b2:fa"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.114 2 DEBUG nova.network.os_vif_util [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.115 2 DEBUG os_vif [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.118 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ce66710-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.122 2 INFO os_vif [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95')#033[00m
Oct  2 08:32:36 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc-userdata-shm.mount: Deactivated successfully.
Oct  2 08:32:36 np0005466031 systemd[1]: var-lib-containers-storage-overlay-79074c81211ea521a8652092408e4c37d0723a20bbebc2b97f0d589ac31e3727-merged.mount: Deactivated successfully.
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.126 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.127 2 DEBUG nova.virt.libvirt.driver [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:36 np0005466031 podman[269068]: 2025-10-02 12:32:36.161910357 +0000 UTC m=+0.203463684 container cleanup 371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:32:36 np0005466031 systemd[1]: libpod-conmon-371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc.scope: Deactivated successfully.
Oct  2 08:32:36 np0005466031 podman[269138]: 2025-10-02 12:32:36.251780248 +0000 UTC m=+0.067151917 container remove 371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:32:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:36.261 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[584ebe76-3adf-4d30-9919-a880f5d8fc31]: (4, ('Thu Oct  2 12:32:35 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc)\n371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc\nThu Oct  2 12:32:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc)\n371eaea04b4199bd2839be68d8857a9d85a15fad901ec1c676f9add7ea0a93fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:36.264 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[04ac0872-440d-462a-92eb-8aff84152235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:36.266 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:36 np0005466031 kernel: tape21cd6a6-f0: left promiscuous mode
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:36.285 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[62d71873-ead2-492f-8a4c-9b4331aed33a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:36.322 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[72ab401a-91b9-4b7e-9ce8-35396b6059c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:36.325 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8634c92b-fb2c-42cf-9eac-cb9ea4cc7631]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:36.341 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9d007340-4cb1-4de9-ba7f-0e541479ca0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618905, 'reachable_time': 24252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269171, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:36 np0005466031 systemd[1]: run-netns-ovnmeta\x2de21cd6a6\x2df7fd\x2d48ec\x2d8f87\x2dbbcc167f5711.mount: Deactivated successfully.
Oct  2 08:32:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:36.346 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:32:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:32:36.346 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[d1348937-f785-4d95-85d9-40bdb5775be1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.567 2 DEBUG neutronclient.v2_0.client [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 2ce66710-95c3-4fa0-999b-b7cf0b722cac for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.674 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.675 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:36 np0005466031 nova_compute[235803]: 2025-10-02 12:32:36.675 2 DEBUG oslo_concurrency.lockutils [None req-ff36af8b-510e-4b0a-b492-ad275ecf19b8 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:37.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:38 np0005466031 nova_compute[235803]: 2025-10-02 12:32:38.007 2 DEBUG nova.compute.manager [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-unplugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:38 np0005466031 nova_compute[235803]: 2025-10-02 12:32:38.008 2 DEBUG oslo_concurrency.lockutils [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:38 np0005466031 nova_compute[235803]: 2025-10-02 12:32:38.008 2 DEBUG oslo_concurrency.lockutils [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:38 np0005466031 nova_compute[235803]: 2025-10-02 12:32:38.009 2 DEBUG oslo_concurrency.lockutils [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:38 np0005466031 nova_compute[235803]: 2025-10-02 12:32:38.009 2 DEBUG nova.compute.manager [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] No waiting events found dispatching network-vif-unplugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:38 np0005466031 nova_compute[235803]: 2025-10-02 12:32:38.009 2 WARNING nova.compute.manager [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received unexpected event network-vif-unplugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:32:38 np0005466031 nova_compute[235803]: 2025-10-02 12:32:38.009 2 DEBUG nova.compute.manager [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:38 np0005466031 nova_compute[235803]: 2025-10-02 12:32:38.009 2 DEBUG oslo_concurrency.lockutils [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:38 np0005466031 nova_compute[235803]: 2025-10-02 12:32:38.010 2 DEBUG oslo_concurrency.lockutils [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:38 np0005466031 nova_compute[235803]: 2025-10-02 12:32:38.010 2 DEBUG oslo_concurrency.lockutils [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:38 np0005466031 nova_compute[235803]: 2025-10-02 12:32:38.010 2 DEBUG nova.compute.manager [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] No waiting events found dispatching network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:38 np0005466031 nova_compute[235803]: 2025-10-02 12:32:38.010 2 WARNING nova.compute.manager [req-cc51a29b-1ac6-4fb4-9626-c7d9313ea761 req-9f99192b-de67-4b57-8c9d-78b7190cb791 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received unexpected event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:32:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:38.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:38 np0005466031 nova_compute[235803]: 2025-10-02 12:32:38.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:39.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:39 np0005466031 nova_compute[235803]: 2025-10-02 12:32:39.696 2 DEBUG nova.compute.manager [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-changed-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:39 np0005466031 nova_compute[235803]: 2025-10-02 12:32:39.696 2 DEBUG nova.compute.manager [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Refreshing instance network info cache due to event network-changed-2ce66710-95c3-4fa0-999b-b7cf0b722cac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:32:39 np0005466031 nova_compute[235803]: 2025-10-02 12:32:39.697 2 DEBUG oslo_concurrency.lockutils [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:32:39 np0005466031 nova_compute[235803]: 2025-10-02 12:32:39.697 2 DEBUG oslo_concurrency.lockutils [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:32:39 np0005466031 nova_compute[235803]: 2025-10-02 12:32:39.697 2 DEBUG nova.network.neutron [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Refreshing network info cache for port 2ce66710-95c3-4fa0-999b-b7cf0b722cac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:32:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:40.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:41 np0005466031 nova_compute[235803]: 2025-10-02 12:32:41.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:41.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:42.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e243 e243: 3 total, 3 up, 3 in
Oct  2 08:32:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:43.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:43 np0005466031 nova_compute[235803]: 2025-10-02 12:32:43.538 2 DEBUG nova.network.neutron [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Updated VIF entry in instance network info cache for port 2ce66710-95c3-4fa0-999b-b7cf0b722cac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:32:43 np0005466031 nova_compute[235803]: 2025-10-02 12:32:43.538 2 DEBUG nova.network.neutron [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Updating instance_info_cache with network_info: [{"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:32:43 np0005466031 nova_compute[235803]: 2025-10-02 12:32:43.570 2 DEBUG oslo_concurrency.lockutils [req-7ec57bfe-7c90-48af-881f-dd8a4fc0e68e req-7d95abbf-5b16-4da2-bfbf-7a9c51f4e222 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:32:43 np0005466031 nova_compute[235803]: 2025-10-02 12:32:43.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:44.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:44 np0005466031 nova_compute[235803]: 2025-10-02 12:32:44.843 2 DEBUG nova.compute.manager [req-bfadf06e-6eef-40d1-9c1e-3967fc43d052 req-cd33ea3a-3257-4255-8de0-daf6d686ed76 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:44 np0005466031 nova_compute[235803]: 2025-10-02 12:32:44.844 2 DEBUG oslo_concurrency.lockutils [req-bfadf06e-6eef-40d1-9c1e-3967fc43d052 req-cd33ea3a-3257-4255-8de0-daf6d686ed76 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:44 np0005466031 nova_compute[235803]: 2025-10-02 12:32:44.844 2 DEBUG oslo_concurrency.lockutils [req-bfadf06e-6eef-40d1-9c1e-3967fc43d052 req-cd33ea3a-3257-4255-8de0-daf6d686ed76 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:44 np0005466031 nova_compute[235803]: 2025-10-02 12:32:44.844 2 DEBUG oslo_concurrency.lockutils [req-bfadf06e-6eef-40d1-9c1e-3967fc43d052 req-cd33ea3a-3257-4255-8de0-daf6d686ed76 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:44 np0005466031 nova_compute[235803]: 2025-10-02 12:32:44.845 2 DEBUG nova.compute.manager [req-bfadf06e-6eef-40d1-9c1e-3967fc43d052 req-cd33ea3a-3257-4255-8de0-daf6d686ed76 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] No waiting events found dispatching network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:44 np0005466031 nova_compute[235803]: 2025-10-02 12:32:44.845 2 WARNING nova.compute.manager [req-bfadf06e-6eef-40d1-9c1e-3967fc43d052 req-cd33ea3a-3257-4255-8de0-daf6d686ed76 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received unexpected event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac for instance with vm_state active and task_state resize_finish.#033[00m
Oct  2 08:32:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:45.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:46.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:46 np0005466031 nova_compute[235803]: 2025-10-02 12:32:46.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:46 np0005466031 nova_compute[235803]: 2025-10-02 12:32:46.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:46 np0005466031 podman[269178]: 2025-10-02 12:32:46.639495494 +0000 UTC m=+0.061738481 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 08:32:46 np0005466031 nova_compute[235803]: 2025-10-02 12:32:46.659 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:46 np0005466031 nova_compute[235803]: 2025-10-02 12:32:46.660 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:46 np0005466031 nova_compute[235803]: 2025-10-02 12:32:46.660 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:46 np0005466031 nova_compute[235803]: 2025-10-02 12:32:46.661 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:32:46 np0005466031 nova_compute[235803]: 2025-10-02 12:32:46.661 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:46 np0005466031 podman[269179]: 2025-10-02 12:32:46.692491191 +0000 UTC m=+0.116983923 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:32:46 np0005466031 nova_compute[235803]: 2025-10-02 12:32:46.967 2 DEBUG nova.compute.manager [req-73b0cf7e-21b4-4ca8-b8fd-9db1b3d952ea req-5c6d2a8a-aa11-4928-87e2-7ec4bbdf60a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:46 np0005466031 nova_compute[235803]: 2025-10-02 12:32:46.968 2 DEBUG oslo_concurrency.lockutils [req-73b0cf7e-21b4-4ca8-b8fd-9db1b3d952ea req-5c6d2a8a-aa11-4928-87e2-7ec4bbdf60a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:46 np0005466031 nova_compute[235803]: 2025-10-02 12:32:46.968 2 DEBUG oslo_concurrency.lockutils [req-73b0cf7e-21b4-4ca8-b8fd-9db1b3d952ea req-5c6d2a8a-aa11-4928-87e2-7ec4bbdf60a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:46 np0005466031 nova_compute[235803]: 2025-10-02 12:32:46.969 2 DEBUG oslo_concurrency.lockutils [req-73b0cf7e-21b4-4ca8-b8fd-9db1b3d952ea req-5c6d2a8a-aa11-4928-87e2-7ec4bbdf60a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:46 np0005466031 nova_compute[235803]: 2025-10-02 12:32:46.969 2 DEBUG nova.compute.manager [req-73b0cf7e-21b4-4ca8-b8fd-9db1b3d952ea req-5c6d2a8a-aa11-4928-87e2-7ec4bbdf60a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] No waiting events found dispatching network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:46 np0005466031 nova_compute[235803]: 2025-10-02 12:32:46.969 2 WARNING nova.compute.manager [req-73b0cf7e-21b4-4ca8-b8fd-9db1b3d952ea req-5c6d2a8a-aa11-4928-87e2-7ec4bbdf60a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Received unexpected event network-vif-plugged-2ce66710-95c3-4fa0-999b-b7cf0b722cac for instance with vm_state resized and task_state None.#033[00m
Oct  2 08:32:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:47.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:32:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/513984406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.192 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.254 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.254 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.385 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.386 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4624MB free_disk=20.845951080322266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.386 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.386 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.417 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Migration for instance b6c2a016-125f-4f83-a284-5a2d50805121 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.436 2 INFO nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Updating resource usage from migration 86dbf808-c068-4f96-81df-2af5c322181a#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.436 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Starting to track outgoing migration 86dbf808-c068-4f96-81df-2af5c322181a with flavor 99c52872-4e37-4be3-86cc-757b8f375aa8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.473 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Migration 86dbf808-c068-4f96-81df-2af5c322181a is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.474 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.474 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.534 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:32:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/381182074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.981 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:47 np0005466031 nova_compute[235803]: 2025-10-02 12:32:47.987 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:32:48 np0005466031 nova_compute[235803]: 2025-10-02 12:32:48.006 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:32:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:48.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:48 np0005466031 nova_compute[235803]: 2025-10-02 12:32:48.039 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:32:48 np0005466031 nova_compute[235803]: 2025-10-02 12:32:48.040 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:48 np0005466031 nova_compute[235803]: 2025-10-02 12:32:48.389 2 DEBUG oslo_concurrency.lockutils [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "b6c2a016-125f-4f83-a284-5a2d50805121" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:48 np0005466031 nova_compute[235803]: 2025-10-02 12:32:48.389 2 DEBUG oslo_concurrency.lockutils [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:48 np0005466031 nova_compute[235803]: 2025-10-02 12:32:48.390 2 DEBUG nova.compute.manager [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Going to confirm migration 12 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Oct  2 08:32:48 np0005466031 nova_compute[235803]: 2025-10-02 12:32:48.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.040 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.040 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.040 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.152 2 DEBUG neutronclient.v2_0.client [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 2ce66710-95c3-4fa0-999b-b7cf0b722cac for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.153 2 DEBUG oslo_concurrency.lockutils [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.154 2 DEBUG oslo_concurrency.lockutils [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquired lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.154 2 DEBUG nova.network.neutron [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.154 2 DEBUG nova.objects.instance [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'info_cache' on Instance uuid b6c2a016-125f-4f83-a284-5a2d50805121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:49.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.658 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.659 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:49 np0005466031 nova_compute[235803]: 2025-10-02 12:32:49.659 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:50.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:50 np0005466031 nova_compute[235803]: 2025-10-02 12:32:50.432 2 DEBUG nova.network.neutron [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Updating instance_info_cache with network_info: [{"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:32:50 np0005466031 nova_compute[235803]: 2025-10-02 12:32:50.449 2 DEBUG oslo_concurrency.lockutils [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Releasing lock "refresh_cache-b6c2a016-125f-4f83-a284-5a2d50805121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:32:50 np0005466031 nova_compute[235803]: 2025-10-02 12:32:50.449 2 DEBUG nova.objects.instance [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'migration_context' on Instance uuid b6c2a016-125f-4f83-a284-5a2d50805121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:50 np0005466031 nova_compute[235803]: 2025-10-02 12:32:50.864 2 DEBUG nova.storage.rbd_utils [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] removing snapshot(nova-resize) on rbd image(b6c2a016-125f-4f83-a284-5a2d50805121_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.101 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408356.0990338, b6c2a016-125f-4f83-a284-5a2d50805121 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.101 2 INFO nova.compute.manager [-] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.126 2 DEBUG nova.compute.manager [None req-827b2b67-615f-4f61-82e2-b3966cea0552 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.129 2 DEBUG nova.compute.manager [None req-827b2b67-615f-4f61-82e2-b3966cea0552 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.159 2 INFO nova.compute.manager [None req-827b2b67-615f-4f61-82e2-b3966cea0552 - - - - - -] [instance: b6c2a016-125f-4f83-a284-5a2d50805121] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Oct  2 08:32:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:51.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e244 e244: 3 total, 3 up, 3 in
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.530 2 DEBUG nova.virt.libvirt.vif [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1932213760',display_name='tempest-ServerDiskConfigTestJSON-server-1932213760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1932213760',id=78,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-3bos98gj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:45Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=b6c2a016-125f-4f83-a284-5a2d50805121,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.530 2 DEBUG nova.network.os_vif_util [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "address": "fa:16:3e:35:b2:fa", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce66710-95", "ovs_interfaceid": "2ce66710-95c3-4fa0-999b-b7cf0b722cac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.531 2 DEBUG nova.network.os_vif_util [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.531 2 DEBUG os_vif [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ce66710-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.535 2 INFO os_vif [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:b2:fa,bridge_name='br-int',has_traffic_filtering=True,id=2ce66710-95c3-4fa0-999b-b7cf0b722cac,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce66710-95')#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.536 2 DEBUG oslo_concurrency.lockutils [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.536 2 DEBUG oslo_concurrency.lockutils [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:51 np0005466031 nova_compute[235803]: 2025-10-02 12:32:51.612 2 DEBUG oslo_concurrency.processutils [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:52.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:32:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2482660617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:32:52 np0005466031 nova_compute[235803]: 2025-10-02 12:32:52.071 2 DEBUG oslo_concurrency.processutils [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:52 np0005466031 nova_compute[235803]: 2025-10-02 12:32:52.078 2 DEBUG nova.compute.provider_tree [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:32:52 np0005466031 nova_compute[235803]: 2025-10-02 12:32:52.103 2 DEBUG nova.scheduler.client.report [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:32:52 np0005466031 nova_compute[235803]: 2025-10-02 12:32:52.147 2 DEBUG oslo_concurrency.lockutils [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:52 np0005466031 nova_compute[235803]: 2025-10-02 12:32:52.233 2 INFO nova.scheduler.client.report [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Deleted allocation for migration 86dbf808-c068-4f96-81df-2af5c322181a#033[00m
Oct  2 08:32:52 np0005466031 nova_compute[235803]: 2025-10-02 12:32:52.295 2 DEBUG oslo_concurrency.lockutils [None req-680e0fa4-f7db-4ffa-9c0a-45847e3abe7c 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "b6c2a016-125f-4f83-a284-5a2d50805121" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 3.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:52 np0005466031 nova_compute[235803]: 2025-10-02 12:32:52.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:52 np0005466031 nova_compute[235803]: 2025-10-02 12:32:52.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:32:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:53.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:53 np0005466031 nova_compute[235803]: 2025-10-02 12:32:53.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:53 np0005466031 nova_compute[235803]: 2025-10-02 12:32:53.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:54.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:55.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:55 np0005466031 podman[269352]: 2025-10-02 12:32:55.895100398 +0000 UTC m=+0.056797508 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:32:55 np0005466031 podman[269353]: 2025-10-02 12:32:55.913258071 +0000 UTC m=+0.074487748 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 08:32:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:56.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:56 np0005466031 nova_compute[235803]: 2025-10-02 12:32:56.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:56 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:32:56 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:32:56 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:32:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:32:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:57.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:32:57 np0005466031 nova_compute[235803]: 2025-10-02 12:32:57.988 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "6b2e9f39-f886-4e6d-939e-cba3d731a330" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:57 np0005466031 nova_compute[235803]: 2025-10-02 12:32:57.989 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.010 2 DEBUG nova.compute.manager [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:32:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:58.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.072 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.073 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.079 2 DEBUG nova.virt.hardware [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.079 2 INFO nova.compute.claims [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.122 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.122 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.140 2 DEBUG nova.compute.manager [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.233 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.237 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:32:58 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3780027703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.689 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.695 2 DEBUG nova.compute.provider_tree [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.737 2 DEBUG nova.scheduler.client.report [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.807 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.808 2 DEBUG nova.compute.manager [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.811 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.820 2 DEBUG nova.virt.hardware [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.820 2 INFO nova.compute.claims [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.908 2 DEBUG nova.compute.manager [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.909 2 DEBUG nova.network.neutron [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.939 2 INFO nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.978 2 DEBUG nova.compute.manager [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:32:58 np0005466031 nova_compute[235803]: 2025-10-02 12:32:58.990 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.084 2 DEBUG nova.compute.manager [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.086 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.086 2 INFO nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Creating image(s)#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.116 2 DEBUG nova.storage.rbd_utils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 6b2e9f39-f886-4e6d-939e-cba3d731a330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.147 2 DEBUG nova.storage.rbd_utils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 6b2e9f39-f886-4e6d-939e-cba3d731a330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.173 2 DEBUG nova.storage.rbd_utils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 6b2e9f39-f886-4e6d-939e-cba3d731a330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:59.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.178 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.240 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.242 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.243 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.243 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.338 2 DEBUG nova.storage.rbd_utils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 6b2e9f39-f886-4e6d-939e-cba3d731a330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.341 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 6b2e9f39-f886-4e6d-939e-cba3d731a330_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.399 2 DEBUG nova.policy [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a82e7dc296145a2981f82e64bc5c48e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '812b0ca70f56429383e14031946e37e5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:32:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:32:59 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2511685414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.445 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.450 2 DEBUG nova.compute.provider_tree [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.486 2 DEBUG nova.scheduler.client.report [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.617 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.618 2 DEBUG nova.compute.manager [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.695 2 DEBUG nova.compute.manager [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.696 2 DEBUG nova.network.neutron [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.737 2 INFO nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.786 2 DEBUG nova.compute.manager [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:32:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.960 2 DEBUG nova.compute.manager [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.963 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.963 2 INFO nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Creating image(s)#033[00m
Oct  2 08:32:59 np0005466031 nova_compute[235803]: 2025-10-02 12:32:59.987 2 DEBUG nova.storage.rbd_utils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:00 np0005466031 nova_compute[235803]: 2025-10-02 12:33:00.011 2 DEBUG nova.storage.rbd_utils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:00.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:00 np0005466031 nova_compute[235803]: 2025-10-02 12:33:00.040 2 DEBUG nova.storage.rbd_utils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:00 np0005466031 nova_compute[235803]: 2025-10-02 12:33:00.046 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:00 np0005466031 nova_compute[235803]: 2025-10-02 12:33:00.113 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:00 np0005466031 nova_compute[235803]: 2025-10-02 12:33:00.114 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:00 np0005466031 nova_compute[235803]: 2025-10-02 12:33:00.115 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:00 np0005466031 nova_compute[235803]: 2025-10-02 12:33:00.115 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:00 np0005466031 nova_compute[235803]: 2025-10-02 12:33:00.139 2 DEBUG nova.storage.rbd_utils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:00 np0005466031 nova_compute[235803]: 2025-10-02 12:33:00.143 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:00 np0005466031 nova_compute[235803]: 2025-10-02 12:33:00.539 2 DEBUG nova.policy [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '28d5425714b04888ba9e6112879fae33', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:33:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 e245: 3 total, 3 up, 3 in
Oct  2 08:33:01 np0005466031 nova_compute[235803]: 2025-10-02 12:33:01.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:33:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:01.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:33:01 np0005466031 nova_compute[235803]: 2025-10-02 12:33:01.235 2 DEBUG nova.network.neutron [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Successfully created port: 0661a8d8-df40-4903-ad8c-8e3f7549831e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:33:01 np0005466031 nova_compute[235803]: 2025-10-02 12:33:01.632 2 DEBUG nova.network.neutron [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Successfully created port: 340cc57e-78d6-4616-a26f-486e389d5a21 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:33:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:02.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:02 np0005466031 nova_compute[235803]: 2025-10-02 12:33:02.047 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 6b2e9f39-f886-4e6d-939e-cba3d731a330_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.705s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:02 np0005466031 nova_compute[235803]: 2025-10-02 12:33:02.369 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:02 np0005466031 nova_compute[235803]: 2025-10-02 12:33:02.408 2 DEBUG nova.storage.rbd_utils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] resizing rbd image 6b2e9f39-f886-4e6d-939e-cba3d731a330_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:33:02 np0005466031 nova_compute[235803]: 2025-10-02 12:33:02.545 2 DEBUG nova.storage.rbd_utils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] resizing rbd image 7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:33:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:03.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.340 2 DEBUG nova.objects.instance [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'migration_context' on Instance uuid 7852948a-b6c5-4caa-9077-a5f2f0657f2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.357 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.357 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Ensure instance console log exists: /var/lib/nova/instances/7852948a-b6c5-4caa-9077-a5f2f0657f2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.358 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.359 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.359 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:03.390 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:03.391 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:33:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:03.392 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.436 2 DEBUG nova.network.neutron [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Successfully updated port: 0661a8d8-df40-4903-ad8c-8e3f7549831e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.442 2 DEBUG nova.objects.instance [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b2e9f39-f886-4e6d-939e-cba3d731a330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.522 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.523 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.523 2 DEBUG nova.network.neutron [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.525 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.525 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Ensure instance console log exists: /var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.525 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.526 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.526 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.530 2 DEBUG nova.compute.manager [req-b6bd3ba5-dde5-4e9c-8ab5-1c94f5e01bb2 req-8b6dde6b-0bfe-43f5-9627-4602b2751eae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-changed-0661a8d8-df40-4903-ad8c-8e3f7549831e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.530 2 DEBUG nova.compute.manager [req-b6bd3ba5-dde5-4e9c-8ab5-1c94f5e01bb2 req-8b6dde6b-0bfe-43f5-9627-4602b2751eae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Refreshing instance network info cache due to event network-changed-0661a8d8-df40-4903-ad8c-8e3f7549831e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.530 2 DEBUG oslo_concurrency.lockutils [req-b6bd3ba5-dde5-4e9c-8ab5-1c94f5e01bb2 req-8b6dde6b-0bfe-43f5-9627-4602b2751eae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.613 2 DEBUG nova.network.neutron [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Successfully updated port: 340cc57e-78d6-4616-a26f-486e389d5a21 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.632 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "refresh_cache-7852948a-b6c5-4caa-9077-a5f2f0657f2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.633 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquired lock "refresh_cache-7852948a-b6c5-4caa-9077-a5f2f0657f2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.633 2 DEBUG nova.network.neutron [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.709 2 DEBUG nova.compute.manager [req-f57bb3fa-aab1-4c32-8f67-e3d9787023b3 req-abec5562-5b9d-4b08-9da2-f84acaccbc27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Received event network-changed-340cc57e-78d6-4616-a26f-486e389d5a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.709 2 DEBUG nova.compute.manager [req-f57bb3fa-aab1-4c32-8f67-e3d9787023b3 req-abec5562-5b9d-4b08-9da2-f84acaccbc27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Refreshing instance network info cache due to event network-changed-340cc57e-78d6-4616-a26f-486e389d5a21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.709 2 DEBUG oslo_concurrency.lockutils [req-f57bb3fa-aab1-4c32-8f67-e3d9787023b3 req-abec5562-5b9d-4b08-9da2-f84acaccbc27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-7852948a-b6c5-4caa-9077-a5f2f0657f2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:03 np0005466031 nova_compute[235803]: 2025-10-02 12:33:03.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:04.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:04 np0005466031 nova_compute[235803]: 2025-10-02 12:33:04.502 2 DEBUG nova.network.neutron [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:33:04 np0005466031 nova_compute[235803]: 2025-10-02 12:33:04.519 2 DEBUG nova.network.neutron [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:33:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:33:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:33:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:05.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:06.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.497 2 DEBUG nova.network.neutron [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Updating instance_info_cache with network_info: [{"id": "340cc57e-78d6-4616-a26f-486e389d5a21", "address": "fa:16:3e:ed:17:2b", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340cc57e-78", "ovs_interfaceid": "340cc57e-78d6-4616-a26f-486e389d5a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.563 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Releasing lock "refresh_cache-7852948a-b6c5-4caa-9077-a5f2f0657f2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.564 2 DEBUG nova.compute.manager [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Instance network_info: |[{"id": "340cc57e-78d6-4616-a26f-486e389d5a21", "address": "fa:16:3e:ed:17:2b", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340cc57e-78", "ovs_interfaceid": "340cc57e-78d6-4616-a26f-486e389d5a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.565 2 DEBUG oslo_concurrency.lockutils [req-f57bb3fa-aab1-4c32-8f67-e3d9787023b3 req-abec5562-5b9d-4b08-9da2-f84acaccbc27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-7852948a-b6c5-4caa-9077-a5f2f0657f2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.566 2 DEBUG nova.network.neutron [req-f57bb3fa-aab1-4c32-8f67-e3d9787023b3 req-abec5562-5b9d-4b08-9da2-f84acaccbc27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Refreshing network info cache for port 340cc57e-78d6-4616-a26f-486e389d5a21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.572 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Start _get_guest_xml network_info=[{"id": "340cc57e-78d6-4616-a26f-486e389d5a21", "address": "fa:16:3e:ed:17:2b", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340cc57e-78", "ovs_interfaceid": "340cc57e-78d6-4616-a26f-486e389d5a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.578 2 WARNING nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.584 2 DEBUG nova.virt.libvirt.host [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.585 2 DEBUG nova.virt.libvirt.host [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.589 2 DEBUG nova.virt.libvirt.host [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.590 2 DEBUG nova.virt.libvirt.host [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.592 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.593 2 DEBUG nova.virt.hardware [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.593 2 DEBUG nova.virt.hardware [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.593 2 DEBUG nova.virt.hardware [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.593 2 DEBUG nova.virt.hardware [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.594 2 DEBUG nova.virt.hardware [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.594 2 DEBUG nova.virt.hardware [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.594 2 DEBUG nova.virt.hardware [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.594 2 DEBUG nova.virt.hardware [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.595 2 DEBUG nova.virt.hardware [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.595 2 DEBUG nova.virt.hardware [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.595 2 DEBUG nova.virt.hardware [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.598 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.863 2 DEBUG nova.network.neutron [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updating instance_info_cache with network_info: [{"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.891 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.891 2 DEBUG nova.compute.manager [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Instance network_info: |[{"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.891 2 DEBUG oslo_concurrency.lockutils [req-b6bd3ba5-dde5-4e9c-8ab5-1c94f5e01bb2 req-8b6dde6b-0bfe-43f5-9627-4602b2751eae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.892 2 DEBUG nova.network.neutron [req-b6bd3ba5-dde5-4e9c-8ab5-1c94f5e01bb2 req-8b6dde6b-0bfe-43f5-9627-4602b2751eae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Refreshing network info cache for port 0661a8d8-df40-4903-ad8c-8e3f7549831e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.895 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Start _get_guest_xml network_info=[{"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.899 2 WARNING nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.906 2 DEBUG nova.virt.libvirt.host [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.907 2 DEBUG nova.virt.libvirt.host [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.909 2 DEBUG nova.virt.libvirt.host [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.910 2 DEBUG nova.virt.libvirt.host [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.911 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.911 2 DEBUG nova.virt.hardware [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.911 2 DEBUG nova.virt.hardware [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.912 2 DEBUG nova.virt.hardware [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.912 2 DEBUG nova.virt.hardware [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.912 2 DEBUG nova.virt.hardware [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.912 2 DEBUG nova.virt.hardware [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.913 2 DEBUG nova.virt.hardware [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.913 2 DEBUG nova.virt.hardware [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.913 2 DEBUG nova.virt.hardware [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.914 2 DEBUG nova.virt.hardware [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.914 2 DEBUG nova.virt.hardware [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:33:06 np0005466031 nova_compute[235803]: 2025-10-02 12:33:06.917 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:33:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1683438901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.080 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.101 2 DEBUG nova.storage.rbd_utils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.104 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.003000086s ======
Oct  2 08:33:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:07.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000086s
Oct  2 08:33:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:33:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2923262910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.382 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.403 2 DEBUG nova.storage.rbd_utils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 6b2e9f39-f886-4e6d-939e-cba3d731a330_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.407 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:33:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3684999600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.565 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.566 2 DEBUG nova.virt.libvirt.vif [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1605248076',display_name='tempest-ServerDiskConfigTestJSON-server-1605248076',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1605248076',id=81,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-5t4gmjjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskCo
nfigTestJSON-1782236021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:59Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=7852948a-b6c5-4caa-9077-a5f2f0657f2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "340cc57e-78d6-4616-a26f-486e389d5a21", "address": "fa:16:3e:ed:17:2b", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340cc57e-78", "ovs_interfaceid": "340cc57e-78d6-4616-a26f-486e389d5a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.567 2 DEBUG nova.network.os_vif_util [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "340cc57e-78d6-4616-a26f-486e389d5a21", "address": "fa:16:3e:ed:17:2b", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340cc57e-78", "ovs_interfaceid": "340cc57e-78d6-4616-a26f-486e389d5a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.567 2 DEBUG nova.network.os_vif_util [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:17:2b,bridge_name='br-int',has_traffic_filtering=True,id=340cc57e-78d6-4616-a26f-486e389d5a21,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340cc57e-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.568 2 DEBUG nova.objects.instance [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7852948a-b6c5-4caa-9077-a5f2f0657f2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.593 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <uuid>7852948a-b6c5-4caa-9077-a5f2f0657f2c</uuid>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <name>instance-00000051</name>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1605248076</nova:name>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:33:06</nova:creationTime>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:user uuid="28d5425714b04888ba9e6112879fae33">tempest-ServerDiskConfigTestJSON-1782236021-project-member</nova:user>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:project uuid="6b5045a3aa3e42e6b66e2ec8c6bb5810">tempest-ServerDiskConfigTestJSON-1782236021</nova:project>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:port uuid="340cc57e-78d6-4616-a26f-486e389d5a21">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <entry name="serial">7852948a-b6c5-4caa-9077-a5f2f0657f2c</entry>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <entry name="uuid">7852948a-b6c5-4caa-9077-a5f2f0657f2c</entry>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk.config">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:ed:17:2b"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <target dev="tap340cc57e-78"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/7852948a-b6c5-4caa-9077-a5f2f0657f2c/console.log" append="off"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:33:07 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:33:07 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.595 2 DEBUG nova.compute.manager [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Preparing to wait for external event network-vif-plugged-340cc57e-78d6-4616-a26f-486e389d5a21 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.595 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.595 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.595 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.596 2 DEBUG nova.virt.libvirt.vif [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1605248076',display_name='tempest-ServerDiskConfigTestJSON-server-1605248076',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1605248076',id=81,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-5t4gmjjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-Se
rverDiskConfigTestJSON-1782236021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:59Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=7852948a-b6c5-4caa-9077-a5f2f0657f2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "340cc57e-78d6-4616-a26f-486e389d5a21", "address": "fa:16:3e:ed:17:2b", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340cc57e-78", "ovs_interfaceid": "340cc57e-78d6-4616-a26f-486e389d5a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.596 2 DEBUG nova.network.os_vif_util [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "340cc57e-78d6-4616-a26f-486e389d5a21", "address": "fa:16:3e:ed:17:2b", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340cc57e-78", "ovs_interfaceid": "340cc57e-78d6-4616-a26f-486e389d5a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.597 2 DEBUG nova.network.os_vif_util [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:17:2b,bridge_name='br-int',has_traffic_filtering=True,id=340cc57e-78d6-4616-a26f-486e389d5a21,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340cc57e-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.598 2 DEBUG os_vif [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:17:2b,bridge_name='br-int',has_traffic_filtering=True,id=340cc57e-78d6-4616-a26f-486e389d5a21,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340cc57e-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.599 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.599 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap340cc57e-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap340cc57e-78, col_values=(('external_ids', {'iface-id': '340cc57e-78d6-4616-a26f-486e389d5a21', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:17:2b', 'vm-uuid': '7852948a-b6c5-4caa-9077-a5f2f0657f2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:07 np0005466031 NetworkManager[44907]: <info>  [1759408387.6043] manager: (tap340cc57e-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.611 2 INFO os_vif [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:17:2b,bridge_name='br-int',has_traffic_filtering=True,id=340cc57e-78d6-4616-a26f-486e389d5a21,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340cc57e-78')#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.703 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.703 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.704 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No VIF found with MAC fa:16:3e:ed:17:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.704 2 INFO nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Using config drive#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.724 2 DEBUG nova.storage.rbd_utils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:33:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/531094039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.841 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.842 2 DEBUG nova.virt.libvirt.vif [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-206411601',display_name='tempest-tempest.common.compute-instance-206411601',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-206411601',id=80,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-4ya55hgz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=6b2e9f39-f886-4e6d-939e-cba3d731a330,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.843 2 DEBUG nova.network.os_vif_util [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.844 2 DEBUG nova.network.os_vif_util [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:95:70,bridge_name='br-int',has_traffic_filtering=True,id=0661a8d8-df40-4903-ad8c-8e3f7549831e,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0661a8d8-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.845 2 DEBUG nova.objects.instance [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b2e9f39-f886-4e6d-939e-cba3d731a330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.873 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <uuid>6b2e9f39-f886-4e6d-939e-cba3d731a330</uuid>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <name>instance-00000050</name>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:name>tempest-tempest.common.compute-instance-206411601</nova:name>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:33:06</nova:creationTime>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <nova:port uuid="0661a8d8-df40-4903-ad8c-8e3f7549831e">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <entry name="serial">6b2e9f39-f886-4e6d-939e-cba3d731a330</entry>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <entry name="uuid">6b2e9f39-f886-4e6d-939e-cba3d731a330</entry>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/6b2e9f39-f886-4e6d-939e-cba3d731a330_disk">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/6b2e9f39-f886-4e6d-939e-cba3d731a330_disk.config">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:80:95:70"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <target dev="tap0661a8d8-df"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330/console.log" append="off"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:33:07 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:33:07 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:33:07 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:33:07 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.874 2 DEBUG nova.compute.manager [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Preparing to wait for external event network-vif-plugged-0661a8d8-df40-4903-ad8c-8e3f7549831e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.874 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.874 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.874 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.875 2 DEBUG nova.virt.libvirt.vif [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-206411601',display_name='tempest-tempest.common.compute-instance-206411601',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-206411601',id=80,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-4ya55hgz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=6b2e9f39-f886-4e6d-939e-cba3d731a330,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.875 2 DEBUG nova.network.os_vif_util [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.876 2 DEBUG nova.network.os_vif_util [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:95:70,bridge_name='br-int',has_traffic_filtering=True,id=0661a8d8-df40-4903-ad8c-8e3f7549831e,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0661a8d8-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.876 2 DEBUG os_vif [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:95:70,bridge_name='br-int',has_traffic_filtering=True,id=0661a8d8-df40-4903-ad8c-8e3f7549831e,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0661a8d8-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.879 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0661a8d8-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.879 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0661a8d8-df, col_values=(('external_ids', {'iface-id': '0661a8d8-df40-4903-ad8c-8e3f7549831e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:95:70', 'vm-uuid': '6b2e9f39-f886-4e6d-939e-cba3d731a330'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:07 np0005466031 NetworkManager[44907]: <info>  [1759408387.8820] manager: (tap0661a8d8-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.888 2 INFO os_vif [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:95:70,bridge_name='br-int',has_traffic_filtering=True,id=0661a8d8-df40-4903-ad8c-8e3f7549831e,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0661a8d8-df')#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.974 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.974 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.975 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:80:95:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.975 2 INFO nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Using config drive#033[00m
Oct  2 08:33:07 np0005466031 nova_compute[235803]: 2025-10-02 12:33:07.996 2 DEBUG nova.storage.rbd_utils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 6b2e9f39-f886-4e6d-939e-cba3d731a330_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:08.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:08 np0005466031 nova_compute[235803]: 2025-10-02 12:33:08.508 2 INFO nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Creating config drive at /var/lib/nova/instances/7852948a-b6c5-4caa-9077-a5f2f0657f2c/disk.config#033[00m
Oct  2 08:33:08 np0005466031 nova_compute[235803]: 2025-10-02 12:33:08.519 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7852948a-b6c5-4caa-9077-a5f2f0657f2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj8pdtff1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:08 np0005466031 nova_compute[235803]: 2025-10-02 12:33:08.657 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7852948a-b6c5-4caa-9077-a5f2f0657f2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj8pdtff1" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:08 np0005466031 nova_compute[235803]: 2025-10-02 12:33:08.678 2 DEBUG nova.storage.rbd_utils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:08 np0005466031 nova_compute[235803]: 2025-10-02 12:33:08.680 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7852948a-b6c5-4caa-9077-a5f2f0657f2c/disk.config 7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:08 np0005466031 nova_compute[235803]: 2025-10-02 12:33:08.738 2 INFO nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Creating config drive at /var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330/disk.config#033[00m
Oct  2 08:33:08 np0005466031 nova_compute[235803]: 2025-10-02 12:33:08.746 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptec13imd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:08 np0005466031 nova_compute[235803]: 2025-10-02 12:33:08.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:08 np0005466031 nova_compute[235803]: 2025-10-02 12:33:08.878 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptec13imd" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:08 np0005466031 nova_compute[235803]: 2025-10-02 12:33:08.901 2 DEBUG nova.storage.rbd_utils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] rbd image 6b2e9f39-f886-4e6d-939e-cba3d731a330_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:08 np0005466031 nova_compute[235803]: 2025-10-02 12:33:08.905 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330/disk.config 6b2e9f39-f886-4e6d-939e-cba3d731a330_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:08 np0005466031 nova_compute[235803]: 2025-10-02 12:33:08.982 2 DEBUG nova.network.neutron [req-f57bb3fa-aab1-4c32-8f67-e3d9787023b3 req-abec5562-5b9d-4b08-9da2-f84acaccbc27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Updated VIF entry in instance network info cache for port 340cc57e-78d6-4616-a26f-486e389d5a21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:33:08 np0005466031 nova_compute[235803]: 2025-10-02 12:33:08.983 2 DEBUG nova.network.neutron [req-f57bb3fa-aab1-4c32-8f67-e3d9787023b3 req-abec5562-5b9d-4b08-9da2-f84acaccbc27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Updating instance_info_cache with network_info: [{"id": "340cc57e-78d6-4616-a26f-486e389d5a21", "address": "fa:16:3e:ed:17:2b", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340cc57e-78", "ovs_interfaceid": "340cc57e-78d6-4616-a26f-486e389d5a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.006 2 DEBUG oslo_concurrency.lockutils [req-f57bb3fa-aab1-4c32-8f67-e3d9787023b3 req-abec5562-5b9d-4b08-9da2-f84acaccbc27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-7852948a-b6c5-4caa-9077-a5f2f0657f2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:09.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.519 2 DEBUG oslo_concurrency.processutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7852948a-b6c5-4caa-9077-a5f2f0657f2c/disk.config 7852948a-b6c5-4caa-9077-a5f2f0657f2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.839s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.520 2 INFO nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Deleting local config drive /var/lib/nova/instances/7852948a-b6c5-4caa-9077-a5f2f0657f2c/disk.config because it was imported into RBD.#033[00m
Oct  2 08:33:09 np0005466031 kernel: tap340cc57e-78: entered promiscuous mode
Oct  2 08:33:09 np0005466031 NetworkManager[44907]: <info>  [1759408389.5714] manager: (tap340cc57e-78): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:09 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:09Z|00260|binding|INFO|Claiming lport 340cc57e-78d6-4616-a26f-486e389d5a21 for this chassis.
Oct  2 08:33:09 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:09Z|00261|binding|INFO|340cc57e-78d6-4616-a26f-486e389d5a21: Claiming fa:16:3e:ed:17:2b 10.100.0.4
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.581 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:17:2b 10.100.0.4'], port_security=['fa:16:3e:ed:17:2b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7852948a-b6c5-4caa-9077-a5f2f0657f2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=340cc57e-78d6-4616-a26f-486e389d5a21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.582 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 340cc57e-78d6-4616-a26f-486e389d5a21 in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 bound to our chassis#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.583 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711#033[00m
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.595 2 DEBUG nova.network.neutron [req-b6bd3ba5-dde5-4e9c-8ab5-1c94f5e01bb2 req-8b6dde6b-0bfe-43f5-9627-4602b2751eae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updated VIF entry in instance network info cache for port 0661a8d8-df40-4903-ad8c-8e3f7549831e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.596 2 DEBUG nova.network.neutron [req-b6bd3ba5-dde5-4e9c-8ab5-1c94f5e01bb2 req-8b6dde6b-0bfe-43f5-9627-4602b2751eae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updating instance_info_cache with network_info: [{"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.596 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[dbd576fb-50e0-42ff-95f8-ef15c7badc4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.597 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape21cd6a6-f1 in ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:33:09 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:09Z|00262|binding|INFO|Setting lport 340cc57e-78d6-4616-a26f-486e389d5a21 ovn-installed in OVS
Oct  2 08:33:09 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:09Z|00263|binding|INFO|Setting lport 340cc57e-78d6-4616-a26f-486e389d5a21 up in Southbound
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:09 np0005466031 systemd-udevd[270244]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.599 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape21cd6a6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.599 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[05275c57-393b-46c7-b939-f98dc0d9c228]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.600 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[02d00c4f-edca-4c3f-9266-eaf8eed7738a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 systemd-machined[192227]: New machine qemu-31-instance-00000051.
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:09 np0005466031 NetworkManager[44907]: <info>  [1759408389.6109] device (tap340cc57e-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:33:09 np0005466031 NetworkManager[44907]: <info>  [1759408389.6118] device (tap340cc57e-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.612 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[dec38027-11da-4d34-bb19-36fceb594bad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 systemd[1]: Started Virtual Machine qemu-31-instance-00000051.
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.620 2 DEBUG oslo_concurrency.lockutils [req-b6bd3ba5-dde5-4e9c-8ab5-1c94f5e01bb2 req-8b6dde6b-0bfe-43f5-9627-4602b2751eae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.637 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6e311b55-3f51-48f0-b175-4d841099921e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.664 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a6aa69-ca63-4d2f-b64c-7d69a7cc7849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.669 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d60c5b-3b98-440d-92d1-dd7995350783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 systemd-udevd[270248]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:33:09 np0005466031 NetworkManager[44907]: <info>  [1759408389.6703] manager: (tape21cd6a6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/135)
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.697 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[98a9d334-40a7-4c6d-84e3-78698c4abab7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.700 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[99a35b89-4638-44d4-80b4-4a9b6742c9f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 NetworkManager[44907]: <info>  [1759408389.7189] device (tape21cd6a6-f0): carrier: link connected
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.725 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[73a8fb2a-c0ff-4a13-a7b3-35fee4ca22e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.738 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[88c94164-3de5-4ad8-9ba2-7ded15b8ff76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624531, 'reachable_time': 26587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270280, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.754 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fabc3409-3733-4ba9-9c6e-7b9421105026]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:30ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624531, 'tstamp': 624531}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270281, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.769 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[621ac323-f933-4ff4-961f-dc0a4fe97930]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624531, 'reachable_time': 26587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270282, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.797 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b991f21b-7871-40e5-a2f2-d5c64e729513]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.850 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8db2d020-2179-4f94-a09b-8a69ec9f3e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.851 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.852 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.852 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape21cd6a6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:09 np0005466031 NetworkManager[44907]: <info>  [1759408389.8545] manager: (tape21cd6a6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Oct  2 08:33:09 np0005466031 kernel: tape21cd6a6-f0: entered promiscuous mode
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.860 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape21cd6a6-f0, col_values=(('external_ids', {'iface-id': '155c8aeb-2b8a-439c-8558-741aa183fa54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:09 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:09Z|00264|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.882 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.883 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2cfa56d9-3319-408a-be28-586320353415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.884 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:33:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:09.885 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'env', 'PROCESS_TAG=haproxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:33:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.907 2 DEBUG nova.compute.manager [req-ca3a0620-6e9d-435f-8489-6f67da5dfcd0 req-b9f34746-7129-4855-bf31-0ae954d46ec1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Received event network-vif-plugged-340cc57e-78d6-4616-a26f-486e389d5a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.907 2 DEBUG oslo_concurrency.lockutils [req-ca3a0620-6e9d-435f-8489-6f67da5dfcd0 req-b9f34746-7129-4855-bf31-0ae954d46ec1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.907 2 DEBUG oslo_concurrency.lockutils [req-ca3a0620-6e9d-435f-8489-6f67da5dfcd0 req-b9f34746-7129-4855-bf31-0ae954d46ec1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.908 2 DEBUG oslo_concurrency.lockutils [req-ca3a0620-6e9d-435f-8489-6f67da5dfcd0 req-b9f34746-7129-4855-bf31-0ae954d46ec1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:09 np0005466031 nova_compute[235803]: 2025-10-02 12:33:09.908 2 DEBUG nova.compute.manager [req-ca3a0620-6e9d-435f-8489-6f67da5dfcd0 req-b9f34746-7129-4855-bf31-0ae954d46ec1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Processing event network-vif-plugged-340cc57e-78d6-4616-a26f-486e389d5a21 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:33:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:10.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.257 2 DEBUG oslo_concurrency.processutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330/disk.config 6b2e9f39-f886-4e6d-939e-cba3d731a330_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.259 2 INFO nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Deleting local config drive /var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330/disk.config because it was imported into RBD.#033[00m
Oct  2 08:33:10 np0005466031 podman[270362]: 2025-10-02 12:33:10.276481836 +0000 UTC m=+0.070639267 container create 9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:33:10 np0005466031 kernel: tap0661a8d8-df: entered promiscuous mode
Oct  2 08:33:10 np0005466031 NetworkManager[44907]: <info>  [1759408390.3089] manager: (tap0661a8d8-df): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Oct  2 08:33:10 np0005466031 systemd-udevd[270271]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:10Z|00265|binding|INFO|Claiming lport 0661a8d8-df40-4903-ad8c-8e3f7549831e for this chassis.
Oct  2 08:33:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:10Z|00266|binding|INFO|0661a8d8-df40-4903-ad8c-8e3f7549831e: Claiming fa:16:3e:80:95:70 10.100.0.12
Oct  2 08:33:10 np0005466031 systemd[1]: Started libpod-conmon-9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f.scope.
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:10 np0005466031 podman[270362]: 2025-10-02 12:33:10.229025358 +0000 UTC m=+0.023182819 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:33:10 np0005466031 NetworkManager[44907]: <info>  [1759408390.3268] device (tap0661a8d8-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:33:10 np0005466031 NetworkManager[44907]: <info>  [1759408390.3276] device (tap0661a8d8-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.344 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:95:70 10.100.0.12'], port_security=['fa:16:3e:80:95:70 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6b2e9f39-f886-4e6d-939e-cba3d731a330', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3852fde4-27af-4b26-ab2c-21696f5fd593', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=0661a8d8-df40-4903-ad8c-8e3f7549831e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:10 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:33:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:10Z|00267|binding|INFO|Setting lport 0661a8d8-df40-4903-ad8c-8e3f7549831e ovn-installed in OVS
Oct  2 08:33:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:10Z|00268|binding|INFO|Setting lport 0661a8d8-df40-4903-ad8c-8e3f7549831e up in Southbound
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:10 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2cd9ce5d75752756064f9a4f3c1d47d86714be16ba6e6a2bc21aa618f75300e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:33:10 np0005466031 systemd-machined[192227]: New machine qemu-32-instance-00000050.
Oct  2 08:33:10 np0005466031 systemd[1]: Started Virtual Machine qemu-32-instance-00000050.
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.426 2 DEBUG nova.compute.manager [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.427 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408390.4260561, 7852948a-b6c5-4caa-9077-a5f2f0657f2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.427 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] VM Started (Lifecycle Event)#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.433 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.436 2 INFO nova.virt.libvirt.driver [-] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Instance spawned successfully.#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.436 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.462 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.466 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.469 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.470 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.470 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.470 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.471 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.471 2 DEBUG nova.virt.libvirt.driver [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:10 np0005466031 podman[270362]: 2025-10-02 12:33:10.476048518 +0000 UTC m=+0.270205959 container init 9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:33:10 np0005466031 podman[270362]: 2025-10-02 12:33:10.483994817 +0000 UTC m=+0.278152248 container start 9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.502 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.502 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408390.428683, 7852948a-b6c5-4caa-9077-a5f2f0657f2c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.503 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:33:10 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[270389]: [NOTICE]   (270402) : New worker (270404) forked
Oct  2 08:33:10 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[270389]: [NOTICE]   (270402) : Loading success.
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.546 2 INFO nova.compute.manager [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Took 10.58 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.547 2 DEBUG nova.compute.manager [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.557 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.559 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408390.4306335, 7852948a-b6c5-4caa-9077-a5f2f0657f2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.560 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.605 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.608 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.616 2 INFO nova.compute.manager [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Took 12.41 seconds to build instance.#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.625 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 0661a8d8-df40-4903-ad8c-8e3f7549831e in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf unbound from our chassis#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.627 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.629 2 DEBUG nova.compute.manager [req-e5a479f8-f128-4bf0-b054-369e9e7d59a3 req-95aeaaa4-ae0b-45ab-a2b2-a2d4b86f0652 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-vif-plugged-0661a8d8-df40-4903-ad8c-8e3f7549831e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.629 2 DEBUG oslo_concurrency.lockutils [req-e5a479f8-f128-4bf0-b054-369e9e7d59a3 req-95aeaaa4-ae0b-45ab-a2b2-a2d4b86f0652 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.629 2 DEBUG oslo_concurrency.lockutils [req-e5a479f8-f128-4bf0-b054-369e9e7d59a3 req-95aeaaa4-ae0b-45ab-a2b2-a2d4b86f0652 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.629 2 DEBUG oslo_concurrency.lockutils [req-e5a479f8-f128-4bf0-b054-369e9e7d59a3 req-95aeaaa4-ae0b-45ab-a2b2-a2d4b86f0652 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.629 2 DEBUG nova.compute.manager [req-e5a479f8-f128-4bf0-b054-369e9e7d59a3 req-95aeaaa4-ae0b-45ab-a2b2-a2d4b86f0652 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Processing event network-vif-plugged-0661a8d8-df40-4903-ad8c-8e3f7549831e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.638 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c358c2a4-dbd4-484b-b8fc-a18df84c1b98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.639 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a187d8a-71 in ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.641 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a187d8a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.641 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3e9b04-3d26-4d3c-886a-c06726ce7656]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.642 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[661e1863-262a-4d8d-b931-047c12d6be05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.642 2 DEBUG oslo_concurrency.lockutils [None req-a4373cbf-e84a-4267-a5be-955721167ef0 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.657 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f7cfd1-a46c-4698-ab73-38c484e2bac6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.672 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[628d67d7-5069-47d3-a815-f9ec7374bb83]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.700 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c92d9c08-e08e-4e3e-81b2-c13b432d0e11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 NetworkManager[44907]: <info>  [1759408390.7068] manager: (tap6a187d8a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/138)
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.706 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b07e4294-1c73-4597-ba28-0b32ecc48881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.738 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[fd90352e-9567-4558-9ae0-e65c3e49dbb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.741 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b24ae5-a46e-4aac-9369-4cea4f2cbe73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 NetworkManager[44907]: <info>  [1759408390.7643] device (tap6a187d8a-70): carrier: link connected
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.772 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[5a70753d-7857-4121-bf7a-eb132ad1179f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.789 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3b337fca-e9f6-42e6-8b6c-24dc43526403]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624635, 'reachable_time': 37121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270423, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.803 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e01b3f12-1ba3-4040-81d4-30f12a80367d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:e868'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624635, 'tstamp': 624635}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270424, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.822 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1dac6258-d788-4fba-a4a6-10037ff54632]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624635, 'reachable_time': 37121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270425, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.851 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ac088339-9831-4094-b459-692a5932ef51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.899 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ae12cbb2-3a60-49ef-8ec2-6a4d8ea5cc68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.900 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.900 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.901 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:10 np0005466031 NetworkManager[44907]: <info>  [1759408390.9034] manager: (tap6a187d8a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Oct  2 08:33:10 np0005466031 kernel: tap6a187d8a-70: entered promiscuous mode
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.912 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:10Z|00269|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.923 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a187d8a-77c6-4b27-bb13-654f471c1faf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a187d8a-77c6-4b27-bb13-654f471c1faf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.923 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fa07ab1a-2921-4832-b711-d54ed5457c77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.924 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/6a187d8a-77c6-4b27-bb13-654f471c1faf.pid.haproxy
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:33:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:10.924 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'env', 'PROCESS_TAG=haproxy-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a187d8a-77c6-4b27-bb13-654f471c1faf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:33:10 np0005466031 nova_compute[235803]: 2025-10-02 12:33:10.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:11.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:11 np0005466031 podman[270491]: 2025-10-02 12:33:11.266567623 +0000 UTC m=+0.023584631 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:33:11 np0005466031 podman[270491]: 2025-10-02 12:33:11.563855391 +0000 UTC m=+0.320872369 container create 2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:33:11 np0005466031 systemd[1]: Started libpod-conmon-2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73.scope.
Oct  2 08:33:11 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:33:11 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ab250798383c8ff34f67e14a97298a2c61959a7882292a924150bd04dc2dfd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.740 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408391.73999, 6b2e9f39-f886-4e6d-939e-cba3d731a330 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.741 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] VM Started (Lifecycle Event)#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.742 2 DEBUG nova.compute.manager [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.758 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.761 2 INFO nova.virt.libvirt.driver [-] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Instance spawned successfully.#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.762 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.787 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.790 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.790 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.790 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.791 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.791 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.791 2 DEBUG nova.virt.libvirt.driver [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.795 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:33:11 np0005466031 podman[270491]: 2025-10-02 12:33:11.813476766 +0000 UTC m=+0.570493764 container init 2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:33:11 np0005466031 podman[270491]: 2025-10-02 12:33:11.819760156 +0000 UTC m=+0.576777134 container start 2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.837 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.837 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408391.7400942, 6b2e9f39-f886-4e6d-939e-cba3d731a330 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.837 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:33:11 np0005466031 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[270511]: [NOTICE]   (270516) : New worker (270518) forked
Oct  2 08:33:11 np0005466031 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[270511]: [NOTICE]   (270516) : Loading success.
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.856 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.858 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408391.744615, 6b2e9f39-f886-4e6d-939e-cba3d731a330 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.859 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.879 2 INFO nova.compute.manager [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Took 12.79 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.880 2 DEBUG nova.compute.manager [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.880 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.888 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.940 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.958 2 INFO nova.compute.manager [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Took 13.91 seconds to build instance.#033[00m
Oct  2 08:33:11 np0005466031 nova_compute[235803]: 2025-10-02 12:33:11.976 2 DEBUG oslo_concurrency.lockutils [None req-715988df-20d6-4dcc-89ad-5490a3b31cf8 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:33:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:12.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:33:12 np0005466031 nova_compute[235803]: 2025-10-02 12:33:12.765 2 DEBUG nova.compute.manager [req-04e424ef-1187-4929-b289-cd382de55133 req-f7866c50-c2fa-4f9f-af97-71582369625b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-vif-plugged-0661a8d8-df40-4903-ad8c-8e3f7549831e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:12 np0005466031 nova_compute[235803]: 2025-10-02 12:33:12.765 2 DEBUG oslo_concurrency.lockutils [req-04e424ef-1187-4929-b289-cd382de55133 req-f7866c50-c2fa-4f9f-af97-71582369625b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:12 np0005466031 nova_compute[235803]: 2025-10-02 12:33:12.765 2 DEBUG oslo_concurrency.lockutils [req-04e424ef-1187-4929-b289-cd382de55133 req-f7866c50-c2fa-4f9f-af97-71582369625b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:12 np0005466031 nova_compute[235803]: 2025-10-02 12:33:12.765 2 DEBUG oslo_concurrency.lockutils [req-04e424ef-1187-4929-b289-cd382de55133 req-f7866c50-c2fa-4f9f-af97-71582369625b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:12 np0005466031 nova_compute[235803]: 2025-10-02 12:33:12.765 2 DEBUG nova.compute.manager [req-04e424ef-1187-4929-b289-cd382de55133 req-f7866c50-c2fa-4f9f-af97-71582369625b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] No waiting events found dispatching network-vif-plugged-0661a8d8-df40-4903-ad8c-8e3f7549831e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:12 np0005466031 nova_compute[235803]: 2025-10-02 12:33:12.766 2 WARNING nova.compute.manager [req-04e424ef-1187-4929-b289-cd382de55133 req-f7866c50-c2fa-4f9f-af97-71582369625b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received unexpected event network-vif-plugged-0661a8d8-df40-4903-ad8c-8e3f7549831e for instance with vm_state active and task_state None.#033[00m
Oct  2 08:33:12 np0005466031 nova_compute[235803]: 2025-10-02 12:33:12.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:13.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:13 np0005466031 nova_compute[235803]: 2025-10-02 12:33:13.633 2 DEBUG nova.compute.manager [req-715b6140-a04d-4896-a47d-d1a23444cb66 req-1cce9823-3ff3-45f3-ac10-341414c4ecc8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Received event network-vif-plugged-340cc57e-78d6-4616-a26f-486e389d5a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:13 np0005466031 nova_compute[235803]: 2025-10-02 12:33:13.633 2 DEBUG oslo_concurrency.lockutils [req-715b6140-a04d-4896-a47d-d1a23444cb66 req-1cce9823-3ff3-45f3-ac10-341414c4ecc8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:13 np0005466031 nova_compute[235803]: 2025-10-02 12:33:13.634 2 DEBUG oslo_concurrency.lockutils [req-715b6140-a04d-4896-a47d-d1a23444cb66 req-1cce9823-3ff3-45f3-ac10-341414c4ecc8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:13 np0005466031 nova_compute[235803]: 2025-10-02 12:33:13.634 2 DEBUG oslo_concurrency.lockutils [req-715b6140-a04d-4896-a47d-d1a23444cb66 req-1cce9823-3ff3-45f3-ac10-341414c4ecc8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:13 np0005466031 nova_compute[235803]: 2025-10-02 12:33:13.634 2 DEBUG nova.compute.manager [req-715b6140-a04d-4896-a47d-d1a23444cb66 req-1cce9823-3ff3-45f3-ac10-341414c4ecc8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] No waiting events found dispatching network-vif-plugged-340cc57e-78d6-4616-a26f-486e389d5a21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:13 np0005466031 nova_compute[235803]: 2025-10-02 12:33:13.634 2 WARNING nova.compute.manager [req-715b6140-a04d-4896-a47d-d1a23444cb66 req-1cce9823-3ff3-45f3-ac10-341414c4ecc8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Received unexpected event network-vif-plugged-340cc57e-78d6-4616-a26f-486e389d5a21 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:33:13 np0005466031 nova_compute[235803]: 2025-10-02 12:33:13.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:14.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:15.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:16.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:33:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:17.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:33:17 np0005466031 podman[270579]: 2025-10-02 12:33:17.626302293 +0000 UTC m=+0.054864792 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  2 08:33:17 np0005466031 podman[270580]: 2025-10-02 12:33:17.673469132 +0000 UTC m=+0.102736252 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 08:33:17 np0005466031 nova_compute[235803]: 2025-10-02 12:33:17.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:33:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:18.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:33:18 np0005466031 NetworkManager[44907]: <info>  [1759408398.5447] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Oct  2 08:33:18 np0005466031 NetworkManager[44907]: <info>  [1759408398.5461] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Oct  2 08:33:18 np0005466031 nova_compute[235803]: 2025-10-02 12:33:18.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:18 np0005466031 nova_compute[235803]: 2025-10-02 12:33:18.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:18Z|00270|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct  2 08:33:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:18Z|00271|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct  2 08:33:18 np0005466031 nova_compute[235803]: 2025-10-02 12:33:18.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:18 np0005466031 nova_compute[235803]: 2025-10-02 12:33:18.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.104 2 DEBUG oslo_concurrency.lockutils [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.105 2 DEBUG oslo_concurrency.lockutils [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.106 2 DEBUG oslo_concurrency.lockutils [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.106 2 DEBUG oslo_concurrency.lockutils [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.106 2 DEBUG oslo_concurrency.lockutils [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.107 2 INFO nova.compute.manager [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Terminating instance#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.108 2 DEBUG nova.compute.manager [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:33:19 np0005466031 kernel: tap340cc57e-78 (unregistering): left promiscuous mode
Oct  2 08:33:19 np0005466031 NetworkManager[44907]: <info>  [1759408399.1501] device (tap340cc57e-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:19Z|00272|binding|INFO|Releasing lport 340cc57e-78d6-4616-a26f-486e389d5a21 from this chassis (sb_readonly=0)
Oct  2 08:33:19 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:19Z|00273|binding|INFO|Setting lport 340cc57e-78d6-4616-a26f-486e389d5a21 down in Southbound
Oct  2 08:33:19 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:19Z|00274|binding|INFO|Removing iface tap340cc57e-78 ovn-installed in OVS
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.174 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:17:2b 10.100.0.4'], port_security=['fa:16:3e:ed:17:2b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7852948a-b6c5-4caa-9077-a5f2f0657f2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=340cc57e-78d6-4616-a26f-486e389d5a21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.176 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 340cc57e-78d6-4616-a26f-486e389d5a21 in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 unbound from our chassis#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.177 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.179 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[84ba55fa-cff1-419c-8c17-ca8710b062ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.179 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace which is not needed anymore#033[00m
Oct  2 08:33:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:19.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:19 np0005466031 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000051.scope: Deactivated successfully.
Oct  2 08:33:19 np0005466031 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000051.scope: Consumed 9.498s CPU time.
Oct  2 08:33:19 np0005466031 systemd-machined[192227]: Machine qemu-31-instance-00000051 terminated.
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.343 2 INFO nova.virt.libvirt.driver [-] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Instance destroyed successfully.#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.344 2 DEBUG nova.objects.instance [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'resources' on Instance uuid 7852948a-b6c5-4caa-9077-a5f2f0657f2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:19 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[270389]: [NOTICE]   (270402) : haproxy version is 2.8.14-c23fe91
Oct  2 08:33:19 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[270389]: [NOTICE]   (270402) : path to executable is /usr/sbin/haproxy
Oct  2 08:33:19 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[270389]: [WARNING]  (270402) : Exiting Master process...
Oct  2 08:33:19 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[270389]: [WARNING]  (270402) : Exiting Master process...
Oct  2 08:33:19 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[270389]: [ALERT]    (270402) : Current worker (270404) exited with code 143 (Terminated)
Oct  2 08:33:19 np0005466031 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[270389]: [WARNING]  (270402) : All workers exited. Exiting... (0)
Oct  2 08:33:19 np0005466031 systemd[1]: libpod-9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f.scope: Deactivated successfully.
Oct  2 08:33:19 np0005466031 podman[270648]: 2025-10-02 12:33:19.376671561 +0000 UTC m=+0.109982040 container died 9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.399 2 DEBUG nova.virt.libvirt.vif [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1605248076',display_name='tempest-ServerDiskConfigTestJSON-server-1605248076',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1605248076',id=81,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:10Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-5t4gmjjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:17Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=7852948a-b6c5-4caa-9077-a5f2f0657f2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "340cc57e-78d6-4616-a26f-486e389d5a21", "address": "fa:16:3e:ed:17:2b", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340cc57e-78", "ovs_interfaceid": "340cc57e-78d6-4616-a26f-486e389d5a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.400 2 DEBUG nova.network.os_vif_util [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "340cc57e-78d6-4616-a26f-486e389d5a21", "address": "fa:16:3e:ed:17:2b", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap340cc57e-78", "ovs_interfaceid": "340cc57e-78d6-4616-a26f-486e389d5a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.401 2 DEBUG nova.network.os_vif_util [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:17:2b,bridge_name='br-int',has_traffic_filtering=True,id=340cc57e-78d6-4616-a26f-486e389d5a21,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340cc57e-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:19 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f-userdata-shm.mount: Deactivated successfully.
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.402 2 DEBUG os_vif [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:17:2b,bridge_name='br-int',has_traffic_filtering=True,id=340cc57e-78d6-4616-a26f-486e389d5a21,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340cc57e-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.406 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap340cc57e-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:19 np0005466031 systemd[1]: var-lib-containers-storage-overlay-d2cd9ce5d75752756064f9a4f3c1d47d86714be16ba6e6a2bc21aa618f75300e-merged.mount: Deactivated successfully.
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.412 2 INFO os_vif [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:17:2b,bridge_name='br-int',has_traffic_filtering=True,id=340cc57e-78d6-4616-a26f-486e389d5a21,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap340cc57e-78')#033[00m
Oct  2 08:33:19 np0005466031 podman[270648]: 2025-10-02 12:33:19.421139893 +0000 UTC m=+0.154450392 container cleanup 9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:33:19 np0005466031 systemd[1]: libpod-conmon-9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f.scope: Deactivated successfully.
Oct  2 08:33:19 np0005466031 podman[270704]: 2025-10-02 12:33:19.482473681 +0000 UTC m=+0.041212709 container remove 9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.491 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1ece40-0b66-4e14-81a7-a435d1f8f9fa]: (4, ('Thu Oct  2 12:33:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f)\n9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f\nThu Oct  2 12:33:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f)\n9fc165c0d34e297870b346eb128911beefa36644f4f3f65e762c973dec802f0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.493 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[da9014bb-0df0-4959-92a0-d3af84d3a2de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.494 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466031 kernel: tape21cd6a6-f0: left promiscuous mode
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.504 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9d759f-9ae4-44bd-a80c-ab70db1203f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.520 2 DEBUG nova.compute.manager [req-3acb1c0e-fcca-4c0d-85c3-adeb0e446930 req-d0d06263-fdda-4df9-8d8d-b882d494dbfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Received event network-vif-unplugged-340cc57e-78d6-4616-a26f-486e389d5a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.520 2 DEBUG oslo_concurrency.lockutils [req-3acb1c0e-fcca-4c0d-85c3-adeb0e446930 req-d0d06263-fdda-4df9-8d8d-b882d494dbfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.522 2 DEBUG oslo_concurrency.lockutils [req-3acb1c0e-fcca-4c0d-85c3-adeb0e446930 req-d0d06263-fdda-4df9-8d8d-b882d494dbfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.522 2 DEBUG oslo_concurrency.lockutils [req-3acb1c0e-fcca-4c0d-85c3-adeb0e446930 req-d0d06263-fdda-4df9-8d8d-b882d494dbfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.523 2 DEBUG nova.compute.manager [req-3acb1c0e-fcca-4c0d-85c3-adeb0e446930 req-d0d06263-fdda-4df9-8d8d-b882d494dbfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] No waiting events found dispatching network-vif-unplugged-340cc57e-78d6-4616-a26f-486e389d5a21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:19 np0005466031 nova_compute[235803]: 2025-10-02 12:33:19.523 2 DEBUG nova.compute.manager [req-3acb1c0e-fcca-4c0d-85c3-adeb0e446930 req-d0d06263-fdda-4df9-8d8d-b882d494dbfd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Received event network-vif-unplugged-340cc57e-78d6-4616-a26f-486e389d5a21 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.534 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6c56abcf-2c5a-4b47-bfc8-8d5cb907e8ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.535 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[717e82ee-3edb-47d9-92fc-6ad88823983a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.551 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[765d36da-03e7-4c66-b379-fcc46236c095]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624525, 'reachable_time': 36092, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270723, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.553 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:33:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:19.554 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[d2cd5ad8-dda6-4d87-820e-b35955161612]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466031 systemd[1]: run-netns-ovnmeta\x2de21cd6a6\x2df7fd\x2d48ec\x2d8f87\x2dbbcc167f5711.mount: Deactivated successfully.
Oct  2 08:33:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:20.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:20 np0005466031 nova_compute[235803]: 2025-10-02 12:33:20.071 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-changed-0661a8d8-df40-4903-ad8c-8e3f7549831e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:20 np0005466031 nova_compute[235803]: 2025-10-02 12:33:20.072 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Refreshing instance network info cache due to event network-changed-0661a8d8-df40-4903-ad8c-8e3f7549831e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:33:20 np0005466031 nova_compute[235803]: 2025-10-02 12:33:20.073 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:20 np0005466031 nova_compute[235803]: 2025-10-02 12:33:20.073 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:20 np0005466031 nova_compute[235803]: 2025-10-02 12:33:20.073 2 DEBUG nova.network.neutron [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Refreshing network info cache for port 0661a8d8-df40-4903-ad8c-8e3f7549831e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:33:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:33:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:21.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:33:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:33:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:22.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:33:22 np0005466031 nova_compute[235803]: 2025-10-02 12:33:22.084 2 DEBUG nova.compute.manager [req-4a34a441-9238-45c4-af43-760e4daa983c req-aafc1253-ca41-4dcf-bfa9-5f7fc6d582f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Received event network-vif-plugged-340cc57e-78d6-4616-a26f-486e389d5a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:22 np0005466031 nova_compute[235803]: 2025-10-02 12:33:22.085 2 DEBUG oslo_concurrency.lockutils [req-4a34a441-9238-45c4-af43-760e4daa983c req-aafc1253-ca41-4dcf-bfa9-5f7fc6d582f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:22 np0005466031 nova_compute[235803]: 2025-10-02 12:33:22.085 2 DEBUG oslo_concurrency.lockutils [req-4a34a441-9238-45c4-af43-760e4daa983c req-aafc1253-ca41-4dcf-bfa9-5f7fc6d582f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:22 np0005466031 nova_compute[235803]: 2025-10-02 12:33:22.085 2 DEBUG oslo_concurrency.lockutils [req-4a34a441-9238-45c4-af43-760e4daa983c req-aafc1253-ca41-4dcf-bfa9-5f7fc6d582f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:22 np0005466031 nova_compute[235803]: 2025-10-02 12:33:22.086 2 DEBUG nova.compute.manager [req-4a34a441-9238-45c4-af43-760e4daa983c req-aafc1253-ca41-4dcf-bfa9-5f7fc6d582f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] No waiting events found dispatching network-vif-plugged-340cc57e-78d6-4616-a26f-486e389d5a21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:22 np0005466031 nova_compute[235803]: 2025-10-02 12:33:22.086 2 WARNING nova.compute.manager [req-4a34a441-9238-45c4-af43-760e4daa983c req-aafc1253-ca41-4dcf-bfa9-5f7fc6d582f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Received unexpected event network-vif-plugged-340cc57e-78d6-4616-a26f-486e389d5a21 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:33:22 np0005466031 nova_compute[235803]: 2025-10-02 12:33:22.476 2 DEBUG nova.network.neutron [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updated VIF entry in instance network info cache for port 0661a8d8-df40-4903-ad8c-8e3f7549831e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:33:22 np0005466031 nova_compute[235803]: 2025-10-02 12:33:22.477 2 DEBUG nova.network.neutron [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updating instance_info_cache with network_info: [{"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:22 np0005466031 nova_compute[235803]: 2025-10-02 12:33:22.502 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:33:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:23.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:33:23 np0005466031 nova_compute[235803]: 2025-10-02 12:33:23.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:24.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:24 np0005466031 nova_compute[235803]: 2025-10-02 12:33:24.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:24 np0005466031 nova_compute[235803]: 2025-10-02 12:33:24.850 2 INFO nova.virt.libvirt.driver [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Deleting instance files /var/lib/nova/instances/7852948a-b6c5-4caa-9077-a5f2f0657f2c_del#033[00m
Oct  2 08:33:24 np0005466031 nova_compute[235803]: 2025-10-02 12:33:24.851 2 INFO nova.virt.libvirt.driver [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Deletion of /var/lib/nova/instances/7852948a-b6c5-4caa-9077-a5f2f0657f2c_del complete#033[00m
Oct  2 08:33:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:25.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:25 np0005466031 nova_compute[235803]: 2025-10-02 12:33:25.748 2 INFO nova.compute.manager [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Took 6.64 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:33:25 np0005466031 nova_compute[235803]: 2025-10-02 12:33:25.749 2 DEBUG oslo.service.loopingcall [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:33:25 np0005466031 nova_compute[235803]: 2025-10-02 12:33:25.749 2 DEBUG nova.compute.manager [-] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:33:25 np0005466031 nova_compute[235803]: 2025-10-02 12:33:25.750 2 DEBUG nova.network.neutron [-] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:33:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:25.841 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:25.841 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:25.842 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:26.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:26 np0005466031 podman[270730]: 2025-10-02 12:33:26.645401849 +0000 UTC m=+0.069866935 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:33:26 np0005466031 podman[270731]: 2025-10-02 12:33:26.667863136 +0000 UTC m=+0.093568268 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:33:26 np0005466031 nova_compute[235803]: 2025-10-02 12:33:26.674 2 DEBUG nova.network.neutron [-] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:26 np0005466031 nova_compute[235803]: 2025-10-02 12:33:26.695 2 INFO nova.compute.manager [-] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Took 0.95 seconds to deallocate network for instance.#033[00m
Oct  2 08:33:26 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:26Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:95:70 10.100.0.12
Oct  2 08:33:26 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:26Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:95:70 10.100.0.12
Oct  2 08:33:26 np0005466031 nova_compute[235803]: 2025-10-02 12:33:26.756 2 DEBUG oslo_concurrency.lockutils [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:26 np0005466031 nova_compute[235803]: 2025-10-02 12:33:26.756 2 DEBUG oslo_concurrency.lockutils [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:26 np0005466031 nova_compute[235803]: 2025-10-02 12:33:26.895 2 DEBUG oslo_concurrency.processutils [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:27.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:33:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4036348665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:33:27 np0005466031 nova_compute[235803]: 2025-10-02 12:33:27.407 2 DEBUG oslo_concurrency.processutils [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:27 np0005466031 nova_compute[235803]: 2025-10-02 12:33:27.413 2 DEBUG nova.compute.provider_tree [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:33:27 np0005466031 nova_compute[235803]: 2025-10-02 12:33:27.437 2 DEBUG nova.scheduler.client.report [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:33:27 np0005466031 nova_compute[235803]: 2025-10-02 12:33:27.475 2 DEBUG oslo_concurrency.lockutils [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:27 np0005466031 nova_compute[235803]: 2025-10-02 12:33:27.540 2 INFO nova.scheduler.client.report [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Deleted allocations for instance 7852948a-b6c5-4caa-9077-a5f2f0657f2c#033[00m
Oct  2 08:33:27 np0005466031 nova_compute[235803]: 2025-10-02 12:33:27.655 2 DEBUG oslo_concurrency.lockutils [None req-1ee09cb1-a2b5-4178-b3ad-abcea373a801 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "7852948a-b6c5-4caa-9077-a5f2f0657f2c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:33:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:28.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:33:28 np0005466031 nova_compute[235803]: 2025-10-02 12:33:28.562 2 DEBUG nova.compute.manager [req-4e9f702d-eca3-4386-a068-ce112d9ae115 req-ca4aa7b5-0e86-49d9-8b48-5d446f73a653 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Received event network-vif-deleted-340cc57e-78d6-4616-a26f-486e389d5a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:28 np0005466031 nova_compute[235803]: 2025-10-02 12:33:28.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:29.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:29 np0005466031 nova_compute[235803]: 2025-10-02 12:33:29.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:30.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:31.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:32.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:33.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:33 np0005466031 nova_compute[235803]: 2025-10-02 12:33:33.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:34.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:34 np0005466031 nova_compute[235803]: 2025-10-02 12:33:34.343 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408399.3414087, 7852948a-b6c5-4caa-9077-a5f2f0657f2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:34 np0005466031 nova_compute[235803]: 2025-10-02 12:33:34.343 2 INFO nova.compute.manager [-] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:33:34 np0005466031 nova_compute[235803]: 2025-10-02 12:33:34.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:35.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:35 np0005466031 nova_compute[235803]: 2025-10-02 12:33:35.255 2 DEBUG nova.compute.manager [None req-f44abe6e-56cc-429e-96b5-fd681218f8e5 - - - - - -] [instance: 7852948a-b6c5-4caa-9077-a5f2f0657f2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:36.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:37.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:37 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:37Z|00275|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct  2 08:33:37 np0005466031 nova_compute[235803]: 2025-10-02 12:33:37.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:33:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:38.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:33:38 np0005466031 nova_compute[235803]: 2025-10-02 12:33:38.404 2 DEBUG nova.compute.manager [req-cd3ab684-1f1e-4e74-a904-9912643e75da req-b354fad6-e58a-4acc-898e-3fefa8203b80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-changed-0661a8d8-df40-4903-ad8c-8e3f7549831e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:38 np0005466031 nova_compute[235803]: 2025-10-02 12:33:38.405 2 DEBUG nova.compute.manager [req-cd3ab684-1f1e-4e74-a904-9912643e75da req-b354fad6-e58a-4acc-898e-3fefa8203b80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Refreshing instance network info cache due to event network-changed-0661a8d8-df40-4903-ad8c-8e3f7549831e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:33:38 np0005466031 nova_compute[235803]: 2025-10-02 12:33:38.405 2 DEBUG oslo_concurrency.lockutils [req-cd3ab684-1f1e-4e74-a904-9912643e75da req-b354fad6-e58a-4acc-898e-3fefa8203b80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:38 np0005466031 nova_compute[235803]: 2025-10-02 12:33:38.405 2 DEBUG oslo_concurrency.lockutils [req-cd3ab684-1f1e-4e74-a904-9912643e75da req-b354fad6-e58a-4acc-898e-3fefa8203b80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:38 np0005466031 nova_compute[235803]: 2025-10-02 12:33:38.405 2 DEBUG nova.network.neutron [req-cd3ab684-1f1e-4e74-a904-9912643e75da req-b354fad6-e58a-4acc-898e-3fefa8203b80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Refreshing network info cache for port 0661a8d8-df40-4903-ad8c-8e3f7549831e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:33:38 np0005466031 nova_compute[235803]: 2025-10-02 12:33:38.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:39.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:39 np0005466031 nova_compute[235803]: 2025-10-02 12:33:39.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:40.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:40 np0005466031 nova_compute[235803]: 2025-10-02 12:33:40.795 2 DEBUG nova.network.neutron [req-cd3ab684-1f1e-4e74-a904-9912643e75da req-b354fad6-e58a-4acc-898e-3fefa8203b80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updated VIF entry in instance network info cache for port 0661a8d8-df40-4903-ad8c-8e3f7549831e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:33:40 np0005466031 nova_compute[235803]: 2025-10-02 12:33:40.795 2 DEBUG nova.network.neutron [req-cd3ab684-1f1e-4e74-a904-9912643e75da req-b354fad6-e58a-4acc-898e-3fefa8203b80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updating instance_info_cache with network_info: [{"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:40 np0005466031 nova_compute[235803]: 2025-10-02 12:33:40.833 2 DEBUG oslo_concurrency.lockutils [req-cd3ab684-1f1e-4e74-a904-9912643e75da req-b354fad6-e58a-4acc-898e-3fefa8203b80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:33:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:41.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:33:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:42.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:33:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:43.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:33:43 np0005466031 nova_compute[235803]: 2025-10-02 12:33:43.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:44 np0005466031 nova_compute[235803]: 2025-10-02 12:33:44.044 2 DEBUG oslo_concurrency.lockutils [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "interface-6b2e9f39-f886-4e6d-939e-cba3d731a330-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:44 np0005466031 nova_compute[235803]: 2025-10-02 12:33:44.045 2 DEBUG oslo_concurrency.lockutils [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-6b2e9f39-f886-4e6d-939e-cba3d731a330-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:44 np0005466031 nova_compute[235803]: 2025-10-02 12:33:44.046 2 DEBUG nova.objects.instance [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'flavor' on Instance uuid 6b2e9f39-f886-4e6d-939e-cba3d731a330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:44.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:44 np0005466031 nova_compute[235803]: 2025-10-02 12:33:44.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:44 np0005466031 nova_compute[235803]: 2025-10-02 12:33:44.747 2 DEBUG nova.compute.manager [req-7561d1c4-383f-461c-a7ef-df0e4615e67a req-a1e20a14-fc0c-436a-99e2-ce4bbe9f8f3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-changed-0661a8d8-df40-4903-ad8c-8e3f7549831e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:44 np0005466031 nova_compute[235803]: 2025-10-02 12:33:44.747 2 DEBUG nova.compute.manager [req-7561d1c4-383f-461c-a7ef-df0e4615e67a req-a1e20a14-fc0c-436a-99e2-ce4bbe9f8f3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Refreshing instance network info cache due to event network-changed-0661a8d8-df40-4903-ad8c-8e3f7549831e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:33:44 np0005466031 nova_compute[235803]: 2025-10-02 12:33:44.748 2 DEBUG oslo_concurrency.lockutils [req-7561d1c4-383f-461c-a7ef-df0e4615e67a req-a1e20a14-fc0c-436a-99e2-ce4bbe9f8f3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:44 np0005466031 nova_compute[235803]: 2025-10-02 12:33:44.748 2 DEBUG oslo_concurrency.lockutils [req-7561d1c4-383f-461c-a7ef-df0e4615e67a req-a1e20a14-fc0c-436a-99e2-ce4bbe9f8f3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:44 np0005466031 nova_compute[235803]: 2025-10-02 12:33:44.748 2 DEBUG nova.network.neutron [req-7561d1c4-383f-461c-a7ef-df0e4615e67a req-a1e20a14-fc0c-436a-99e2-ce4bbe9f8f3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Refreshing network info cache for port 0661a8d8-df40-4903-ad8c-8e3f7549831e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:33:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:45 np0005466031 nova_compute[235803]: 2025-10-02 12:33:45.057 2 DEBUG nova.objects.instance [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 6b2e9f39-f886-4e6d-939e-cba3d731a330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:45 np0005466031 nova_compute[235803]: 2025-10-02 12:33:45.076 2 DEBUG nova.network.neutron [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:33:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:45.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:45 np0005466031 nova_compute[235803]: 2025-10-02 12:33:45.572 2 DEBUG nova.policy [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a82e7dc296145a2981f82e64bc5c48e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '812b0ca70f56429383e14031946e37e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:33:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:46.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:46 np0005466031 nova_compute[235803]: 2025-10-02 12:33:46.737 2 DEBUG nova.network.neutron [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Successfully updated port: 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:33:46 np0005466031 nova_compute[235803]: 2025-10-02 12:33:46.754 2 DEBUG nova.network.neutron [req-7561d1c4-383f-461c-a7ef-df0e4615e67a req-a1e20a14-fc0c-436a-99e2-ce4bbe9f8f3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updated VIF entry in instance network info cache for port 0661a8d8-df40-4903-ad8c-8e3f7549831e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:33:46 np0005466031 nova_compute[235803]: 2025-10-02 12:33:46.754 2 DEBUG nova.network.neutron [req-7561d1c4-383f-461c-a7ef-df0e4615e67a req-a1e20a14-fc0c-436a-99e2-ce4bbe9f8f3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updating instance_info_cache with network_info: [{"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:46 np0005466031 nova_compute[235803]: 2025-10-02 12:33:46.774 2 DEBUG oslo_concurrency.lockutils [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:46 np0005466031 nova_compute[235803]: 2025-10-02 12:33:46.782 2 DEBUG oslo_concurrency.lockutils [req-7561d1c4-383f-461c-a7ef-df0e4615e67a req-a1e20a14-fc0c-436a-99e2-ce4bbe9f8f3a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:46 np0005466031 nova_compute[235803]: 2025-10-02 12:33:46.782 2 DEBUG oslo_concurrency.lockutils [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:46 np0005466031 nova_compute[235803]: 2025-10-02 12:33:46.783 2 DEBUG nova.network.neutron [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:33:46 np0005466031 nova_compute[235803]: 2025-10-02 12:33:46.968 2 DEBUG nova.compute.manager [req-b64b3eff-4c9b-476f-b966-0d3b9ef02239 req-be177fd7-ed7b-4678-84be-3b413a811ccf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-changed-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:46 np0005466031 nova_compute[235803]: 2025-10-02 12:33:46.968 2 DEBUG nova.compute.manager [req-b64b3eff-4c9b-476f-b966-0d3b9ef02239 req-be177fd7-ed7b-4678-84be-3b413a811ccf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Refreshing instance network info cache due to event network-changed-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:33:46 np0005466031 nova_compute[235803]: 2025-10-02 12:33:46.968 2 DEBUG oslo_concurrency.lockutils [req-b64b3eff-4c9b-476f-b966-0d3b9ef02239 req-be177fd7-ed7b-4678-84be-3b413a811ccf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:47 np0005466031 nova_compute[235803]: 2025-10-02 12:33:47.089 2 WARNING nova.network.neutron [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] 6a187d8a-77c6-4b27-bb13-654f471c1faf already exists in list: networks containing: ['6a187d8a-77c6-4b27-bb13-654f471c1faf']. ignoring it#033[00m
Oct  2 08:33:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:47.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:47 np0005466031 nova_compute[235803]: 2025-10-02 12:33:47.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:47 np0005466031 nova_compute[235803]: 2025-10-02 12:33:47.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:47 np0005466031 nova_compute[235803]: 2025-10-02 12:33:47.672 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:47 np0005466031 nova_compute[235803]: 2025-10-02 12:33:47.673 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:47 np0005466031 nova_compute[235803]: 2025-10-02 12:33:47.673 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:47 np0005466031 nova_compute[235803]: 2025-10-02 12:33:47.673 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:33:47 np0005466031 nova_compute[235803]: 2025-10-02 12:33:47.673 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:33:48 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2807161124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:33:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:48.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:48 np0005466031 nova_compute[235803]: 2025-10-02 12:33:48.105 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:48 np0005466031 nova_compute[235803]: 2025-10-02 12:33:48.184 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:33:48 np0005466031 nova_compute[235803]: 2025-10-02 12:33:48.184 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:33:48 np0005466031 podman[270877]: 2025-10-02 12:33:48.197335657 +0000 UTC m=+0.053925426 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 08:33:48 np0005466031 podman[270878]: 2025-10-02 12:33:48.234512458 +0000 UTC m=+0.088822351 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:33:48 np0005466031 nova_compute[235803]: 2025-10-02 12:33:48.371 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:33:48 np0005466031 nova_compute[235803]: 2025-10-02 12:33:48.372 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4354MB free_disk=20.806175231933594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:33:48 np0005466031 nova_compute[235803]: 2025-10-02 12:33:48.372 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:48 np0005466031 nova_compute[235803]: 2025-10-02 12:33:48.373 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:48 np0005466031 nova_compute[235803]: 2025-10-02 12:33:48.492 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 6b2e9f39-f886-4e6d-939e-cba3d731a330 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:33:48 np0005466031 nova_compute[235803]: 2025-10-02 12:33:48.492 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:33:48 np0005466031 nova_compute[235803]: 2025-10-02 12:33:48.492 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:33:48 np0005466031 nova_compute[235803]: 2025-10-02 12:33:48.549 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:48 np0005466031 nova_compute[235803]: 2025-10-02 12:33:48.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:33:48 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2501477229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:33:48 np0005466031 nova_compute[235803]: 2025-10-02 12:33:48.998 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.005 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.032 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.072 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.073 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.089 2 DEBUG nova.network.neutron [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updating instance_info_cache with network_info: [{"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.147 2 DEBUG oslo_concurrency.lockutils [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.148 2 DEBUG oslo_concurrency.lockutils [req-b64b3eff-4c9b-476f-b966-0d3b9ef02239 req-be177fd7-ed7b-4678-84be-3b413a811ccf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.148 2 DEBUG nova.network.neutron [req-b64b3eff-4c9b-476f-b966-0d3b9ef02239 req-be177fd7-ed7b-4678-84be-3b413a811ccf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Refreshing network info cache for port 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.152 2 DEBUG nova.virt.libvirt.vif [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-206411601',display_name='tempest-tempest.common.compute-instance-206411601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-206411601',id=80,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-4ya55hgz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=6b2e9f39-f886-4e6d-939e-cba3d731a330,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.152 2 DEBUG nova.network.os_vif_util [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.153 2 DEBUG nova.network.os_vif_util [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.154 2 DEBUG os_vif [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.155 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.155 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.158 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7548de71-bb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.159 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7548de71-bb, col_values=(('external_ids', {'iface-id': '7548de71-bb71-4ee1-98c9-ad2fd1f6f61f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:08:b5', 'vm-uuid': '6b2e9f39-f886-4e6d-939e-cba3d731a330'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:49 np0005466031 NetworkManager[44907]: <info>  [1759408429.1621] manager: (tap7548de71-bb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.167 2 INFO os_vif [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb')#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.168 2 DEBUG nova.virt.libvirt.vif [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-206411601',display_name='tempest-tempest.common.compute-instance-206411601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-206411601',id=80,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-4ya55hgz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=6b2e9f39-f886-4e6d-939e-cba3d731a330,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.168 2 DEBUG nova.network.os_vif_util [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.169 2 DEBUG nova.network.os_vif_util [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.171 2 DEBUG nova.virt.libvirt.guest [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] attach device xml: <interface type="ethernet">
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  <mac address="fa:16:3e:f9:08:b5"/>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  <model type="virtio"/>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  <mtu size="1442"/>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  <target dev="tap7548de71-bb"/>
Oct  2 08:33:49 np0005466031 nova_compute[235803]: </interface>
Oct  2 08:33:49 np0005466031 nova_compute[235803]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:33:49 np0005466031 kernel: tap7548de71-bb: entered promiscuous mode
Oct  2 08:33:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:49Z|00276|binding|INFO|Claiming lport 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f for this chassis.
Oct  2 08:33:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:49Z|00277|binding|INFO|7548de71-bb71-4ee1-98c9-ad2fd1f6f61f: Claiming fa:16:3e:f9:08:b5 10.100.0.3
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:49 np0005466031 NetworkManager[44907]: <info>  [1759408429.1849] manager: (tap7548de71-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Oct  2 08:33:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:49Z|00278|binding|INFO|Setting lport 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f ovn-installed in OVS
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:49 np0005466031 systemd-udevd[270949]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.216 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:08:b5 10.100.0.3'], port_security=['fa:16:3e:f9:08:b5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1637495957', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6b2e9f39-f886-4e6d-939e-cba3d731a330', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1637495957', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8810075-4e55-4c48-9251-ea5cbc49c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:49Z|00279|binding|INFO|Setting lport 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f up in Southbound
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.217 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf bound to our chassis#033[00m
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.219 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf#033[00m
Oct  2 08:33:49 np0005466031 NetworkManager[44907]: <info>  [1759408429.2211] device (tap7548de71-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:33:49 np0005466031 NetworkManager[44907]: <info>  [1759408429.2222] device (tap7548de71-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:33:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:49Z|00280|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.233 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[314575a9-ea30-450e-a5ef-6fe1a2ab149b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:33:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:49.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.261 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a90414-b7c6-4ff5-9462-564823b08557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.264 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5f674e-1a01-4bdb-aed4-93b2afc83504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.291 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[6b567fe5-ab7e-4196-a282-3b243ed35eb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.298 2 DEBUG nova.virt.libvirt.driver [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.298 2 DEBUG nova.virt.libvirt.driver [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.299 2 DEBUG nova.virt.libvirt.driver [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:80:95:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.299 2 DEBUG nova.virt.libvirt.driver [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] No VIF found with MAC fa:16:3e:f9:08:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.307 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e513b543-2976-452e-9657-e2f53a720aed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624635, 'reachable_time': 38257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270957, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.320 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f84516-3e51-4ecd-a999-e2271dc06cec]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624646, 'tstamp': 624646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270958, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624648, 'tstamp': 624648}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270958, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.321 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.324 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.324 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.325 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:49.325 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.374 2 DEBUG nova.virt.libvirt.guest [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  <nova:name>tempest-tempest.common.compute-instance-206411601</nova:name>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  <nova:creationTime>2025-10-02 12:33:49</nova:creationTime>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  <nova:flavor name="m1.nano">
Oct  2 08:33:49 np0005466031 nova_compute[235803]:    <nova:memory>128</nova:memory>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:    <nova:disk>1</nova:disk>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:    <nova:swap>0</nova:swap>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  </nova:flavor>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  <nova:owner>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:    <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:    <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  </nova:owner>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  <nova:ports>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:    <nova:port uuid="0661a8d8-df40-4903-ad8c-8e3f7549831e">
Oct  2 08:33:49 np0005466031 nova_compute[235803]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:    </nova:port>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:    <nova:port uuid="7548de71-bb71-4ee1-98c9-ad2fd1f6f61f">
Oct  2 08:33:49 np0005466031 nova_compute[235803]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:    </nova:port>
Oct  2 08:33:49 np0005466031 nova_compute[235803]:  </nova:ports>
Oct  2 08:33:49 np0005466031 nova_compute[235803]: </nova:instance>
Oct  2 08:33:49 np0005466031 nova_compute[235803]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:33:49 np0005466031 nova_compute[235803]: 2025-10-02 12:33:49.484 2 DEBUG oslo_concurrency.lockutils [None req-db753a4b-a28f-4234-945a-ecacd81509f2 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-6b2e9f39-f886-4e6d-939e-cba3d731a330-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:50 np0005466031 nova_compute[235803]: 2025-10-02 12:33:50.073 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:50 np0005466031 nova_compute[235803]: 2025-10-02 12:33:50.075 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:50 np0005466031 nova_compute[235803]: 2025-10-02 12:33:50.075 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:50.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:50 np0005466031 nova_compute[235803]: 2025-10-02 12:33:50.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:50 np0005466031 nova_compute[235803]: 2025-10-02 12:33:50.721 2 DEBUG nova.network.neutron [req-b64b3eff-4c9b-476f-b966-0d3b9ef02239 req-be177fd7-ed7b-4678-84be-3b413a811ccf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updated VIF entry in instance network info cache for port 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:33:50 np0005466031 nova_compute[235803]: 2025-10-02 12:33:50.722 2 DEBUG nova.network.neutron [req-b64b3eff-4c9b-476f-b966-0d3b9ef02239 req-be177fd7-ed7b-4678-84be-3b413a811ccf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updating instance_info_cache with network_info: [{"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:50 np0005466031 nova_compute[235803]: 2025-10-02 12:33:50.765 2 DEBUG oslo_concurrency.lockutils [req-b64b3eff-4c9b-476f-b966-0d3b9ef02239 req-be177fd7-ed7b-4678-84be-3b413a811ccf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:50Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:08:b5 10.100.0.3
Oct  2 08:33:50 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:50Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:08:b5 10.100.0.3
Oct  2 08:33:50 np0005466031 nova_compute[235803]: 2025-10-02 12:33:50.967 2 DEBUG nova.compute.manager [req-e39b0572-a39f-4653-939c-451b10a0f44d req-73ef0edd-4250-4d7e-a029-d1b342d4afbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:50 np0005466031 nova_compute[235803]: 2025-10-02 12:33:50.967 2 DEBUG oslo_concurrency.lockutils [req-e39b0572-a39f-4653-939c-451b10a0f44d req-73ef0edd-4250-4d7e-a029-d1b342d4afbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:50 np0005466031 nova_compute[235803]: 2025-10-02 12:33:50.968 2 DEBUG oslo_concurrency.lockutils [req-e39b0572-a39f-4653-939c-451b10a0f44d req-73ef0edd-4250-4d7e-a029-d1b342d4afbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:50 np0005466031 nova_compute[235803]: 2025-10-02 12:33:50.968 2 DEBUG oslo_concurrency.lockutils [req-e39b0572-a39f-4653-939c-451b10a0f44d req-73ef0edd-4250-4d7e-a029-d1b342d4afbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:50 np0005466031 nova_compute[235803]: 2025-10-02 12:33:50.968 2 DEBUG nova.compute.manager [req-e39b0572-a39f-4653-939c-451b10a0f44d req-73ef0edd-4250-4d7e-a029-d1b342d4afbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] No waiting events found dispatching network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:50 np0005466031 nova_compute[235803]: 2025-10-02 12:33:50.969 2 WARNING nova.compute.manager [req-e39b0572-a39f-4653-939c-451b10a0f44d req-73ef0edd-4250-4d7e-a029-d1b342d4afbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received unexpected event network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f for instance with vm_state active and task_state None.#033[00m
Oct  2 08:33:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:33:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.756 2 DEBUG oslo_concurrency.lockutils [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "interface-6b2e9f39-f886-4e6d-939e-cba3d731a330-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.756 2 DEBUG oslo_concurrency.lockutils [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-6b2e9f39-f886-4e6d-939e-cba3d731a330-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.791 2 DEBUG nova.objects.instance [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'flavor' on Instance uuid 6b2e9f39-f886-4e6d-939e-cba3d731a330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.814 2 DEBUG nova.virt.libvirt.vif [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-206411601',display_name='tempest-tempest.common.compute-instance-206411601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-206411601',id=80,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-4ya55hgz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=6b2e9f39-f886-4e6d-939e-cba3d731a330,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.815 2 DEBUG nova.network.os_vif_util [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.815 2 DEBUG nova.network.os_vif_util [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.818 2 DEBUG nova.virt.libvirt.guest [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f9:08:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap7548de71-bb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.819 2 DEBUG nova.virt.libvirt.guest [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f9:08:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap7548de71-bb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.821 2 DEBUG nova.virt.libvirt.driver [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Attempting to detach device tap7548de71-bb from instance 6b2e9f39-f886-4e6d-939e-cba3d731a330 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.821 2 DEBUG nova.virt.libvirt.guest [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <mac address="fa:16:3e:f9:08:b5"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <model type="virtio"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <mtu size="1442"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <target dev="tap7548de71-bb"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]: </interface>
Oct  2 08:33:51 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.840 2 DEBUG nova.virt.libvirt.guest [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f9:08:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap7548de71-bb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.843 2 DEBUG nova.virt.libvirt.guest [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f9:08:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap7548de71-bb"/></interface>not found in domain: <domain type='kvm' id='32'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <name>instance-00000050</name>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <uuid>6b2e9f39-f886-4e6d-939e-cba3d731a330</uuid>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:name>tempest-tempest.common.compute-instance-206411601</nova:name>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:creationTime>2025-10-02 12:33:49</nova:creationTime>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:flavor name="m1.nano">
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:memory>128</nova:memory>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:disk>1</nova:disk>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:swap>0</nova:swap>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </nova:flavor>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:owner>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </nova:owner>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:ports>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:port uuid="0661a8d8-df40-4903-ad8c-8e3f7549831e">
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </nova:port>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:port uuid="7548de71-bb71-4ee1-98c9-ad2fd1f6f61f">
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </nova:port>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </nova:ports>
Oct  2 08:33:51 np0005466031 nova_compute[235803]: </nova:instance>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <memory unit='KiB'>131072</memory>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <resource>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <partition>/machine</partition>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </resource>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <sysinfo type='smbios'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <entry name='serial'>6b2e9f39-f886-4e6d-939e-cba3d731a330</entry>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <entry name='uuid'>6b2e9f39-f886-4e6d-939e-cba3d731a330</entry>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <boot dev='hd'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <smbios mode='sysinfo'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <vmcoreinfo state='on'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <feature policy='require' name='x2apic'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <feature policy='require' name='vme'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <clock offset='utc'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <timer name='hpet' present='no'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <on_reboot>restart</on_reboot>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <on_crash>destroy</on_crash>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <disk type='network' device='disk'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <auth username='openstack'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <source protocol='rbd' name='vms/6b2e9f39-f886-4e6d-939e-cba3d731a330_disk' index='2'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target dev='vda' bus='virtio'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='virtio-disk0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <disk type='network' device='cdrom'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <auth username='openstack'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <source protocol='rbd' name='vms/6b2e9f39-f886-4e6d-939e-cba3d731a330_disk.config' index='1'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target dev='sda' bus='sata'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <readonly/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='sata0-0-0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pcie.0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='1' port='0x10'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='2' port='0x11'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.2'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='3' port='0x12'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.3'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='4' port='0x13'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.4'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='5' port='0x14'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.5'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='6' port='0x15'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.6'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='7' port='0x16'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.7'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='8' port='0x17'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.8'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='9' port='0x18'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.9'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='10' port='0x19'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.10'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='11' port='0x1a'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.11'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='12' port='0x1b'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.12'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='13' port='0x1c'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.13'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='14' port='0x1d'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.14'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='15' port='0x1e'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.15'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='16' port='0x1f'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.16'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='17' port='0x20'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.17'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='18' port='0x21'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.18'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='19' port='0x22'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.19'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='20' port='0x23'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.20'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='21' port='0x24'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.21'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='22' port='0x25'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.22'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='23' port='0x26'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.23'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='24' port='0x27'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.24'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='25' port='0x28'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.25'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-pci-bridge'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.26'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='usb'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='sata' index='0'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='ide'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <interface type='ethernet'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <mac address='fa:16:3e:80:95:70'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target dev='tap0661a8d8-df'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model type='virtio'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <mtu size='1442'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='net0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <interface type='ethernet'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <mac address='fa:16:3e:f9:08:b5'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target dev='tap7548de71-bb'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model type='virtio'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <mtu size='1442'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='net1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <serial type='pty'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <source path='/dev/pts/1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <log file='/var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330/console.log' append='off'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target type='isa-serial' port='0'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <model name='isa-serial'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      </target>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='serial0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <source path='/dev/pts/1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <log file='/var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330/console.log' append='off'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target type='serial' port='0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='serial0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </console>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <input type='tablet' bus='usb'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='input0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </input>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <input type='mouse' bus='ps2'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='input1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </input>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <input type='keyboard' bus='ps2'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='input2'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </input>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <listen type='address' address='::0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </graphics>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <audio id='1' type='none'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='video0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <watchdog model='itco' action='reset'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='watchdog0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </watchdog>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <memballoon model='virtio'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <stats period='10'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='balloon0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <rng model='virtio'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='rng0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <label>system_u:system_r:svirt_t:s0:c733,c780</label>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c733,c780</imagelabel>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </seclabel>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <label>+107:+107</label>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </seclabel>
Oct  2 08:33:51 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:33:51 np0005466031 nova_compute[235803]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.843 2 INFO nova.virt.libvirt.driver [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully detached device tap7548de71-bb from instance 6b2e9f39-f886-4e6d-939e-cba3d731a330 from the persistent domain config.
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.844 2 DEBUG nova.virt.libvirt.driver [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] (1/8): Attempting to detach device tap7548de71-bb with device alias net1 from instance 6b2e9f39-f886-4e6d-939e-cba3d731a330 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.844 2 DEBUG nova.virt.libvirt.guest [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <mac address="fa:16:3e:f9:08:b5"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <model type="virtio"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <mtu size="1442"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <target dev="tap7548de71-bb"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]: </interface>
Oct  2 08:33:51 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.894 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.894 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.894 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.895 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6b2e9f39-f886-4e6d-939e-cba3d731a330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:33:51 np0005466031 kernel: tap7548de71-bb (unregistering): left promiscuous mode
Oct  2 08:33:51 np0005466031 NetworkManager[44907]: <info>  [1759408431.9482] device (tap7548de71-bb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:33:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:51Z|00281|binding|INFO|Releasing lport 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f from this chassis (sb_readonly=0)
Oct  2 08:33:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:51Z|00282|binding|INFO|Setting lport 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f down in Southbound
Oct  2 08:33:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:33:51Z|00283|binding|INFO|Removing iface tap7548de71-bb ovn-installed in OVS
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:51.966 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:08:b5 10.100.0.3'], port_security=['fa:16:3e:f9:08:b5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1637495957', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6b2e9f39-f886-4e6d-939e-cba3d731a330', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1637495957', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8810075-4e55-4c48-9251-ea5cbc49c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:33:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:51.967 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf unbound from our chassis
Oct  2 08:33:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:51.968 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a187d8a-77c6-4b27-bb13-654f471c1faf
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.977 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Received event <DeviceRemovedEvent: 1759408431.97671, 6b2e9f39-f886-4e6d-939e-cba3d731a330 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.978 2 DEBUG nova.virt.libvirt.driver [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Start waiting for the detach event from libvirt for device tap7548de71-bb with device alias net1 for instance 6b2e9f39-f886-4e6d-939e-cba3d731a330 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.978 2 DEBUG nova.virt.libvirt.guest [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f9:08:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap7548de71-bb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.982 2 DEBUG nova.virt.libvirt.guest [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f9:08:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap7548de71-bb"/></interface> not found in domain: <domain type='kvm' id='32'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <name>instance-00000050</name>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <uuid>6b2e9f39-f886-4e6d-939e-cba3d731a330</uuid>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:name>tempest-tempest.common.compute-instance-206411601</nova:name>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:creationTime>2025-10-02 12:33:49</nova:creationTime>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:flavor name="m1.nano">
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:memory>128</nova:memory>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:disk>1</nova:disk>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:swap>0</nova:swap>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </nova:flavor>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:owner>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </nova:owner>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <nova:ports>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:port uuid="0661a8d8-df40-4903-ad8c-8e3f7549831e">
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </nova:port>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <nova:port uuid="7548de71-bb71-4ee1-98c9-ad2fd1f6f61f">
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </nova:port>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </nova:ports>
Oct  2 08:33:51 np0005466031 nova_compute[235803]: </nova:instance>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <memory unit='KiB'>131072</memory>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <resource>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <partition>/machine</partition>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </resource>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <sysinfo type='smbios'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <entry name='serial'>6b2e9f39-f886-4e6d-939e-cba3d731a330</entry>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <entry name='uuid'>6b2e9f39-f886-4e6d-939e-cba3d731a330</entry>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <boot dev='hd'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <smbios mode='sysinfo'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <vmcoreinfo state='on'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <feature policy='require' name='x2apic'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <feature policy='require' name='vme'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <clock offset='utc'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <timer name='hpet' present='no'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <on_reboot>restart</on_reboot>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <on_crash>destroy</on_crash>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <disk type='network' device='disk'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <auth username='openstack'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <source protocol='rbd' name='vms/6b2e9f39-f886-4e6d-939e-cba3d731a330_disk' index='2'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target dev='vda' bus='virtio'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='virtio-disk0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <disk type='network' device='cdrom'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <auth username='openstack'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <source protocol='rbd' name='vms/6b2e9f39-f886-4e6d-939e-cba3d731a330_disk.config' index='1'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target dev='sda' bus='sata'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <readonly/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='sata0-0-0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pcie.0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='1' port='0x10'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='2' port='0x11'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.2'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='3' port='0x12'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.3'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='4' port='0x13'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.4'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='5' port='0x14'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.5'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='6' port='0x15'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.6'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='7' port='0x16'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.7'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='8' port='0x17'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.8'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='9' port='0x18'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.9'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='10' port='0x19'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.10'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='11' port='0x1a'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.11'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='12' port='0x1b'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.12'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='13' port='0x1c'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.13'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='14' port='0x1d'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.14'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='15' port='0x1e'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.15'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='16' port='0x1f'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.16'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='17' port='0x20'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.17'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='18' port='0x21'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.18'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='19' port='0x22'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.19'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='20' port='0x23'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.20'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='21' port='0x24'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.21'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='22' port='0x25'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.22'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='23' port='0x26'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.23'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='24' port='0x27'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.24'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-root-port'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target chassis='25' port='0x28'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.25'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model name='pcie-pci-bridge'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='pci.26'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='usb'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <controller type='sata' index='0'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='ide'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </controller>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <interface type='ethernet'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <mac address='fa:16:3e:80:95:70'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target dev='tap0661a8d8-df'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model type='virtio'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <mtu size='1442'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='net0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <serial type='pty'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <source path='/dev/pts/1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <log file='/var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330/console.log' append='off'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target type='isa-serial' port='0'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:        <model name='isa-serial'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      </target>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='serial0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <source path='/dev/pts/1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <log file='/var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330/console.log' append='off'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <target type='serial' port='0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='serial0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </console>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <input type='tablet' bus='usb'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='input0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </input>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <input type='mouse' bus='ps2'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='input1'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </input>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <input type='keyboard' bus='ps2'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='input2'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </input>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <listen type='address' address='::0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </graphics>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <audio id='1' type='none'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='video0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <watchdog model='itco' action='reset'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='watchdog0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </watchdog>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <memballoon model='virtio'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <stats period='10'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='balloon0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <rng model='virtio'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <alias name='rng0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <label>system_u:system_r:svirt_t:s0:c733,c780</label>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c733,c780</imagelabel>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </seclabel>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <label>+107:+107</label>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:33:51 np0005466031 nova_compute[235803]:  </seclabel>
Oct  2 08:33:51 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:33:51 np0005466031 nova_compute[235803]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.983 2 INFO nova.virt.libvirt.driver [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully detached device tap7548de71-bb from instance 6b2e9f39-f886-4e6d-939e-cba3d731a330 from the live domain config.#033[00m
Oct  2 08:33:51 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.984 2 DEBUG nova.virt.libvirt.vif [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-206411601',display_name='tempest-tempest.common.compute-instance-206411601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-206411601',id=80,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-4ya55hgz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=6b2e9f39-f886-4e6d-939e-cba3d731a330,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:33:52 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.984 2 DEBUG nova.network.os_vif_util [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:51.984 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[da5f75f7-0249-412e-9cc4-44fb0e7b3a82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.985 2 DEBUG nova.network.os_vif_util [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:52 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.985 2 DEBUG os_vif [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:33:52 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:52 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.987 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7548de71-bb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:52 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:52 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:52 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.993 2 INFO os_vif [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:08:b5,bridge_name='br-int',has_traffic_filtering=True,id=7548de71-bb71-4ee1-98c9-ad2fd1f6f61f,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7548de71-bb')#033[00m
Oct  2 08:33:52 np0005466031 nova_compute[235803]: 2025-10-02 12:33:51.993 2 DEBUG nova.virt.libvirt.guest [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:33:52 np0005466031 nova_compute[235803]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:  <nova:name>tempest-tempest.common.compute-instance-206411601</nova:name>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:  <nova:creationTime>2025-10-02 12:33:51</nova:creationTime>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:  <nova:flavor name="m1.nano">
Oct  2 08:33:52 np0005466031 nova_compute[235803]:    <nova:memory>128</nova:memory>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:    <nova:disk>1</nova:disk>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:    <nova:swap>0</nova:swap>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:  </nova:flavor>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:  <nova:owner>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:    <nova:user uuid="7a82e7dc296145a2981f82e64bc5c48e">tempest-AttachInterfacesTestJSON-2085837243-project-member</nova:user>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:    <nova:project uuid="812b0ca70f56429383e14031946e37e5">tempest-AttachInterfacesTestJSON-2085837243</nova:project>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:  </nova:owner>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:  <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:  <nova:ports>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:    <nova:port uuid="0661a8d8-df40-4903-ad8c-8e3f7549831e">
Oct  2 08:33:52 np0005466031 nova_compute[235803]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:    </nova:port>
Oct  2 08:33:52 np0005466031 nova_compute[235803]:  </nova:ports>
Oct  2 08:33:52 np0005466031 nova_compute[235803]: </nova:instance>
Oct  2 08:33:52 np0005466031 nova_compute[235803]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:33:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:52.015 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c94d8ea3-5b93-4dd6-9606-04ab2adb840b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:52.019 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5af4f4-440c-40d0-a1ed-90c706c3f532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:52.046 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a431c1-a3c1-44ab-85b5-404d360c5319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:52.063 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d13435d2-4b96-47bd-a7b4-bbfe73fa88ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a187d8a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:e8:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624635, 'reachable_time': 38257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270970, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:52.079 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[41b39eb2-ecf6-41cf-9ae7-fd243bbe9287]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624646, 'tstamp': 624646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270971, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a187d8a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624648, 'tstamp': 624648}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270971, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:52.080 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:52 np0005466031 nova_compute[235803]: 2025-10-02 12:33:52.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:52 np0005466031 nova_compute[235803]: 2025-10-02 12:33:52.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:52.083 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a187d8a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:52.083 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:52.084 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a187d8a-70, col_values=(('external_ids', {'iface-id': '2f45c0ec-cf0f-42c4-ae95-b4febe84bc01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:33:52.084 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:52.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.164 2 DEBUG nova.compute.manager [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.164 2 DEBUG oslo_concurrency.lockutils [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.164 2 DEBUG oslo_concurrency.lockutils [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.164 2 DEBUG oslo_concurrency.lockutils [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.165 2 DEBUG nova.compute.manager [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] No waiting events found dispatching network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.165 2 WARNING nova.compute.manager [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received unexpected event network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f for instance with vm_state active and task_state None.#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.165 2 DEBUG nova.compute.manager [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-vif-unplugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.165 2 DEBUG oslo_concurrency.lockutils [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.165 2 DEBUG oslo_concurrency.lockutils [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.165 2 DEBUG oslo_concurrency.lockutils [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.166 2 DEBUG nova.compute.manager [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] No waiting events found dispatching network-vif-unplugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.166 2 WARNING nova.compute.manager [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received unexpected event network-vif-unplugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f for instance with vm_state active and task_state None.#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.166 2 DEBUG nova.compute.manager [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.166 2 DEBUG oslo_concurrency.lockutils [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.166 2 DEBUG oslo_concurrency.lockutils [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.167 2 DEBUG oslo_concurrency.lockutils [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.167 2 DEBUG nova.compute.manager [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] No waiting events found dispatching network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.167 2 WARNING nova.compute.manager [req-8b329ff5-b2d7-4e27-b8a5-d90902913561 req-b9c54a19-4227-48d6-b193-992863b15702 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received unexpected event network-vif-plugged-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f for instance with vm_state active and task_state None.#033[00m
Oct  2 08:33:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:53.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.458 2 DEBUG oslo_concurrency.lockutils [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:53 np0005466031 nova_compute[235803]: 2025-10-02 12:33:53.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:54.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:55 np0005466031 nova_compute[235803]: 2025-10-02 12:33:55.229 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updating instance_info_cache with network_info: [{"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "address": "fa:16:3e:f9:08:b5", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7548de71-bb", "ovs_interfaceid": "7548de71-bb71-4ee1-98c9-ad2fd1f6f61f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:55.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:55 np0005466031 nova_compute[235803]: 2025-10-02 12:33:55.273 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:55 np0005466031 nova_compute[235803]: 2025-10-02 12:33:55.274 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:33:55 np0005466031 nova_compute[235803]: 2025-10-02 12:33:55.274 2 DEBUG oslo_concurrency.lockutils [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquired lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:55 np0005466031 nova_compute[235803]: 2025-10-02 12:33:55.274 2 DEBUG nova.network.neutron [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:33:55 np0005466031 nova_compute[235803]: 2025-10-02 12:33:55.275 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:55 np0005466031 nova_compute[235803]: 2025-10-02 12:33:55.276 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:55 np0005466031 nova_compute[235803]: 2025-10-02 12:33:55.276 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:33:55 np0005466031 nova_compute[235803]: 2025-10-02 12:33:55.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:33:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:56.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:33:56 np0005466031 podman[270999]: 2025-10-02 12:33:56.869319692 +0000 UTC m=+0.058990112 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:33:56 np0005466031 podman[270998]: 2025-10-02 12:33:56.89388913 +0000 UTC m=+0.084527728 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:33:56 np0005466031 nova_compute[235803]: 2025-10-02 12:33:56.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:57.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:57 np0005466031 nova_compute[235803]: 2025-10-02 12:33:57.537 2 DEBUG nova.compute.manager [req-e28fcf35-a02b-46cd-841a-251364189d89 req-694382bc-bc0a-44d4-bdde-6f0b4bc88e16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-changed-0661a8d8-df40-4903-ad8c-8e3f7549831e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:57 np0005466031 nova_compute[235803]: 2025-10-02 12:33:57.538 2 DEBUG nova.compute.manager [req-e28fcf35-a02b-46cd-841a-251364189d89 req-694382bc-bc0a-44d4-bdde-6f0b4bc88e16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Refreshing instance network info cache due to event network-changed-0661a8d8-df40-4903-ad8c-8e3f7549831e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:33:57 np0005466031 nova_compute[235803]: 2025-10-02 12:33:57.538 2 DEBUG oslo_concurrency.lockutils [req-e28fcf35-a02b-46cd-841a-251364189d89 req-694382bc-bc0a-44d4-bdde-6f0b4bc88e16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:57 np0005466031 nova_compute[235803]: 2025-10-02 12:33:57.634 2 INFO nova.network.neutron [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Port 7548de71-bb71-4ee1-98c9-ad2fd1f6f61f from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  2 08:33:57 np0005466031 nova_compute[235803]: 2025-10-02 12:33:57.634 2 DEBUG nova.network.neutron [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updating instance_info_cache with network_info: [{"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:57 np0005466031 nova_compute[235803]: 2025-10-02 12:33:57.654 2 DEBUG oslo_concurrency.lockutils [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Releasing lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:57 np0005466031 nova_compute[235803]: 2025-10-02 12:33:57.657 2 DEBUG oslo_concurrency.lockutils [req-e28fcf35-a02b-46cd-841a-251364189d89 req-694382bc-bc0a-44d4-bdde-6f0b4bc88e16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:57 np0005466031 nova_compute[235803]: 2025-10-02 12:33:57.657 2 DEBUG nova.network.neutron [req-e28fcf35-a02b-46cd-841a-251364189d89 req-694382bc-bc0a-44d4-bdde-6f0b4bc88e16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Refreshing network info cache for port 0661a8d8-df40-4903-ad8c-8e3f7549831e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:33:57 np0005466031 nova_compute[235803]: 2025-10-02 12:33:57.680 2 DEBUG oslo_concurrency.lockutils [None req-0ea5a364-814c-42e7-85c4-ae904e38a1d4 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "interface-6b2e9f39-f886-4e6d-939e-cba3d731a330-7548de71-bb71-4ee1-98c9-ad2fd1f6f61f" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:58.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:58 np0005466031 nova_compute[235803]: 2025-10-02 12:33:58.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:59 np0005466031 nova_compute[235803]: 2025-10-02 12:33:59.183 2 DEBUG nova.network.neutron [req-e28fcf35-a02b-46cd-841a-251364189d89 req-694382bc-bc0a-44d4-bdde-6f0b4bc88e16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updated VIF entry in instance network info cache for port 0661a8d8-df40-4903-ad8c-8e3f7549831e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:33:59 np0005466031 nova_compute[235803]: 2025-10-02 12:33:59.184 2 DEBUG nova.network.neutron [req-e28fcf35-a02b-46cd-841a-251364189d89 req-694382bc-bc0a-44d4-bdde-6f0b4bc88e16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updating instance_info_cache with network_info: [{"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:59 np0005466031 nova_compute[235803]: 2025-10-02 12:33:59.203 2 DEBUG oslo_concurrency.lockutils [req-e28fcf35-a02b-46cd-841a-251364189d89 req-694382bc-bc0a-44d4-bdde-6f0b4bc88e16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6b2e9f39-f886-4e6d-939e-cba3d731a330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:33:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000057s ======
Oct  2 08:33:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:59.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Oct  2 08:33:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000057s ======
Oct  2 08:34:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:00.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Oct  2 08:34:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:01.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:01 np0005466031 nova_compute[235803]: 2025-10-02 12:34:01.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:01 np0005466031 nova_compute[235803]: 2025-10-02 12:34:01.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:34:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:02.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:34:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:34:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:03.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:34:03 np0005466031 nova_compute[235803]: 2025-10-02 12:34:03.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:04.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:05.064 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:05.065 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:34:05 np0005466031 nova_compute[235803]: 2025-10-02 12:34:05.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:05 np0005466031 podman[271230]: 2025-10-02 12:34:05.117158991 +0000 UTC m=+0.070940305 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 08:34:05 np0005466031 podman[271230]: 2025-10-02 12:34:05.227124221 +0000 UTC m=+0.180905335 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 08:34:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:05.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:05 np0005466031 podman[271369]: 2025-10-02 12:34:05.731733664 +0000 UTC m=+0.057341054 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 08:34:05 np0005466031 podman[271369]: 2025-10-02 12:34:05.747811877 +0000 UTC m=+0.073419237 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 08:34:05 np0005466031 podman[271437]: 2025-10-02 12:34:05.947583545 +0000 UTC m=+0.045488492 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.28.2, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, com.redhat.component=keepalived-container, vcs-type=git, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct  2 08:34:05 np0005466031 podman[271437]: 2025-10-02 12:34:05.99008348 +0000 UTC m=+0.087988427 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, version=2.2.4, distribution-scope=public, io.openshift.expose-services=)
Oct  2 08:34:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:06.067 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:06.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:06 np0005466031 nova_compute[235803]: 2025-10-02 12:34:06.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:07Z|00284|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct  2 08:34:07 np0005466031 nova_compute[235803]: 2025-10-02 12:34:07.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:07.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:34:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:34:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:34:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:08.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:34:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:34:08 np0005466031 nova_compute[235803]: 2025-10-02 12:34:08.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:34:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:09.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:34:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:34:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:10.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:34:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:11.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:11 np0005466031 nova_compute[235803]: 2025-10-02 12:34:11.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:12.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:34:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:13.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:34:13 np0005466031 nova_compute[235803]: 2025-10-02 12:34:13.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:14.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:15 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:15Z|00285|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct  2 08:34:15 np0005466031 nova_compute[235803]: 2025-10-02 12:34:15.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:34:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:15.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:34:15 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Oct  2 08:34:15 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:15.805516) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:34:15 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Oct  2 08:34:15 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455805586, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2411, "num_deletes": 253, "total_data_size": 5669483, "memory_usage": 5747680, "flush_reason": "Manual Compaction"}
Oct  2 08:34:15 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Oct  2 08:34:15 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455917969, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 3718074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39247, "largest_seqno": 41653, "table_properties": {"data_size": 3708324, "index_size": 6116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20779, "raw_average_key_size": 20, "raw_value_size": 3688661, "raw_average_value_size": 3670, "num_data_blocks": 266, "num_entries": 1005, "num_filter_entries": 1005, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408249, "oldest_key_time": 1759408249, "file_creation_time": 1759408455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:34:15 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 112500 microseconds, and 8525 cpu microseconds.
Oct  2 08:34:15 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:34:15 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:15Z|00286|binding|INFO|Releasing lport 2f45c0ec-cf0f-42c4-ae95-b4febe84bc01 from this chassis (sb_readonly=0)
Oct  2 08:34:15 np0005466031 nova_compute[235803]: 2025-10-02 12:34:15.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:15.918014) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 3718074 bytes OK
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:15.918032) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:16.020320) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:16.020360) EVENT_LOG_v1 {"time_micros": 1759408456020352, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:16.020379) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 5658821, prev total WAL file size 5658821, number of live WAL files 2.
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:16.021638) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(3630KB)], [75(9948KB)]
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408456021676, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 13905788, "oldest_snapshot_seqno": -1}
Oct  2 08:34:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:34:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:16.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6663 keys, 11949582 bytes, temperature: kUnknown
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408456136312, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 11949582, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11902390, "index_size": 29416, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 170758, "raw_average_key_size": 25, "raw_value_size": 11780449, "raw_average_value_size": 1768, "num_data_blocks": 1177, "num_entries": 6663, "num_filter_entries": 6663, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759408456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:16.136509) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11949582 bytes
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:16.139232) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.3 rd, 104.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.7 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 7187, records dropped: 524 output_compression: NoCompression
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:16.139248) EVENT_LOG_v1 {"time_micros": 1759408456139240, "job": 46, "event": "compaction_finished", "compaction_time_micros": 114686, "compaction_time_cpu_micros": 28651, "output_level": 6, "num_output_files": 1, "total_output_size": 11949582, "num_input_records": 7187, "num_output_records": 6663, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408456140030, "job": 46, "event": "table_file_deletion", "file_number": 77}
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408456141733, "job": 46, "event": "table_file_deletion", "file_number": 75}
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:16.021522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:16.141832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:16.141839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:16.141841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:16.141842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:34:16.141844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:34:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:34:16 np0005466031 nova_compute[235803]: 2025-10-02 12:34:16.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:17.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:18.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:18 np0005466031 podman[271722]: 2025-10-02 12:34:18.64286627 +0000 UTC m=+0.065883420 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:34:18 np0005466031 podman[271723]: 2025-10-02 12:34:18.679529936 +0000 UTC m=+0.098785408 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:34:18 np0005466031 nova_compute[235803]: 2025-10-02 12:34:18.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:34:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:19.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:34:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:20.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:20 np0005466031 nova_compute[235803]: 2025-10-02 12:34:20.999 2 DEBUG oslo_concurrency.lockutils [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "6b2e9f39-f886-4e6d-939e-cba3d731a330" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:20.999 2 DEBUG oslo_concurrency.lockutils [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.000 2 DEBUG oslo_concurrency.lockutils [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.000 2 DEBUG oslo_concurrency.lockutils [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.000 2 DEBUG oslo_concurrency.lockutils [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.002 2 INFO nova.compute.manager [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Terminating instance#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.003 2 DEBUG nova.compute.manager [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:34:21 np0005466031 kernel: tap0661a8d8-df (unregistering): left promiscuous mode
Oct  2 08:34:21 np0005466031 NetworkManager[44907]: <info>  [1759408461.1563] device (tap0661a8d8-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:34:21 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:21Z|00287|binding|INFO|Releasing lport 0661a8d8-df40-4903-ad8c-8e3f7549831e from this chassis (sb_readonly=0)
Oct  2 08:34:21 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:21Z|00288|binding|INFO|Setting lport 0661a8d8-df40-4903-ad8c-8e3f7549831e down in Southbound
Oct  2 08:34:21 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:21Z|00289|binding|INFO|Removing iface tap0661a8d8-df ovn-installed in OVS
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.180 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:95:70 10.100.0.12'], port_security=['fa:16:3e:80:95:70 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6b2e9f39-f886-4e6d-939e-cba3d731a330', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '812b0ca70f56429383e14031946e37e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3852fde4-27af-4b26-ab2c-21696f5fd593', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01e8e393-26fb-455a-9f27-eedcfd8792b9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=0661a8d8-df40-4903-ad8c-8e3f7549831e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.182 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 0661a8d8-df40-4903-ad8c-8e3f7549831e in datapath 6a187d8a-77c6-4b27-bb13-654f471c1faf unbound from our chassis#033[00m
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.185 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a187d8a-77c6-4b27-bb13-654f471c1faf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.186 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf8dd53-0b47-4b7b-82d5-dcf7a1881c7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.187 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf namespace which is not needed anymore#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005466031 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000050.scope: Deactivated successfully.
Oct  2 08:34:21 np0005466031 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000050.scope: Consumed 16.812s CPU time.
Oct  2 08:34:21 np0005466031 systemd-machined[192227]: Machine qemu-32-instance-00000050 terminated.
Oct  2 08:34:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:21.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:21 np0005466031 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[270511]: [NOTICE]   (270516) : haproxy version is 2.8.14-c23fe91
Oct  2 08:34:21 np0005466031 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[270511]: [NOTICE]   (270516) : path to executable is /usr/sbin/haproxy
Oct  2 08:34:21 np0005466031 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[270511]: [WARNING]  (270516) : Exiting Master process...
Oct  2 08:34:21 np0005466031 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[270511]: [WARNING]  (270516) : Exiting Master process...
Oct  2 08:34:21 np0005466031 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[270511]: [ALERT]    (270516) : Current worker (270518) exited with code 143 (Terminated)
Oct  2 08:34:21 np0005466031 neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf[270511]: [WARNING]  (270516) : All workers exited. Exiting... (0)
Oct  2 08:34:21 np0005466031 systemd[1]: libpod-2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73.scope: Deactivated successfully.
Oct  2 08:34:21 np0005466031 podman[271789]: 2025-10-02 12:34:21.344008811 +0000 UTC m=+0.046193382 container died 2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:34:21 np0005466031 systemd[1]: var-lib-containers-storage-overlay-a2ab250798383c8ff34f67e14a97298a2c61959a7882292a924150bd04dc2dfd-merged.mount: Deactivated successfully.
Oct  2 08:34:21 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73-userdata-shm.mount: Deactivated successfully.
Oct  2 08:34:21 np0005466031 podman[271789]: 2025-10-02 12:34:21.387890646 +0000 UTC m=+0.090075217 container cleanup 2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:34:21 np0005466031 systemd[1]: libpod-conmon-2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73.scope: Deactivated successfully.
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.438 2 INFO nova.virt.libvirt.driver [-] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Instance destroyed successfully.#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.440 2 DEBUG nova.objects.instance [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lazy-loading 'resources' on Instance uuid 6b2e9f39-f886-4e6d-939e-cba3d731a330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.454 2 DEBUG nova.virt.libvirt.vif [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-206411601',display_name='tempest-tempest.common.compute-instance-206411601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-206411601',id=80,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOJUolpUYrL9WAekDjhlifDnr1CKsPa/4Jl/CoAVLJO279RAWMGD6raq9N5nnuOTcFoZzRu0wSSYM2wFx7yXE/Waqix7cuyjGgRUriwur1iAZw19c0faTsezW/Uh5IZFUQ==',key_name='tempest-keypair-425621826',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='812b0ca70f56429383e14031946e37e5',ramdisk_id='',reservation_id='r-4ya55hgz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2085837243',owner_user_name='tempest-AttachInterfacesTestJSON-2085837243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7a82e7dc296145a2981f82e64bc5c48e',uuid=6b2e9f39-f886-4e6d-939e-cba3d731a330,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.456 2 DEBUG nova.network.os_vif_util [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converting VIF {"id": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "address": "fa:16:3e:80:95:70", "network": {"id": "6a187d8a-77c6-4b27-bb13-654f471c1faf", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1306871209-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "812b0ca70f56429383e14031946e37e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0661a8d8-df", "ovs_interfaceid": "0661a8d8-df40-4903-ad8c-8e3f7549831e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.458 2 DEBUG nova.network.os_vif_util [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:95:70,bridge_name='br-int',has_traffic_filtering=True,id=0661a8d8-df40-4903-ad8c-8e3f7549831e,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0661a8d8-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.458 2 DEBUG os_vif [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:95:70,bridge_name='br-int',has_traffic_filtering=True,id=0661a8d8-df40-4903-ad8c-8e3f7549831e,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0661a8d8-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.460 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0661a8d8-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:21 np0005466031 podman[271819]: 2025-10-02 12:34:21.460933711 +0000 UTC m=+0.050097895 container remove 2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.464 2 INFO os_vif [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:95:70,bridge_name='br-int',has_traffic_filtering=True,id=0661a8d8-df40-4903-ad8c-8e3f7549831e,network=Network(6a187d8a-77c6-4b27-bb13-654f471c1faf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0661a8d8-df')#033[00m
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.468 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b93adcdc-ca38-4d07-9c87-434e57cb0f95]: (4, ('Thu Oct  2 12:34:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf (2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73)\n2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73\nThu Oct  2 12:34:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf (2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73)\n2c14540f79f7490bb6eea406f6aac03401a5a1d95e653678471fde6dedb39a73\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.470 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ce108694-2fc1-4485-a4a0-5146fb3076fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.471 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a187d8a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:21 np0005466031 kernel: tap6a187d8a-70: left promiscuous mode
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.489 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[44d3d702-0b4d-47bc-86c4-c65c60940f82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.515 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7bee39-9504-4564-ad56-85bc0b2b066c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.517 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf27fd4-9a85-42d9-a1be-3b986f4237f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.523 2 DEBUG nova.compute.manager [req-127e887e-2a1c-456c-af8e-12d7289bff36 req-b8bf04ff-1ead-40c2-b28a-6a06b1b39cc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-vif-unplugged-0661a8d8-df40-4903-ad8c-8e3f7549831e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.524 2 DEBUG oslo_concurrency.lockutils [req-127e887e-2a1c-456c-af8e-12d7289bff36 req-b8bf04ff-1ead-40c2-b28a-6a06b1b39cc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.524 2 DEBUG oslo_concurrency.lockutils [req-127e887e-2a1c-456c-af8e-12d7289bff36 req-b8bf04ff-1ead-40c2-b28a-6a06b1b39cc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.524 2 DEBUG oslo_concurrency.lockutils [req-127e887e-2a1c-456c-af8e-12d7289bff36 req-b8bf04ff-1ead-40c2-b28a-6a06b1b39cc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.524 2 DEBUG nova.compute.manager [req-127e887e-2a1c-456c-af8e-12d7289bff36 req-b8bf04ff-1ead-40c2-b28a-6a06b1b39cc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] No waiting events found dispatching network-vif-unplugged-0661a8d8-df40-4903-ad8c-8e3f7549831e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:21 np0005466031 nova_compute[235803]: 2025-10-02 12:34:21.524 2 DEBUG nova.compute.manager [req-127e887e-2a1c-456c-af8e-12d7289bff36 req-b8bf04ff-1ead-40c2-b28a-6a06b1b39cc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-vif-unplugged-0661a8d8-df40-4903-ad8c-8e3f7549831e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.533 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[eb680fe6-cd07-48ce-900f-b156b5f03a97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624628, 'reachable_time': 42032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271863, 'error': None, 'target': 'ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005466031 systemd[1]: run-netns-ovnmeta\x2d6a187d8a\x2d77c6\x2d4b27\x2dbb13\x2d654f471c1faf.mount: Deactivated successfully.
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.537 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a187d8a-77c6-4b27-bb13-654f471c1faf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:34:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:21.537 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ba2c15-8717-453f-b48b-1ce97ee7cf53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:22 np0005466031 nova_compute[235803]: 2025-10-02 12:34:22.040 2 INFO nova.virt.libvirt.driver [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Deleting instance files /var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330_del#033[00m
Oct  2 08:34:22 np0005466031 nova_compute[235803]: 2025-10-02 12:34:22.042 2 INFO nova.virt.libvirt.driver [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Deletion of /var/lib/nova/instances/6b2e9f39-f886-4e6d-939e-cba3d731a330_del complete#033[00m
Oct  2 08:34:22 np0005466031 nova_compute[235803]: 2025-10-02 12:34:22.105 2 INFO nova.compute.manager [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Took 1.10 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:34:22 np0005466031 nova_compute[235803]: 2025-10-02 12:34:22.107 2 DEBUG oslo.service.loopingcall [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:34:22 np0005466031 nova_compute[235803]: 2025-10-02 12:34:22.107 2 DEBUG nova.compute.manager [-] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:34:22 np0005466031 nova_compute[235803]: 2025-10-02 12:34:22.108 2 DEBUG nova.network.neutron [-] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:34:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:22.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:22 np0005466031 nova_compute[235803]: 2025-10-02 12:34:22.830 2 DEBUG nova.network.neutron [-] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:22 np0005466031 nova_compute[235803]: 2025-10-02 12:34:22.869 2 INFO nova.compute.manager [-] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Took 0.76 seconds to deallocate network for instance.#033[00m
Oct  2 08:34:22 np0005466031 nova_compute[235803]: 2025-10-02 12:34:22.912 2 DEBUG oslo_concurrency.lockutils [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:22 np0005466031 nova_compute[235803]: 2025-10-02 12:34:22.913 2 DEBUG oslo_concurrency.lockutils [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:22 np0005466031 nova_compute[235803]: 2025-10-02 12:34:22.941 2 DEBUG nova.compute.manager [req-d37c1d83-aafd-4401-9661-ff3c123be01d req-1ffd7635-b25a-4d71-8687-92635901ee33 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-vif-deleted-0661a8d8-df40-4903-ad8c-8e3f7549831e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:22 np0005466031 nova_compute[235803]: 2025-10-02 12:34:22.967 2 DEBUG oslo_concurrency.processutils [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:23.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/186885800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.432 2 DEBUG oslo_concurrency.processutils [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.438 2 DEBUG nova.compute.provider_tree [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.450 2 DEBUG nova.scheduler.client.report [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.469 2 DEBUG oslo_concurrency.lockutils [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.494 2 INFO nova.scheduler.client.report [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Deleted allocations for instance 6b2e9f39-f886-4e6d-939e-cba3d731a330
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.548 2 DEBUG oslo_concurrency.lockutils [None req-9a6f5e50-5303-4648-9219-d57473261e99 7a82e7dc296145a2981f82e64bc5c48e 812b0ca70f56429383e14031946e37e5 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.616 2 DEBUG nova.compute.manager [req-0eb16ff7-6e02-48e1-b121-05552f6b8d19 req-a52b01dd-3481-4cd8-b3c2-5274462140fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received event network-vif-plugged-0661a8d8-df40-4903-ad8c-8e3f7549831e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.616 2 DEBUG oslo_concurrency.lockutils [req-0eb16ff7-6e02-48e1-b121-05552f6b8d19 req-a52b01dd-3481-4cd8-b3c2-5274462140fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.617 2 DEBUG oslo_concurrency.lockutils [req-0eb16ff7-6e02-48e1-b121-05552f6b8d19 req-a52b01dd-3481-4cd8-b3c2-5274462140fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.617 2 DEBUG oslo_concurrency.lockutils [req-0eb16ff7-6e02-48e1-b121-05552f6b8d19 req-a52b01dd-3481-4cd8-b3c2-5274462140fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6b2e9f39-f886-4e6d-939e-cba3d731a330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.617 2 DEBUG nova.compute.manager [req-0eb16ff7-6e02-48e1-b121-05552f6b8d19 req-a52b01dd-3481-4cd8-b3c2-5274462140fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] No waiting events found dispatching network-vif-plugged-0661a8d8-df40-4903-ad8c-8e3f7549831e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.618 2 WARNING nova.compute.manager [req-0eb16ff7-6e02-48e1-b121-05552f6b8d19 req-a52b01dd-3481-4cd8-b3c2-5274462140fc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Received unexpected event network-vif-plugged-0661a8d8-df40-4903-ad8c-8e3f7549831e for instance with vm_state deleted and task_state None.
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:23 np0005466031 nova_compute[235803]: 2025-10-02 12:34:23.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:24.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:25.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:25.842 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:25.843 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:25.843 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:34:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:26.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:34:26 np0005466031 nova_compute[235803]: 2025-10-02 12:34:26.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:27 np0005466031 nova_compute[235803]: 2025-10-02 12:34:27.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:27.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:27 np0005466031 podman[271891]: 2025-10-02 12:34:27.644599727 +0000 UTC m=+0.070879344 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 08:34:27 np0005466031 podman[271890]: 2025-10-02 12:34:27.671525423 +0000 UTC m=+0.098641684 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:34:28 np0005466031 nova_compute[235803]: 2025-10-02 12:34:28.068 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:28 np0005466031 nova_compute[235803]: 2025-10-02 12:34:28.068 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:28 np0005466031 nova_compute[235803]: 2025-10-02 12:34:28.094 2 DEBUG nova.compute.manager [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:34:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:28.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:28 np0005466031 nova_compute[235803]: 2025-10-02 12:34:28.200 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:28 np0005466031 nova_compute[235803]: 2025-10-02 12:34:28.200 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:28 np0005466031 nova_compute[235803]: 2025-10-02 12:34:28.208 2 DEBUG nova.virt.hardware [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:34:28 np0005466031 nova_compute[235803]: 2025-10-02 12:34:28.208 2 INFO nova.compute.claims [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Claim successful on node compute-2.ctlplane.example.com
Oct  2 08:34:28 np0005466031 nova_compute[235803]: 2025-10-02 12:34:28.511 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:34:28 np0005466031 nova_compute[235803]: 2025-10-02 12:34:28.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/629050486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:28 np0005466031 nova_compute[235803]: 2025-10-02 12:34:28.975 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:34:28 np0005466031 nova_compute[235803]: 2025-10-02 12:34:28.983 2 DEBUG nova.compute.provider_tree [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.004 2 DEBUG nova.scheduler.client.report [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.024 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.025 2 DEBUG nova.compute.manager [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.074 2 DEBUG nova.compute.manager [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.075 2 DEBUG nova.network.neutron [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.099 2 INFO nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.121 2 DEBUG nova.compute.manager [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.230 2 DEBUG nova.compute.manager [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.231 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.232 2 INFO nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Creating image(s)
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.263 2 DEBUG nova.storage.rbd_utils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.302 2 DEBUG nova.storage.rbd_utils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:34:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:29.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.328 2 DEBUG nova.storage.rbd_utils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.331 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.332 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.388 2 DEBUG nova.policy [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c0d7f2725ce3440b9e998e6efddc4628', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b35e544965644721a29ebea7dd0cc74e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:29 np0005466031 nova_compute[235803]: 2025-10-02 12:34:29.689 2 DEBUG nova.virt.libvirt.imagebackend [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/52ef509e-0e22-464e-93c9-3ddcf574cd64/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/52ef509e-0e22-464e-93c9-3ddcf574cd64/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct  2 08:34:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:34:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:30.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:34:30 np0005466031 nova_compute[235803]: 2025-10-02 12:34:30.889 2 DEBUG nova.network.neutron [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Successfully created port: 6936fa44-52a7-42a8-aeae-c4c4b7681742 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.243 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.316 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.317 2 DEBUG nova.virt.images [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] 52ef509e-0e22-464e-93c9-3ddcf574cd64 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.319 2 DEBUG nova.privsep.utils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.319 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:34:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:31.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.519 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.524 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.598 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.599 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.620 2 DEBUG nova.storage.rbd_utils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.623 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.806 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.806 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.853 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.925 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.926 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.933 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:34:31 np0005466031 nova_compute[235803]: 2025-10-02 12:34:31.935 2 INFO nova.compute.claims [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.115 2 DEBUG nova.network.neutron [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Successfully updated port: 6936fa44-52a7-42a8-aeae-c4c4b7681742 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:34:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:32.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.137 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "refresh_cache-c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.138 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquired lock "refresh_cache-c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.138 2 DEBUG nova.network.neutron [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.153 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1778796044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.230 2 DEBUG nova.compute.manager [req-73da2062-2dcc-4b2f-b6a6-fd5afaefb6bc req-0e8bd10b-691b-436e-b3f0-f368a3ace7e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Received event network-changed-6936fa44-52a7-42a8-aeae-c4c4b7681742 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.231 2 DEBUG nova.compute.manager [req-73da2062-2dcc-4b2f-b6a6-fd5afaefb6bc req-0e8bd10b-691b-436e-b3f0-f368a3ace7e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Refreshing instance network info cache due to event network-changed-6936fa44-52a7-42a8-aeae-c4c4b7681742. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.231 2 DEBUG oslo_concurrency.lockutils [req-73da2062-2dcc-4b2f-b6a6-fd5afaefb6bc req-0e8bd10b-691b-436e-b3f0-f368a3ace7e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.367 2 DEBUG nova.network.neutron [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:34:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1785355022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.609 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.614 2 DEBUG nova.compute.provider_tree [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.631 2 DEBUG nova.scheduler.client.report [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.656 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.656 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.702 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.702 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.721 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.766 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.915 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.916 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.917 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Creating image(s)#033[00m
Oct  2 08:34:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1293371938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.941 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.965 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.985 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:32 np0005466031 nova_compute[235803]: 2025-10-02 12:34:32.987 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:33 np0005466031 nova_compute[235803]: 2025-10-02 12:34:33.016 2 DEBUG nova.policy [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34a9da53e0cc446593d0cea2f498c53e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:34:33 np0005466031 nova_compute[235803]: 2025-10-02 12:34:33.051 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:33 np0005466031 nova_compute[235803]: 2025-10-02 12:34:33.052 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:33 np0005466031 nova_compute[235803]: 2025-10-02 12:34:33.052 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:33 np0005466031 nova_compute[235803]: 2025-10-02 12:34:33.052 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:33 np0005466031 nova_compute[235803]: 2025-10-02 12:34:33.078 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:33 np0005466031 nova_compute[235803]: 2025-10-02 12:34:33.083 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:33 np0005466031 nova_compute[235803]: 2025-10-02 12:34:33.148 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:33 np0005466031 nova_compute[235803]: 2025-10-02 12:34:33.238 2 DEBUG nova.storage.rbd_utils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] resizing rbd image c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:34:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:33.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:33 np0005466031 nova_compute[235803]: 2025-10-02 12:34:33.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.113 2 DEBUG nova.objects.instance [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'migration_context' on Instance uuid c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.134 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.135 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Ensure instance console log exists: /var/lib/nova/instances/c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.135 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.135 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.136 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:34.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.392 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.452 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] resizing rbd image a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.616 2 DEBUG nova.network.neutron [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Updating instance_info_cache with network_info: [{"id": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "address": "fa:16:3e:03:0b:ec", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6936fa44-52", "ovs_interfaceid": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.640 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Releasing lock "refresh_cache-c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.641 2 DEBUG nova.compute.manager [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Instance network_info: |[{"id": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "address": "fa:16:3e:03:0b:ec", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6936fa44-52", "ovs_interfaceid": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.641 2 DEBUG oslo_concurrency.lockutils [req-73da2062-2dcc-4b2f-b6a6-fd5afaefb6bc req-0e8bd10b-691b-436e-b3f0-f368a3ace7e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.641 2 DEBUG nova.network.neutron [req-73da2062-2dcc-4b2f-b6a6-fd5afaefb6bc req-0e8bd10b-691b-436e-b3f0-f368a3ace7e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Refreshing network info cache for port 6936fa44-52a7-42a8-aeae-c4c4b7681742 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.644 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Start _get_guest_xml network_info=[{"id": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "address": "fa:16:3e:03:0b:ec", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6936fa44-52", "ovs_interfaceid": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '52ef509e-0e22-464e-93c9-3ddcf574cd64'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.684 2 DEBUG nova.objects.instance [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'migration_context' on Instance uuid a6c6f19d-ac72-4091-b1d5-cd8c6499bef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.687 2 WARNING nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.691 2 DEBUG nova.virt.libvirt.host [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.692 2 DEBUG nova.virt.libvirt.host [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.697 2 DEBUG nova.virt.libvirt.host [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.697 2 DEBUG nova.virt.libvirt.host [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.698 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.699 2 DEBUG nova.virt.hardware [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.699 2 DEBUG nova.virt.hardware [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.699 2 DEBUG nova.virt.hardware [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.699 2 DEBUG nova.virt.hardware [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.699 2 DEBUG nova.virt.hardware [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.700 2 DEBUG nova.virt.hardware [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.700 2 DEBUG nova.virt.hardware [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.700 2 DEBUG nova.virt.hardware [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.700 2 DEBUG nova.virt.hardware [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.700 2 DEBUG nova.virt.hardware [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.701 2 DEBUG nova.virt.hardware [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.703 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.727 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Successfully created port: 2a52488c-e574-430a-a6ef-056feb3051e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.732 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.733 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Ensure instance console log exists: /var/lib/nova/instances/a6c6f19d-ac72-4091-b1d5-cd8c6499bef5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.733 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.733 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:34 np0005466031 nova_compute[235803]: 2025-10-02 12:34:34.734 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:35 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3308431859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.137 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.168 2 DEBUG nova.storage.rbd_utils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.172 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:35.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:35 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3443077683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.635 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.636 2 DEBUG nova.virt.libvirt.vif [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-770672439',display_name='tempest-ListServerFiltersTestJSON-instance-770672439',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-770672439',id=86,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35e544965644721a29ebea7dd0cc74e',ramdisk_id='',reservation_id='r-sehv668s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-495234271',owner_user_name='tempest-ListSe
rverFiltersTestJSON-495234271-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:29Z,user_data=None,user_id='c0d7f2725ce3440b9e998e6efddc4628',uuid=c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "address": "fa:16:3e:03:0b:ec", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6936fa44-52", "ovs_interfaceid": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.636 2 DEBUG nova.network.os_vif_util [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converting VIF {"id": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "address": "fa:16:3e:03:0b:ec", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6936fa44-52", "ovs_interfaceid": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.637 2 DEBUG nova.network.os_vif_util [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:0b:ec,bridge_name='br-int',has_traffic_filtering=True,id=6936fa44-52a7-42a8-aeae-c4c4b7681742,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6936fa44-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.638 2 DEBUG nova.objects.instance [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'pci_devices' on Instance uuid c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.653 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  <uuid>c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c</uuid>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  <name>instance-00000056</name>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-770672439</nova:name>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:34:34</nova:creationTime>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <nova:user uuid="c0d7f2725ce3440b9e998e6efddc4628">tempest-ListServerFiltersTestJSON-495234271-project-member</nova:user>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <nova:project uuid="b35e544965644721a29ebea7dd0cc74e">tempest-ListServerFiltersTestJSON-495234271</nova:project>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <nova:port uuid="6936fa44-52a7-42a8-aeae-c4c4b7681742">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <entry name="serial">c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c</entry>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <entry name="uuid">c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c</entry>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk.config">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:03:0b:ec"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <target dev="tap6936fa44-52"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c/console.log" append="off"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:34:35 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:34:35 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:34:35 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:34:35 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.654 2 DEBUG nova.compute.manager [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Preparing to wait for external event network-vif-plugged-6936fa44-52a7-42a8-aeae-c4c4b7681742 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.655 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.655 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.655 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.656 2 DEBUG nova.virt.libvirt.vif [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-770672439',display_name='tempest-ListServerFiltersTestJSON-instance-770672439',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-770672439',id=86,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35e544965644721a29ebea7dd0cc74e',ramdisk_id='',reservation_id='r-sehv668s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-495234271',owner_user_name='tempest-ListServerFiltersTestJSON-495234271-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:29Z,user_data=None,user_id='c0d7f2725ce3440b9e998e6efddc4628',uuid=c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "address": "fa:16:3e:03:0b:ec", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6936fa44-52", "ovs_interfaceid": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.656 2 DEBUG nova.network.os_vif_util [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converting VIF {"id": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "address": "fa:16:3e:03:0b:ec", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6936fa44-52", "ovs_interfaceid": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.657 2 DEBUG nova.network.os_vif_util [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:0b:ec,bridge_name='br-int',has_traffic_filtering=True,id=6936fa44-52a7-42a8-aeae-c4c4b7681742,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6936fa44-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.657 2 DEBUG os_vif [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:0b:ec,bridge_name='br-int',has_traffic_filtering=True,id=6936fa44-52a7-42a8-aeae-c4c4b7681742,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6936fa44-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.658 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.659 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.662 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6936fa44-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.663 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6936fa44-52, col_values=(('external_ids', {'iface-id': '6936fa44-52a7-42a8-aeae-c4c4b7681742', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:0b:ec', 'vm-uuid': 'c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:35 np0005466031 NetworkManager[44907]: <info>  [1759408475.6653] manager: (tap6936fa44-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.670 2 INFO os_vif [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:0b:ec,bridge_name='br-int',has_traffic_filtering=True,id=6936fa44-52a7-42a8-aeae-c4c4b7681742,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6936fa44-52')#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.742 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.743 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.743 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] No VIF found with MAC fa:16:3e:03:0b:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.744 2 INFO nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Using config drive#033[00m
Oct  2 08:34:35 np0005466031 nova_compute[235803]: 2025-10-02 12:34:35.769 2 DEBUG nova.storage.rbd_utils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:36.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:36 np0005466031 nova_compute[235803]: 2025-10-02 12:34:36.437 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408461.4367683, 6b2e9f39-f886-4e6d-939e-cba3d731a330 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:36 np0005466031 nova_compute[235803]: 2025-10-02 12:34:36.437 2 INFO nova.compute.manager [-] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:34:36 np0005466031 nova_compute[235803]: 2025-10-02 12:34:36.456 2 DEBUG nova.compute.manager [None req-5434a885-a52c-492b-a423-541bdd35442e - - - - - -] [instance: 6b2e9f39-f886-4e6d-939e-cba3d731a330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:36 np0005466031 nova_compute[235803]: 2025-10-02 12:34:36.568 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Successfully updated port: 2a52488c-e574-430a-a6ef-056feb3051e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:34:36 np0005466031 nova_compute[235803]: 2025-10-02 12:34:36.599 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "refresh_cache-a6c6f19d-ac72-4091-b1d5-cd8c6499bef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:34:36 np0005466031 nova_compute[235803]: 2025-10-02 12:34:36.599 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquired lock "refresh_cache-a6c6f19d-ac72-4091-b1d5-cd8c6499bef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:34:36 np0005466031 nova_compute[235803]: 2025-10-02 12:34:36.599 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:34:36 np0005466031 nova_compute[235803]: 2025-10-02 12:34:36.876 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.125 2 DEBUG nova.compute.manager [req-6fa41750-e1d2-4993-a208-8699da78bbf4 req-e35d36c5-c9f0-41b5-9a86-eedb00556a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Received event network-changed-2a52488c-e574-430a-a6ef-056feb3051e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.125 2 DEBUG nova.compute.manager [req-6fa41750-e1d2-4993-a208-8699da78bbf4 req-e35d36c5-c9f0-41b5-9a86-eedb00556a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Refreshing instance network info cache due to event network-changed-2a52488c-e574-430a-a6ef-056feb3051e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.126 2 DEBUG oslo_concurrency.lockutils [req-6fa41750-e1d2-4993-a208-8699da78bbf4 req-e35d36c5-c9f0-41b5-9a86-eedb00556a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a6c6f19d-ac72-4091-b1d5-cd8c6499bef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:34:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:37.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.336 2 INFO nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Creating config drive at /var/lib/nova/instances/c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c/disk.config#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.342 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp03__uw53 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.476 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp03__uw53" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.513 2 DEBUG nova.storage.rbd_utils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] rbd image c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.517 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c/disk.config c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.871 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Updating instance_info_cache with network_info: [{"id": "2a52488c-e574-430a-a6ef-056feb3051e3", "address": "fa:16:3e:fe:7d:78", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a52488c-e5", "ovs_interfaceid": "2a52488c-e574-430a-a6ef-056feb3051e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.954 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Releasing lock "refresh_cache-a6c6f19d-ac72-4091-b1d5-cd8c6499bef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.955 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Instance network_info: |[{"id": "2a52488c-e574-430a-a6ef-056feb3051e3", "address": "fa:16:3e:fe:7d:78", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a52488c-e5", "ovs_interfaceid": "2a52488c-e574-430a-a6ef-056feb3051e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.956 2 DEBUG oslo_concurrency.lockutils [req-6fa41750-e1d2-4993-a208-8699da78bbf4 req-e35d36c5-c9f0-41b5-9a86-eedb00556a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a6c6f19d-ac72-4091-b1d5-cd8c6499bef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.957 2 DEBUG nova.network.neutron [req-6fa41750-e1d2-4993-a208-8699da78bbf4 req-e35d36c5-c9f0-41b5-9a86-eedb00556a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Refreshing network info cache for port 2a52488c-e574-430a-a6ef-056feb3051e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.962 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Start _get_guest_xml network_info=[{"id": "2a52488c-e574-430a-a6ef-056feb3051e3", "address": "fa:16:3e:fe:7d:78", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a52488c-e5", "ovs_interfaceid": "2a52488c-e574-430a-a6ef-056feb3051e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.970 2 WARNING nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.974 2 DEBUG nova.virt.libvirt.host [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.976 2 DEBUG nova.virt.libvirt.host [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.979 2 DEBUG nova.virt.libvirt.host [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.979 2 DEBUG nova.virt.libvirt.host [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.980 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.980 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.981 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.981 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.981 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.981 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.981 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.982 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.982 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.982 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.982 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.982 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:34:37 np0005466031 nova_compute[235803]: 2025-10-02 12:34:37.985 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.021 2 DEBUG nova.network.neutron [req-73da2062-2dcc-4b2f-b6a6-fd5afaefb6bc req-0e8bd10b-691b-436e-b3f0-f368a3ace7e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Updated VIF entry in instance network info cache for port 6936fa44-52a7-42a8-aeae-c4c4b7681742. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.022 2 DEBUG nova.network.neutron [req-73da2062-2dcc-4b2f-b6a6-fd5afaefb6bc req-0e8bd10b-691b-436e-b3f0-f368a3ace7e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Updating instance_info_cache with network_info: [{"id": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "address": "fa:16:3e:03:0b:ec", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6936fa44-52", "ovs_interfaceid": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.068 2 DEBUG oslo_concurrency.lockutils [req-73da2062-2dcc-4b2f-b6a6-fd5afaefb6bc req-0e8bd10b-691b-436e-b3f0-f368a3ace7e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:34:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:34:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:38.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.178 2 DEBUG oslo_concurrency.processutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c/disk.config c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.180 2 INFO nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Deleting local config drive /var/lib/nova/instances/c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c/disk.config because it was imported into RBD.#033[00m
Oct  2 08:34:38 np0005466031 kernel: tap6936fa44-52: entered promiscuous mode
Oct  2 08:34:38 np0005466031 NetworkManager[44907]: <info>  [1759408478.2355] manager: (tap6936fa44-52): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Oct  2 08:34:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:38Z|00290|binding|INFO|Claiming lport 6936fa44-52a7-42a8-aeae-c4c4b7681742 for this chassis.
Oct  2 08:34:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:38Z|00291|binding|INFO|6936fa44-52a7-42a8-aeae-c4c4b7681742: Claiming fa:16:3e:03:0b:ec 10.100.0.5
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:38 np0005466031 systemd-udevd[272532]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:34:38 np0005466031 systemd-machined[192227]: New machine qemu-33-instance-00000056.
Oct  2 08:34:38 np0005466031 NetworkManager[44907]: <info>  [1759408478.2803] device (tap6936fa44-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:34:38 np0005466031 NetworkManager[44907]: <info>  [1759408478.2815] device (tap6936fa44-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:34:38 np0005466031 systemd[1]: Started Virtual Machine qemu-33-instance-00000056.
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:38Z|00292|binding|INFO|Setting lport 6936fa44-52a7-42a8-aeae-c4c4b7681742 ovn-installed in OVS
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:38Z|00293|binding|INFO|Setting lport 6936fa44-52a7-42a8-aeae-c4c4b7681742 up in Southbound
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.318 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:0b:ec 10.100.0.5'], port_security=['fa:16:3e:03:0b:ec 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35e544965644721a29ebea7dd0cc74e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc6adb97-938b-4809-a20b-8e2efe39ddba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c769c030-e38a-4799-8979-0a203014e262, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=6936fa44-52a7-42a8-aeae-c4c4b7681742) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.319 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 6936fa44-52a7-42a8-aeae-c4c4b7681742 in datapath 4dd1e489-9cc3-4420-8577-3a250b110c9a bound to our chassis#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.320 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4dd1e489-9cc3-4420-8577-3a250b110c9a#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.334 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8be81c7b-47be-4913-9fb5-2984b9fa3967]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.335 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4dd1e489-91 in ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.337 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4dd1e489-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.337 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ebd2c5-72c5-4b09-8c07-de7d1afe8108]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.337 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7191989e-049a-422c-9ae1-5842d26638ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.350 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4bbc95-2de9-469d-9eac-9665b1b95332]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.371 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[47dbf602-8e21-4811-98be-8a5701938fb1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.399 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[3df4fc45-783e-4118-b6f7-ee5f67e750a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 NetworkManager[44907]: <info>  [1759408478.4051] manager: (tap4dd1e489-90): new Veth device (/org/freedesktop/NetworkManager/Devices/146)
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.403 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[41f51e41-c006-4b07-88ae-bb08e932bd36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.442 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a341f4-a47e-4101-8c7f-6bc223252aac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.445 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[88babafa-a0d0-4974-b4c9-6f18ce4faced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 NetworkManager[44907]: <info>  [1759408478.4679] device (tap4dd1e489-90): carrier: link connected
Oct  2 08:34:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:38 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1600688275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.473 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f0aeef37-829d-48f6-9115-e098303d1121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.490 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c80ed831-bed6-49ac-9989-b62e23807dd2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dd1e489-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633405, 'reachable_time': 28219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272567, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.491 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.505 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[71e32aab-1b6c-4e00-8143-789861432aa5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:9264'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633405, 'tstamp': 633405}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272568, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.519 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.522 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[dc63c58e-1b19-45d5-affa-afddabdf921e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dd1e489-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633405, 'reachable_time': 28219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272584, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.526 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.551 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[60d4c25f-1a1f-4021-898d-79a790f42c9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.617 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7dfba004-0b6e-4ff1-8ce5-31d31e65ed6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.618 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dd1e489-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.618 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.619 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4dd1e489-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:38 np0005466031 NetworkManager[44907]: <info>  [1759408478.6212] manager: (tap4dd1e489-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Oct  2 08:34:38 np0005466031 kernel: tap4dd1e489-90: entered promiscuous mode
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.623 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4dd1e489-90, col_values=(('external_ids', {'iface-id': 'a25f83dc-1cb4-467b-80c7-496d6a4bfac5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:38Z|00294|binding|INFO|Releasing lport a25f83dc-1cb4-467b-80c7-496d6a4bfac5 from this chassis (sb_readonly=0)
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.641 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4dd1e489-9cc3-4420-8577-3a250b110c9a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4dd1e489-9cc3-4420-8577-3a250b110c9a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.642 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc31577-61f5-48af-9bfe-028bfe82a780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.643 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-4dd1e489-9cc3-4420-8577-3a250b110c9a
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/4dd1e489-9cc3-4420-8577-3a250b110c9a.pid.haproxy
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 4dd1e489-9cc3-4420-8577-3a250b110c9a
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:34:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:38.643 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'env', 'PROCESS_TAG=haproxy-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4dd1e489-9cc3-4420-8577-3a250b110c9a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:34:38 np0005466031 nova_compute[235803]: 2025-10-02 12:34:38.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1102977573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:39 np0005466031 podman[272680]: 2025-10-02 12:34:39.024048386 +0000 UTC m=+0.053584276 container create 38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:34:39 np0005466031 systemd[1]: Started libpod-conmon-38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9.scope.
Oct  2 08:34:39 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:34:39 np0005466031 podman[272680]: 2025-10-02 12:34:38.999646852 +0000 UTC m=+0.029182762 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:34:39 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad3f4b76021fba75374398f3fd0b44bf1dea9798fcb6a8286dc825d4b56e3ef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:34:39 np0005466031 podman[272680]: 2025-10-02 12:34:39.112778713 +0000 UTC m=+0.142314633 container init 38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.117 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:39 np0005466031 podman[272680]: 2025-10-02 12:34:39.118409355 +0000 UTC m=+0.147945245 container start 38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.119 2 DEBUG nova.virt.libvirt.vif [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1446553806',display_name='tempest-tempest.common.compute-instance-1446553806-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1446553806-1',id=88,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-onpz3ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-MultipleCreateTestJSON-1074010337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:32Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=a6c6f19d-ac72-4091-b1d5-cd8c6499bef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a52488c-e574-430a-a6ef-056feb3051e3", "address": "fa:16:3e:fe:7d:78", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a52488c-e5", "ovs_interfaceid": "2a52488c-e574-430a-a6ef-056feb3051e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.120 2 DEBUG nova.network.os_vif_util [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "2a52488c-e574-430a-a6ef-056feb3051e3", "address": "fa:16:3e:fe:7d:78", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a52488c-e5", "ovs_interfaceid": "2a52488c-e574-430a-a6ef-056feb3051e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.120 2 DEBUG nova.network.os_vif_util [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:7d:78,bridge_name='br-int',has_traffic_filtering=True,id=2a52488c-e574-430a-a6ef-056feb3051e3,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a52488c-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.122 2 DEBUG nova.objects.instance [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'pci_devices' on Instance uuid a6c6f19d-ac72-4091-b1d5-cd8c6499bef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.138 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  <uuid>a6c6f19d-ac72-4091-b1d5-cd8c6499bef5</uuid>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  <name>instance-00000058</name>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <nova:name>tempest-tempest.common.compute-instance-1446553806-1</nova:name>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:34:37</nova:creationTime>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <nova:user uuid="34a9da53e0cc446593d0cea2f498c53e">tempest-MultipleCreateTestJSON-1074010337-project-member</nova:user>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <nova:project uuid="ed58e2bfccb04353b29ae652cfed3546">tempest-MultipleCreateTestJSON-1074010337</nova:project>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <nova:port uuid="2a52488c-e574-430a-a6ef-056feb3051e3">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <entry name="serial">a6c6f19d-ac72-4091-b1d5-cd8c6499bef5</entry>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <entry name="uuid">a6c6f19d-ac72-4091-b1d5-cd8c6499bef5</entry>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk.config">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:fe:7d:78"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <target dev="tap2a52488c-e5"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/a6c6f19d-ac72-4091-b1d5-cd8c6499bef5/console.log" append="off"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:34:39 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:34:39 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:34:39 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:34:39 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.140 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Preparing to wait for external event network-vif-plugged-2a52488c-e574-430a-a6ef-056feb3051e3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.140 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.140 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.141 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.141 2 DEBUG nova.virt.libvirt.vif [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1446553806',display_name='tempest-tempest.common.compute-instance-1446553806-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1446553806-1',id=88,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-onpz3ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-MultipleCreateTestJSON-1074010337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:32Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=a6c6f19d-ac72-4091-b1d5-cd8c6499bef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a52488c-e574-430a-a6ef-056feb3051e3", "address": "fa:16:3e:fe:7d:78", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a52488c-e5", "ovs_interfaceid": "2a52488c-e574-430a-a6ef-056feb3051e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.142 2 DEBUG nova.network.os_vif_util [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "2a52488c-e574-430a-a6ef-056feb3051e3", "address": "fa:16:3e:fe:7d:78", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a52488c-e5", "ovs_interfaceid": "2a52488c-e574-430a-a6ef-056feb3051e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.142 2 DEBUG nova.network.os_vif_util [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:7d:78,bridge_name='br-int',has_traffic_filtering=True,id=2a52488c-e574-430a-a6ef-056feb3051e3,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a52488c-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.145 2 DEBUG os_vif [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:7d:78,bridge_name='br-int',has_traffic_filtering=True,id=2a52488c-e574-430a-a6ef-056feb3051e3,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a52488c-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.146 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.146 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.151 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a52488c-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.152 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a52488c-e5, col_values=(('external_ids', {'iface-id': '2a52488c-e574-430a-a6ef-056feb3051e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:7d:78', 'vm-uuid': 'a6c6f19d-ac72-4091-b1d5-cd8c6499bef5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:39 np0005466031 NetworkManager[44907]: <info>  [1759408479.1539] manager: (tap2a52488c-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 08:34:39 np0005466031 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[272696]: [NOTICE]   (272701) : New worker (272704) forked
Oct  2 08:34:39 np0005466031 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[272696]: [NOTICE]   (272701) : Loading success.
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.159 2 INFO os_vif [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:7d:78,bridge_name='br-int',has_traffic_filtering=True,id=2a52488c-e574-430a-a6ef-056feb3051e3,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a52488c-e5')
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.236 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.237 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.237 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No VIF found with MAC fa:16:3e:fe:7d:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.238 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Using config drive
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.268 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.322 2 DEBUG nova.compute.manager [req-507208be-c835-4f4d-afc2-71f11bdd1a19 req-d63215d9-b53a-44f1-883e-99e4f4f76172 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Received event network-vif-plugged-6936fa44-52a7-42a8-aeae-c4c4b7681742 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.323 2 DEBUG oslo_concurrency.lockutils [req-507208be-c835-4f4d-afc2-71f11bdd1a19 req-d63215d9-b53a-44f1-883e-99e4f4f76172 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.323 2 DEBUG oslo_concurrency.lockutils [req-507208be-c835-4f4d-afc2-71f11bdd1a19 req-d63215d9-b53a-44f1-883e-99e4f4f76172 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.323 2 DEBUG oslo_concurrency.lockutils [req-507208be-c835-4f4d-afc2-71f11bdd1a19 req-d63215d9-b53a-44f1-883e-99e4f4f76172 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.324 2 DEBUG nova.compute.manager [req-507208be-c835-4f4d-afc2-71f11bdd1a19 req-d63215d9-b53a-44f1-883e-99e4f4f76172 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Processing event network-vif-plugged-6936fa44-52a7-42a8-aeae-c4c4b7681742 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.324 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408479.3231387, c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.324 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] VM Started (Lifecycle Event)
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.327 2 DEBUG nova.compute.manager [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.330 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.334 2 INFO nova.virt.libvirt.driver [-] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Instance spawned successfully.
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.335 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:34:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:39.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.354 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.358 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.359 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.359 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.360 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.360 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.360 2 DEBUG nova.virt.libvirt.driver [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.364 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.398 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.399 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408479.3240736, c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.399 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] VM Paused (Lifecycle Event)
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.432 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.437 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408479.329459, c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.437 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] VM Resumed (Lifecycle Event)
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.446 2 INFO nova.compute.manager [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Took 10.22 seconds to spawn the instance on the hypervisor.
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.446 2 DEBUG nova.compute.manager [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.480 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.484 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.511 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.538 2 INFO nova.compute.manager [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Took 11.39 seconds to build instance.
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.561 2 DEBUG oslo_concurrency.lockutils [None req-b8eb06f3-75d1-4d02-8c49-ba9bdc3b7d17 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.656 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Creating config drive at /var/lib/nova/instances/a6c6f19d-ac72-4091-b1d5-cd8c6499bef5/disk.config
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.661 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6c6f19d-ac72-4091-b1d5-cd8c6499bef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv43dn1av execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.798 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6c6f19d-ac72-4091-b1d5-cd8c6499bef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv43dn1av" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.834 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:34:39 np0005466031 nova_compute[235803]: 2025-10-02 12:34:39.838 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6c6f19d-ac72-4091-b1d5-cd8c6499bef5/disk.config a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:34:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:34:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:40.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:34:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:34:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:41.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.397 2 DEBUG nova.compute.manager [req-66f74e0b-6c95-4a80-bb26-8b2ee40ec44f req-17a4763a-507c-49f5-8460-df413dffca80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Received event network-vif-plugged-6936fa44-52a7-42a8-aeae-c4c4b7681742 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.397 2 DEBUG oslo_concurrency.lockutils [req-66f74e0b-6c95-4a80-bb26-8b2ee40ec44f req-17a4763a-507c-49f5-8460-df413dffca80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.397 2 DEBUG oslo_concurrency.lockutils [req-66f74e0b-6c95-4a80-bb26-8b2ee40ec44f req-17a4763a-507c-49f5-8460-df413dffca80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.398 2 DEBUG oslo_concurrency.lockutils [req-66f74e0b-6c95-4a80-bb26-8b2ee40ec44f req-17a4763a-507c-49f5-8460-df413dffca80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.398 2 DEBUG nova.compute.manager [req-66f74e0b-6c95-4a80-bb26-8b2ee40ec44f req-17a4763a-507c-49f5-8460-df413dffca80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] No waiting events found dispatching network-vif-plugged-6936fa44-52a7-42a8-aeae-c4c4b7681742 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.398 2 WARNING nova.compute.manager [req-66f74e0b-6c95-4a80-bb26-8b2ee40ec44f req-17a4763a-507c-49f5-8460-df413dffca80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Received unexpected event network-vif-plugged-6936fa44-52a7-42a8-aeae-c4c4b7681742 for instance with vm_state active and task_state None.
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.432 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6c6f19d-ac72-4091-b1d5-cd8c6499bef5/disk.config a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.433 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Deleting local config drive /var/lib/nova/instances/a6c6f19d-ac72-4091-b1d5-cd8c6499bef5/disk.config because it was imported into RBD.
Oct  2 08:34:41 np0005466031 NetworkManager[44907]: <info>  [1759408481.4836] manager: (tap2a52488c-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Oct  2 08:34:41 np0005466031 kernel: tap2a52488c-e5: entered promiscuous mode
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:41 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:41Z|00295|binding|INFO|Claiming lport 2a52488c-e574-430a-a6ef-056feb3051e3 for this chassis.
Oct  2 08:34:41 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:41Z|00296|binding|INFO|2a52488c-e574-430a-a6ef-056feb3051e3: Claiming fa:16:3e:fe:7d:78 10.100.0.5
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.500 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:7d:78 10.100.0.5'], port_security=['fa:16:3e:fe:7d:78 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a6c6f19d-ac72-4091-b1d5-cd8c6499bef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8afb1137-daba-41cb-976b-5cc3e880408c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1c0c736-25fb-4965-98a7-04a85ae45126, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=2a52488c-e574-430a-a6ef-056feb3051e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.502 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 2a52488c-e574-430a-a6ef-056feb3051e3 in datapath 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 bound to our chassis
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.503 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.514 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1a86fe6b-e4f6-49d4-b3f7-94dfe5cf079d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.516 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap885ece2c-b1 in ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.517 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap885ece2c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.517 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e09d55-b9cf-4ae5-a6e9-dbdca7f6d96c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.518 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8df25157-aa6c-4dfa-a0c8-a1c3dd40b82a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:34:41 np0005466031 systemd-udevd[272788]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.534 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[b7365e5a-9242-4d33-9338-e0b1899047fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:34:41 np0005466031 systemd-machined[192227]: New machine qemu-34-instance-00000058.
Oct  2 08:34:41 np0005466031 NetworkManager[44907]: <info>  [1759408481.5403] device (tap2a52488c-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:34:41 np0005466031 NetworkManager[44907]: <info>  [1759408481.5414] device (tap2a52488c-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.551 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[92d4606c-7814-4831-98c0-7607f20cf15f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:34:41 np0005466031 systemd[1]: Started Virtual Machine qemu-34-instance-00000058.
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:41 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:41Z|00297|binding|INFO|Setting lport 2a52488c-e574-430a-a6ef-056feb3051e3 ovn-installed in OVS
Oct  2 08:34:41 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:41Z|00298|binding|INFO|Setting lport 2a52488c-e574-430a-a6ef-056feb3051e3 up in Southbound
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.590 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[261e6ace-7cbd-48d7-818d-5506f7f76e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:34:41 np0005466031 NetworkManager[44907]: <info>  [1759408481.5962] manager: (tap885ece2c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/150)
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.595 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a095b5-1aed-4d9f-b844-beac72b38ad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.636 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[475f66b5-990f-4140-94a0-6374633eb9a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.639 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f042f55c-3ae9-4c3a-ae59-cfe1b0118fdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.642 2 DEBUG nova.network.neutron [req-6fa41750-e1d2-4993-a208-8699da78bbf4 req-e35d36c5-c9f0-41b5-9a86-eedb00556a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Updated VIF entry in instance network info cache for port 2a52488c-e574-430a-a6ef-056feb3051e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.642 2 DEBUG nova.network.neutron [req-6fa41750-e1d2-4993-a208-8699da78bbf4 req-e35d36c5-c9f0-41b5-9a86-eedb00556a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Updating instance_info_cache with network_info: [{"id": "2a52488c-e574-430a-a6ef-056feb3051e3", "address": "fa:16:3e:fe:7d:78", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a52488c-e5", "ovs_interfaceid": "2a52488c-e574-430a-a6ef-056feb3051e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:34:41 np0005466031 NetworkManager[44907]: <info>  [1759408481.6608] device (tap885ece2c-b0): carrier: link connected
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.662 2 DEBUG oslo_concurrency.lockutils [req-6fa41750-e1d2-4993-a208-8699da78bbf4 req-e35d36c5-c9f0-41b5-9a86-eedb00556a45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a6c6f19d-ac72-4091-b1d5-cd8c6499bef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.664 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ed456a-551d-4158-b727-c793620a0bbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.681 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ac15eb29-0491-4e08-a625-343339f73e2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap885ece2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:58:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633725, 'reachable_time': 29936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272819, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.696 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[549f1fa8-b3d2-4f19-8005-53eddc1186e4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:5893'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633725, 'tstamp': 633725}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272820, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.712 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[febe2e74-7299-447b-85fb-49275d4aeb3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap885ece2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:58:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633725, 'reachable_time': 29936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272821, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.746 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2491ddf0-8fb8-48cd-84a9-41300bc6666d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.808 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[939fa1cd-100d-4bd8-902c-fa941292562a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.809 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap885ece2c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.810 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.810 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap885ece2c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:41 np0005466031 NetworkManager[44907]: <info>  [1759408481.8121] manager: (tap885ece2c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Oct  2 08:34:41 np0005466031 kernel: tap885ece2c-b0: entered promiscuous mode
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.823 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap885ece2c-b0, col_values=(('external_ids', {'iface-id': '24355553-27f6-4ebd-99c0-4f861ce0339d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:41 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:41Z|00299|binding|INFO|Releasing lport 24355553-27f6-4ebd-99c0-4f861ce0339d from this chassis (sb_readonly=0)
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.827 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.828 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ff78879d-b39f-4755-a974-3c524de1b15a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.829 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:34:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:41.829 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'env', 'PROCESS_TAG=haproxy-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:34:41 np0005466031 nova_compute[235803]: 2025-10-02 12:34:41.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.035 2 DEBUG nova.compute.manager [req-bbf3a1a7-689a-4d6e-a238-9e8277fdef3f req-de34c02f-9ee8-4b45-8fde-69a569d95318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Received event network-vif-plugged-2a52488c-e574-430a-a6ef-056feb3051e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.035 2 DEBUG oslo_concurrency.lockutils [req-bbf3a1a7-689a-4d6e-a238-9e8277fdef3f req-de34c02f-9ee8-4b45-8fde-69a569d95318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.041 2 DEBUG oslo_concurrency.lockutils [req-bbf3a1a7-689a-4d6e-a238-9e8277fdef3f req-de34c02f-9ee8-4b45-8fde-69a569d95318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.041 2 DEBUG oslo_concurrency.lockutils [req-bbf3a1a7-689a-4d6e-a238-9e8277fdef3f req-de34c02f-9ee8-4b45-8fde-69a569d95318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.041 2 DEBUG nova.compute.manager [req-bbf3a1a7-689a-4d6e-a238-9e8277fdef3f req-de34c02f-9ee8-4b45-8fde-69a569d95318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Processing event network-vif-plugged-2a52488c-e574-430a-a6ef-056feb3051e3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:34:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:34:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:42.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:34:42 np0005466031 podman[272877]: 2025-10-02 12:34:42.201580178 +0000 UTC m=+0.057907200 container create 87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:34:42 np0005466031 systemd[1]: Started libpod-conmon-87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869.scope.
Oct  2 08:34:42 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:34:42 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662ed61440151b6e3cf0d1a07aae7d4e07e8e0e6c23f84f49489c90397cffe14/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:34:42 np0005466031 podman[272877]: 2025-10-02 12:34:42.173988343 +0000 UTC m=+0.030315395 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:34:42 np0005466031 podman[272877]: 2025-10-02 12:34:42.284963572 +0000 UTC m=+0.141290614 container init 87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:34:42 np0005466031 podman[272877]: 2025-10-02 12:34:42.290636595 +0000 UTC m=+0.146963607 container start 87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:34:42 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[272909]: [NOTICE]   (272913) : New worker (272915) forked
Oct  2 08:34:42 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[272909]: [NOTICE]   (272913) : Loading success.
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.644 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.644 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408482.643653, a6c6f19d-ac72-4091-b1d5-cd8c6499bef5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.645 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] VM Started (Lifecycle Event)#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.647 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.650 2 INFO nova.virt.libvirt.driver [-] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Instance spawned successfully.#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.651 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.677 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.683 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.687 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.687 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.687 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.688 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.688 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.688 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.730 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.731 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408482.6445217, a6c6f19d-ac72-4091-b1d5-cd8c6499bef5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.731 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.758 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.761 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408482.647243, a6c6f19d-ac72-4091-b1d5-cd8c6499bef5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.761 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.791 2 INFO nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Took 9.88 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.792 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.799 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.801 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.847 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.892 2 INFO nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Took 10.99 seconds to build instance.#033[00m
Oct  2 08:34:42 np0005466031 nova_compute[235803]: 2025-10-02 12:34:42.963 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/810285189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 08:34:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:43.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 08:34:43 np0005466031 nova_compute[235803]: 2025-10-02 12:34:43.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:44.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:44 np0005466031 nova_compute[235803]: 2025-10-02 12:34:44.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:44 np0005466031 nova_compute[235803]: 2025-10-02 12:34:44.179 2 DEBUG nova.compute.manager [req-229e7eee-6592-423e-ad07-95495cbb41c4 req-64e891ac-c2f0-43c0-9fba-3122df104ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Received event network-vif-plugged-2a52488c-e574-430a-a6ef-056feb3051e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:44 np0005466031 nova_compute[235803]: 2025-10-02 12:34:44.179 2 DEBUG oslo_concurrency.lockutils [req-229e7eee-6592-423e-ad07-95495cbb41c4 req-64e891ac-c2f0-43c0-9fba-3122df104ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:44 np0005466031 nova_compute[235803]: 2025-10-02 12:34:44.180 2 DEBUG oslo_concurrency.lockutils [req-229e7eee-6592-423e-ad07-95495cbb41c4 req-64e891ac-c2f0-43c0-9fba-3122df104ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:44 np0005466031 nova_compute[235803]: 2025-10-02 12:34:44.180 2 DEBUG oslo_concurrency.lockutils [req-229e7eee-6592-423e-ad07-95495cbb41c4 req-64e891ac-c2f0-43c0-9fba-3122df104ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:44 np0005466031 nova_compute[235803]: 2025-10-02 12:34:44.180 2 DEBUG nova.compute.manager [req-229e7eee-6592-423e-ad07-95495cbb41c4 req-64e891ac-c2f0-43c0-9fba-3122df104ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] No waiting events found dispatching network-vif-plugged-2a52488c-e574-430a-a6ef-056feb3051e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:44 np0005466031 nova_compute[235803]: 2025-10-02 12:34:44.181 2 WARNING nova.compute.manager [req-229e7eee-6592-423e-ad07-95495cbb41c4 req-64e891ac-c2f0-43c0-9fba-3122df104ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Received unexpected event network-vif-plugged-2a52488c-e574-430a-a6ef-056feb3051e3 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:34:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:45.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:46.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:34:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:47.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:34:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:48.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.694 2 DEBUG oslo_concurrency.lockutils [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.695 2 DEBUG oslo_concurrency.lockutils [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.695 2 DEBUG oslo_concurrency.lockutils [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.695 2 DEBUG oslo_concurrency.lockutils [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.695 2 DEBUG oslo_concurrency.lockutils [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.696 2 INFO nova.compute.manager [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Terminating instance#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.697 2 DEBUG nova.compute.manager [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:34:48 np0005466031 kernel: tap2a52488c-e5 (unregistering): left promiscuous mode
Oct  2 08:34:48 np0005466031 NetworkManager[44907]: <info>  [1759408488.7443] device (tap2a52488c-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:48Z|00300|binding|INFO|Releasing lport 2a52488c-e574-430a-a6ef-056feb3051e3 from this chassis (sb_readonly=0)
Oct  2 08:34:48 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:48Z|00301|binding|INFO|Setting lport 2a52488c-e574-430a-a6ef-056feb3051e3 down in Southbound
Oct  2 08:34:48 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:48Z|00302|binding|INFO|Removing iface tap2a52488c-e5 ovn-installed in OVS
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:48.760 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:7d:78 10.100.0.5'], port_security=['fa:16:3e:fe:7d:78 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a6c6f19d-ac72-4091-b1d5-cd8c6499bef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8afb1137-daba-41cb-976b-5cc3e880408c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1c0c736-25fb-4965-98a7-04a85ae45126, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=2a52488c-e574-430a-a6ef-056feb3051e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:48.761 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 2a52488c-e574-430a-a6ef-056feb3051e3 in datapath 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 unbound from our chassis#033[00m
Oct  2 08:34:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:48.762 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:34:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:48.763 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[411d2bea-d0a5-4af5-b956-2650ac819359]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:48.764 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 namespace which is not needed anymore#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005466031 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000058.scope: Deactivated successfully.
Oct  2 08:34:48 np0005466031 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000058.scope: Consumed 6.927s CPU time.
Oct  2 08:34:48 np0005466031 systemd-machined[192227]: Machine qemu-34-instance-00000058 terminated.
Oct  2 08:34:48 np0005466031 podman[272927]: 2025-10-02 12:34:48.842831773 +0000 UTC m=+0.064774958 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:34:48 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[272909]: [NOTICE]   (272913) : haproxy version is 2.8.14-c23fe91
Oct  2 08:34:48 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[272909]: [NOTICE]   (272913) : path to executable is /usr/sbin/haproxy
Oct  2 08:34:48 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[272909]: [WARNING]  (272913) : Exiting Master process...
Oct  2 08:34:48 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[272909]: [WARNING]  (272913) : Exiting Master process...
Oct  2 08:34:48 np0005466031 podman[272930]: 2025-10-02 12:34:48.886859302 +0000 UTC m=+0.092417865 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:34:48 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[272909]: [ALERT]    (272913) : Current worker (272915) exited with code 143 (Terminated)
Oct  2 08:34:48 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[272909]: [WARNING]  (272913) : All workers exited. Exiting... (0)
Oct  2 08:34:48 np0005466031 systemd[1]: libpod-87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869.scope: Deactivated successfully.
Oct  2 08:34:48 np0005466031 podman[272984]: 2025-10-02 12:34:48.898022723 +0000 UTC m=+0.045788230 container died 87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.940 2 INFO nova.virt.libvirt.driver [-] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Instance destroyed successfully.#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.941 2 DEBUG nova.objects.instance [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'resources' on Instance uuid a6c6f19d-ac72-4091-b1d5-cd8c6499bef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.953 2 DEBUG nova.virt.libvirt.vif [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1446553806',display_name='tempest-tempest.common.compute-instance-1446553806-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1446553806-1',id=88,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-onpz3ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-MultipleCreateTestJSON-1074010337-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:42Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=a6c6f19d-ac72-4091-b1d5-cd8c6499bef5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a52488c-e574-430a-a6ef-056feb3051e3", "address": "fa:16:3e:fe:7d:78", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a52488c-e5", "ovs_interfaceid": "2a52488c-e574-430a-a6ef-056feb3051e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.956 2 DEBUG nova.network.os_vif_util [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "2a52488c-e574-430a-a6ef-056feb3051e3", "address": "fa:16:3e:fe:7d:78", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a52488c-e5", "ovs_interfaceid": "2a52488c-e574-430a-a6ef-056feb3051e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.957 2 DEBUG nova.network.os_vif_util [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:7d:78,bridge_name='br-int',has_traffic_filtering=True,id=2a52488c-e574-430a-a6ef-056feb3051e3,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a52488c-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.958 2 DEBUG os_vif [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:7d:78,bridge_name='br-int',has_traffic_filtering=True,id=2a52488c-e574-430a-a6ef-056feb3051e3,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a52488c-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a52488c-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:34:48 np0005466031 nova_compute[235803]: 2025-10-02 12:34:48.967 2 INFO os_vif [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:7d:78,bridge_name='br-int',has_traffic_filtering=True,id=2a52488c-e574-430a-a6ef-056feb3051e3,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a52488c-e5')#033[00m
Oct  2 08:34:48 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869-userdata-shm.mount: Deactivated successfully.
Oct  2 08:34:48 np0005466031 systemd[1]: var-lib-containers-storage-overlay-662ed61440151b6e3cf0d1a07aae7d4e07e8e0e6c23f84f49489c90397cffe14-merged.mount: Deactivated successfully.
Oct  2 08:34:48 np0005466031 podman[272984]: 2025-10-02 12:34:48.994652918 +0000 UTC m=+0.142418415 container cleanup 87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:34:49 np0005466031 systemd[1]: libpod-conmon-87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869.scope: Deactivated successfully.
Oct  2 08:34:49 np0005466031 podman[273046]: 2025-10-02 12:34:49.086033242 +0000 UTC m=+0.069373240 container remove 87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:34:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:49.095 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f90740-f6a3-48fb-a8f3-a28461716ef9]: (4, ('Thu Oct  2 12:34:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 (87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869)\n87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869\nThu Oct  2 12:34:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 (87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869)\n87546acb50cecbb1eb6057966f9ddae4bb0d7c38b8f75801db20f0e1bb695869\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:49.097 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5bafb34f-e750-47b4-a50f-7d27215ba862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:49.098 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap885ece2c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:49 np0005466031 kernel: tap885ece2c-b0: left promiscuous mode
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:49.118 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e449b4-573d-4339-b22f-dce57102b18c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:49.145 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1005b45c-36c7-462b-92d5-7a488706a674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:49.146 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[919ad1b6-cafb-4f4c-bc0f-0613b179c780]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:49.163 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[63bf2803-8050-40a2-a0b4-c32f38bb57c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633717, 'reachable_time': 33534, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273064, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:49 np0005466031 systemd[1]: run-netns-ovnmeta\x2d885ece2c\x2db1ca\x2d4d5a\x2d9ddf\x2d20d1baf155c7.mount: Deactivated successfully.
Oct  2 08:34:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:49.166 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:34:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:34:49.166 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[685977d5-a162-4260-9ad5-3d5f3594b1bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:49.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.630 2 DEBUG nova.compute.manager [req-d4ced9dd-3c85-4693-94f2-e39b4d2254a0 req-819713b2-2bcd-4cf8-96b1-e84293b703ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Received event network-vif-unplugged-2a52488c-e574-430a-a6ef-056feb3051e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.630 2 DEBUG oslo_concurrency.lockutils [req-d4ced9dd-3c85-4693-94f2-e39b4d2254a0 req-819713b2-2bcd-4cf8-96b1-e84293b703ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.630 2 DEBUG oslo_concurrency.lockutils [req-d4ced9dd-3c85-4693-94f2-e39b4d2254a0 req-819713b2-2bcd-4cf8-96b1-e84293b703ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.630 2 DEBUG oslo_concurrency.lockutils [req-d4ced9dd-3c85-4693-94f2-e39b4d2254a0 req-819713b2-2bcd-4cf8-96b1-e84293b703ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.631 2 DEBUG nova.compute.manager [req-d4ced9dd-3c85-4693-94f2-e39b4d2254a0 req-819713b2-2bcd-4cf8-96b1-e84293b703ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] No waiting events found dispatching network-vif-unplugged-2a52488c-e574-430a-a6ef-056feb3051e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.631 2 DEBUG nova.compute.manager [req-d4ced9dd-3c85-4693-94f2-e39b4d2254a0 req-819713b2-2bcd-4cf8-96b1-e84293b703ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Received event network-vif-unplugged-2a52488c-e574-430a-a6ef-056feb3051e3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.658 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.658 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.659 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.659 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:34:49 np0005466031 nova_compute[235803]: 2025-10-02 12:34:49.659 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1478600791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:50.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.166 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.248 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.248 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.250 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.250 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.392 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.393 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4369MB free_disk=20.838119506835938GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.393 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.394 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.453 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.453 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance a6c6f19d-ac72-4091-b1d5-cd8c6499bef5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.454 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.454 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.522 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3763843049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.991 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:50 np0005466031 nova_compute[235803]: 2025-10-02 12:34:50.996 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:34:51 np0005466031 nova_compute[235803]: 2025-10-02 12:34:51.037 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:34:51 np0005466031 nova_compute[235803]: 2025-10-02 12:34:51.063 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:34:51 np0005466031 nova_compute[235803]: 2025-10-02 12:34:51.064 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:51.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:51 np0005466031 nova_compute[235803]: 2025-10-02 12:34:51.722 2 DEBUG nova.compute.manager [req-5797263c-9c55-425e-9394-a871ef01464e req-d18b5ee8-fbad-487f-8222-a062675941e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Received event network-vif-plugged-2a52488c-e574-430a-a6ef-056feb3051e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:51 np0005466031 nova_compute[235803]: 2025-10-02 12:34:51.723 2 DEBUG oslo_concurrency.lockutils [req-5797263c-9c55-425e-9394-a871ef01464e req-d18b5ee8-fbad-487f-8222-a062675941e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:51 np0005466031 nova_compute[235803]: 2025-10-02 12:34:51.723 2 DEBUG oslo_concurrency.lockutils [req-5797263c-9c55-425e-9394-a871ef01464e req-d18b5ee8-fbad-487f-8222-a062675941e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:51 np0005466031 nova_compute[235803]: 2025-10-02 12:34:51.723 2 DEBUG oslo_concurrency.lockutils [req-5797263c-9c55-425e-9394-a871ef01464e req-d18b5ee8-fbad-487f-8222-a062675941e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:51 np0005466031 nova_compute[235803]: 2025-10-02 12:34:51.723 2 DEBUG nova.compute.manager [req-5797263c-9c55-425e-9394-a871ef01464e req-d18b5ee8-fbad-487f-8222-a062675941e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] No waiting events found dispatching network-vif-plugged-2a52488c-e574-430a-a6ef-056feb3051e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:51 np0005466031 nova_compute[235803]: 2025-10-02 12:34:51.723 2 WARNING nova.compute.manager [req-5797263c-9c55-425e-9394-a871ef01464e req-d18b5ee8-fbad-487f-8222-a062675941e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Received unexpected event network-vif-plugged-2a52488c-e574-430a-a6ef-056feb3051e3 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:34:52 np0005466031 nova_compute[235803]: 2025-10-02 12:34:52.065 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:52 np0005466031 nova_compute[235803]: 2025-10-02 12:34:52.065 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:34:52 np0005466031 nova_compute[235803]: 2025-10-02 12:34:52.066 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:34:52 np0005466031 nova_compute[235803]: 2025-10-02 12:34:52.094 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 08:34:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:34:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:34:52 np0005466031 nova_compute[235803]: 2025-10-02 12:34:52.586 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:34:52 np0005466031 nova_compute[235803]: 2025-10-02 12:34:52.586 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:34:52 np0005466031 nova_compute[235803]: 2025-10-02 12:34:52.586 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:34:52 np0005466031 nova_compute[235803]: 2025-10-02 12:34:52.587 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:34:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:53.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:34:53 np0005466031 nova_compute[235803]: 2025-10-02 12:34:53.463 2 INFO nova.virt.libvirt.driver [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Deleting instance files /var/lib/nova/instances/a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_del#033[00m
Oct  2 08:34:53 np0005466031 nova_compute[235803]: 2025-10-02 12:34:53.464 2 INFO nova.virt.libvirt.driver [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Deletion of /var/lib/nova/instances/a6c6f19d-ac72-4091-b1d5-cd8c6499bef5_del complete#033[00m
Oct  2 08:34:53 np0005466031 nova_compute[235803]: 2025-10-02 12:34:53.557 2 INFO nova.compute.manager [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Took 4.86 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:34:53 np0005466031 nova_compute[235803]: 2025-10-02 12:34:53.558 2 DEBUG oslo.service.loopingcall [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:34:53 np0005466031 nova_compute[235803]: 2025-10-02 12:34:53.558 2 DEBUG nova.compute.manager [-] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:34:53 np0005466031 nova_compute[235803]: 2025-10-02 12:34:53.559 2 DEBUG nova.network.neutron [-] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:34:53 np0005466031 nova_compute[235803]: 2025-10-02 12:34:53.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:53 np0005466031 nova_compute[235803]: 2025-10-02 12:34:53.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:54.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:54 np0005466031 nova_compute[235803]: 2025-10-02 12:34:54.740 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Updating instance_info_cache with network_info: [{"id": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "address": "fa:16:3e:03:0b:ec", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6936fa44-52", "ovs_interfaceid": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:54 np0005466031 nova_compute[235803]: 2025-10-02 12:34:54.913 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:34:54 np0005466031 nova_compute[235803]: 2025-10-02 12:34:54.913 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:34:54 np0005466031 nova_compute[235803]: 2025-10-02 12:34:54.913 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:54 np0005466031 nova_compute[235803]: 2025-10-02 12:34:54.914 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:54 np0005466031 nova_compute[235803]: 2025-10-02 12:34:54.914 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:54 np0005466031 nova_compute[235803]: 2025-10-02 12:34:54.914 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:54 np0005466031 nova_compute[235803]: 2025-10-02 12:34:54.914 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:34:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.005 2 DEBUG nova.network.neutron [-] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.031 2 INFO nova.compute.manager [-] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Took 1.47 seconds to deallocate network for instance.#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.082 2 DEBUG nova.compute.manager [req-3e768e03-adce-4a4f-9c4b-f2a4c2029f24 req-75c1ce62-64a8-4d75-9aba-163a666df116 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Received event network-vif-deleted-2a52488c-e574-430a-a6ef-056feb3051e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.089 2 DEBUG oslo_concurrency.lockutils [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.090 2 DEBUG oslo_concurrency.lockutils [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.111 2 DEBUG nova.scheduler.client.report [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.125 2 DEBUG nova.scheduler.client.report [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.126 2 DEBUG nova.compute.provider_tree [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.140 2 DEBUG nova.scheduler.client.report [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.167 2 DEBUG nova.scheduler.client.report [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.212 2 DEBUG oslo_concurrency.processutils [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:55Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:0b:ec 10.100.0.5
Oct  2 08:34:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:34:55Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:0b:ec 10.100.0.5
Oct  2 08:34:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:55.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:55 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/43636201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.767 2 DEBUG oslo_concurrency.processutils [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.772 2 DEBUG nova.compute.provider_tree [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.786 2 DEBUG nova.scheduler.client.report [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.808 2 DEBUG oslo_concurrency.lockutils [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.851 2 INFO nova.scheduler.client.report [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Deleted allocations for instance a6c6f19d-ac72-4091-b1d5-cd8c6499bef5#033[00m
Oct  2 08:34:55 np0005466031 nova_compute[235803]: 2025-10-02 12:34:55.938 2 DEBUG oslo_concurrency.lockutils [None req-550dfab3-7c11-4bdb-b318-2df448c0dd3b 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "a6c6f19d-ac72-4091-b1d5-cd8c6499bef5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:56.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:57.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:58.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:58 np0005466031 podman[273189]: 2025-10-02 12:34:58.648582164 +0000 UTC m=+0.068651469 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:34:58 np0005466031 podman[273188]: 2025-10-02 12:34:58.670609749 +0000 UTC m=+0.084286200 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd)
Oct  2 08:34:58 np0005466031 nova_compute[235803]: 2025-10-02 12:34:58.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:59 np0005466031 nova_compute[235803]: 2025-10-02 12:34:59.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:34:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:59.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:00.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:35:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:01.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:35:01 np0005466031 nova_compute[235803]: 2025-10-02 12:35:01.460 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "2691f2bd-9b64-4627-9831-2a97cf3279fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:01 np0005466031 nova_compute[235803]: 2025-10-02 12:35:01.461 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:01 np0005466031 nova_compute[235803]: 2025-10-02 12:35:01.486 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:35:01 np0005466031 nova_compute[235803]: 2025-10-02 12:35:01.584 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:01 np0005466031 nova_compute[235803]: 2025-10-02 12:35:01.585 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:01 np0005466031 nova_compute[235803]: 2025-10-02 12:35:01.593 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:35:01 np0005466031 nova_compute[235803]: 2025-10-02 12:35:01.593 2 INFO nova.compute.claims [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:35:01 np0005466031 nova_compute[235803]: 2025-10-02 12:35:01.782 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:02.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:02 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3510125751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:02 np0005466031 nova_compute[235803]: 2025-10-02 12:35:02.204 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:35:02 np0005466031 nova_compute[235803]: 2025-10-02 12:35:02.210 2 DEBUG nova.compute.provider_tree [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:35:02 np0005466031 nova_compute[235803]: 2025-10-02 12:35:02.224 2 DEBUG nova.scheduler.client.report [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:35:02 np0005466031 nova_compute[235803]: 2025-10-02 12:35:02.243 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:35:02 np0005466031 nova_compute[235803]: 2025-10-02 12:35:02.244 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:35:02 np0005466031 nova_compute[235803]: 2025-10-02 12:35:02.286 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:35:02 np0005466031 nova_compute[235803]: 2025-10-02 12:35:02.286 2 DEBUG nova.network.neutron [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:35:02 np0005466031 nova_compute[235803]: 2025-10-02 12:35:02.303 2 INFO nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:35:02 np0005466031 nova_compute[235803]: 2025-10-02 12:35:02.388 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:35:02 np0005466031 nova_compute[235803]: 2025-10-02 12:35:02.641 2 DEBUG nova.policy [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34a9da53e0cc446593d0cea2f498c53e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:35:02 np0005466031 nova_compute[235803]: 2025-10-02 12:35:02.918 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:35:02 np0005466031 nova_compute[235803]: 2025-10-02 12:35:02.920 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:35:02 np0005466031 nova_compute[235803]: 2025-10-02 12:35:02.920 2 INFO nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Creating image(s)
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.084 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 2691f2bd-9b64-4627-9831-2a97cf3279fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.119 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 2691f2bd-9b64-4627-9831-2a97cf3279fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.154 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 2691f2bd-9b64-4627-9831-2a97cf3279fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.158 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.226 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.227 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.228 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.228 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:35:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:03.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.404 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 2691f2bd-9b64-4627-9831-2a97cf3279fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.408 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2691f2bd-9b64-4627-9831-2a97cf3279fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.934 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408488.9323916, a6c6f19d-ac72-4091-b1d5-cd8c6499bef5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.935 2 INFO nova.compute.manager [-] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] VM Stopped (Lifecycle Event)
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:03 np0005466031 nova_compute[235803]: 2025-10-02 12:35:03.965 2 DEBUG nova.compute.manager [None req-19c454a3-361a-4199-9a74-17f95ed6e307 - - - - - -] [instance: a6c6f19d-ac72-4091-b1d5-cd8c6499bef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:35:04 np0005466031 nova_compute[235803]: 2025-10-02 12:35:04.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:04.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:04 np0005466031 nova_compute[235803]: 2025-10-02 12:35:04.560 2 DEBUG nova.network.neutron [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Successfully created port: 25e34da3-3184-4cc0-bf80-e5c92054631c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:35:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:05 np0005466031 nova_compute[235803]: 2025-10-02 12:35:05.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:05.298 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:35:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:05.300 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:35:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:05.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:05 np0005466031 nova_compute[235803]: 2025-10-02 12:35:05.632 2 DEBUG nova.network.neutron [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Successfully updated port: 25e34da3-3184-4cc0-bf80-e5c92054631c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:35:05 np0005466031 nova_compute[235803]: 2025-10-02 12:35:05.646 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "refresh_cache-2691f2bd-9b64-4627-9831-2a97cf3279fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:35:05 np0005466031 nova_compute[235803]: 2025-10-02 12:35:05.646 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquired lock "refresh_cache-2691f2bd-9b64-4627-9831-2a97cf3279fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:35:05 np0005466031 nova_compute[235803]: 2025-10-02 12:35:05.647 2 DEBUG nova.network.neutron [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:35:05 np0005466031 nova_compute[235803]: 2025-10-02 12:35:05.724 2 DEBUG nova.compute.manager [req-72290aa5-96b8-40d3-8d8e-44abe8c5c3fe req-41f1a161-2819-4ce0-be18-6ec77b5a0550 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Received event network-changed-25e34da3-3184-4cc0-bf80-e5c92054631c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:35:05 np0005466031 nova_compute[235803]: 2025-10-02 12:35:05.725 2 DEBUG nova.compute.manager [req-72290aa5-96b8-40d3-8d8e-44abe8c5c3fe req-41f1a161-2819-4ce0-be18-6ec77b5a0550 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Refreshing instance network info cache due to event network-changed-25e34da3-3184-4cc0-bf80-e5c92054631c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:35:05 np0005466031 nova_compute[235803]: 2025-10-02 12:35:05.725 2 DEBUG oslo_concurrency.lockutils [req-72290aa5-96b8-40d3-8d8e-44abe8c5c3fe req-41f1a161-2819-4ce0-be18-6ec77b5a0550 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2691f2bd-9b64-4627-9831-2a97cf3279fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:35:05 np0005466031 nova_compute[235803]: 2025-10-02 12:35:05.831 2 DEBUG nova.network.neutron [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:35:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:06.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e246 e246: 3 total, 3 up, 3 in
Oct  2 08:35:06 np0005466031 nova_compute[235803]: 2025-10-02 12:35:06.556 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2691f2bd-9b64-4627-9831-2a97cf3279fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:35:07 np0005466031 nova_compute[235803]: 2025-10-02 12:35:07.052 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] resizing rbd image 2691f2bd-9b64-4627-9831-2a97cf3279fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:35:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:07.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.078 2 DEBUG nova.network.neutron [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Updating instance_info_cache with network_info: [{"id": "25e34da3-3184-4cc0-bf80-e5c92054631c", "address": "fa:16:3e:5e:97:5a", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25e34da3-31", "ovs_interfaceid": "25e34da3-3184-4cc0-bf80-e5c92054631c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:35:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:08.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.202 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Releasing lock "refresh_cache-2691f2bd-9b64-4627-9831-2a97cf3279fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.203 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Instance network_info: |[{"id": "25e34da3-3184-4cc0-bf80-e5c92054631c", "address": "fa:16:3e:5e:97:5a", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25e34da3-31", "ovs_interfaceid": "25e34da3-3184-4cc0-bf80-e5c92054631c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.204 2 DEBUG oslo_concurrency.lockutils [req-72290aa5-96b8-40d3-8d8e-44abe8c5c3fe req-41f1a161-2819-4ce0-be18-6ec77b5a0550 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2691f2bd-9b64-4627-9831-2a97cf3279fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.205 2 DEBUG nova.network.neutron [req-72290aa5-96b8-40d3-8d8e-44abe8c5c3fe req-41f1a161-2819-4ce0-be18-6ec77b5a0550 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Refreshing network info cache for port 25e34da3-3184-4cc0-bf80-e5c92054631c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.623 2 DEBUG nova.objects.instance [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'migration_context' on Instance uuid 2691f2bd-9b64-4627-9831-2a97cf3279fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.654 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.654 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Ensure instance console log exists: /var/lib/nova/instances/2691f2bd-9b64-4627-9831-2a97cf3279fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.655 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.655 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.655 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.658 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Start _get_guest_xml network_info=[{"id": "25e34da3-3184-4cc0-bf80-e5c92054631c", "address": "fa:16:3e:5e:97:5a", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25e34da3-31", "ovs_interfaceid": "25e34da3-3184-4cc0-bf80-e5c92054631c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.662 2 WARNING nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.666 2 DEBUG nova.virt.libvirt.host [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.666 2 DEBUG nova.virt.libvirt.host [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.670 2 DEBUG nova.virt.libvirt.host [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.670 2 DEBUG nova.virt.libvirt.host [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.671 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.672 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.672 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.672 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.673 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.673 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.673 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.673 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.674 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.674 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.674 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.675 2 DEBUG nova.virt.hardware [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.677 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:08 np0005466031 nova_compute[235803]: 2025-10-02 12:35:08.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:35:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3932958621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.111 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.143 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 2691f2bd-9b64-4627-9831-2a97cf3279fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.149 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:35:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:09.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:35:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:35:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1976850205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.611 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.613 2 DEBUG nova.virt.libvirt.vif [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-119748352',display_name='tempest-MultipleCreateTestJSON-server-119748352-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-119748352-1',id=90,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-tcned940',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-MultipleCreateTestJSON-1074010337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:02Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=2691f2bd-9b64-4627-9831-2a97cf3279fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25e34da3-3184-4cc0-bf80-e5c92054631c", "address": "fa:16:3e:5e:97:5a", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25e34da3-31", "ovs_interfaceid": "25e34da3-3184-4cc0-bf80-e5c92054631c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.613 2 DEBUG nova.network.os_vif_util [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "25e34da3-3184-4cc0-bf80-e5c92054631c", "address": "fa:16:3e:5e:97:5a", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25e34da3-31", "ovs_interfaceid": "25e34da3-3184-4cc0-bf80-e5c92054631c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.614 2 DEBUG nova.network.os_vif_util [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:97:5a,bridge_name='br-int',has_traffic_filtering=True,id=25e34da3-3184-4cc0-bf80-e5c92054631c,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25e34da3-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.615 2 DEBUG nova.objects.instance [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2691f2bd-9b64-4627-9831-2a97cf3279fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.654 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  <uuid>2691f2bd-9b64-4627-9831-2a97cf3279fb</uuid>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  <name>instance-0000005a</name>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <nova:name>tempest-MultipleCreateTestJSON-server-119748352-1</nova:name>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:35:08</nova:creationTime>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <nova:user uuid="34a9da53e0cc446593d0cea2f498c53e">tempest-MultipleCreateTestJSON-1074010337-project-member</nova:user>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <nova:project uuid="ed58e2bfccb04353b29ae652cfed3546">tempest-MultipleCreateTestJSON-1074010337</nova:project>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <nova:port uuid="25e34da3-3184-4cc0-bf80-e5c92054631c">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <entry name="serial">2691f2bd-9b64-4627-9831-2a97cf3279fb</entry>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <entry name="uuid">2691f2bd-9b64-4627-9831-2a97cf3279fb</entry>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2691f2bd-9b64-4627-9831-2a97cf3279fb_disk">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2691f2bd-9b64-4627-9831-2a97cf3279fb_disk.config">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:5e:97:5a"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <target dev="tap25e34da3-31"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/2691f2bd-9b64-4627-9831-2a97cf3279fb/console.log" append="off"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:35:09 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:35:09 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:35:09 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:35:09 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.656 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Preparing to wait for external event network-vif-plugged-25e34da3-3184-4cc0-bf80-e5c92054631c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.656 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.656 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.656 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.657 2 DEBUG nova.virt.libvirt.vif [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-119748352',display_name='tempest-MultipleCreateTestJSON-server-119748352-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-119748352-1',id=90,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-tcned940',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-MultipleCreateTestJSON-1074010337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:02Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=2691f2bd-9b64-4627-9831-2a97cf3279fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25e34da3-3184-4cc0-bf80-e5c92054631c", "address": "fa:16:3e:5e:97:5a", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25e34da3-31", "ovs_interfaceid": "25e34da3-3184-4cc0-bf80-e5c92054631c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.657 2 DEBUG nova.network.os_vif_util [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "25e34da3-3184-4cc0-bf80-e5c92054631c", "address": "fa:16:3e:5e:97:5a", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25e34da3-31", "ovs_interfaceid": "25e34da3-3184-4cc0-bf80-e5c92054631c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.658 2 DEBUG nova.network.os_vif_util [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:97:5a,bridge_name='br-int',has_traffic_filtering=True,id=25e34da3-3184-4cc0-bf80-e5c92054631c,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25e34da3-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.658 2 DEBUG os_vif [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:97:5a,bridge_name='br-int',has_traffic_filtering=True,id=25e34da3-3184-4cc0-bf80-e5c92054631c,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25e34da3-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.659 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.659 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.662 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25e34da3-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.662 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25e34da3-31, col_values=(('external_ids', {'iface-id': '25e34da3-3184-4cc0-bf80-e5c92054631c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:97:5a', 'vm-uuid': '2691f2bd-9b64-4627-9831-2a97cf3279fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:09 np0005466031 NetworkManager[44907]: <info>  [1759408509.6641] manager: (tap25e34da3-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.669 2 INFO os_vif [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:97:5a,bridge_name='br-int',has_traffic_filtering=True,id=25e34da3-3184-4cc0-bf80-e5c92054631c,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25e34da3-31')#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.774 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.774 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.774 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No VIF found with MAC fa:16:3e:5e:97:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.775 2 INFO nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Using config drive#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.799 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 2691f2bd-9b64-4627-9831-2a97cf3279fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.807 2 DEBUG nova.network.neutron [req-72290aa5-96b8-40d3-8d8e-44abe8c5c3fe req-41f1a161-2819-4ce0-be18-6ec77b5a0550 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Updated VIF entry in instance network info cache for port 25e34da3-3184-4cc0-bf80-e5c92054631c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.808 2 DEBUG nova.network.neutron [req-72290aa5-96b8-40d3-8d8e-44abe8c5c3fe req-41f1a161-2819-4ce0-be18-6ec77b5a0550 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Updating instance_info_cache with network_info: [{"id": "25e34da3-3184-4cc0-bf80-e5c92054631c", "address": "fa:16:3e:5e:97:5a", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25e34da3-31", "ovs_interfaceid": "25e34da3-3184-4cc0-bf80-e5c92054631c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:09 np0005466031 nova_compute[235803]: 2025-10-02 12:35:09.855 2 DEBUG oslo_concurrency.lockutils [req-72290aa5-96b8-40d3-8d8e-44abe8c5c3fe req-41f1a161-2819-4ce0-be18-6ec77b5a0550 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2691f2bd-9b64-4627-9831-2a97cf3279fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:35:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:10 np0005466031 nova_compute[235803]: 2025-10-02 12:35:10.156 2 INFO nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Creating config drive at /var/lib/nova/instances/2691f2bd-9b64-4627-9831-2a97cf3279fb/disk.config#033[00m
Oct  2 08:35:10 np0005466031 nova_compute[235803]: 2025-10-02 12:35:10.160 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2691f2bd-9b64-4627-9831-2a97cf3279fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps3sox10n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:35:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:10.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:35:10 np0005466031 nova_compute[235803]: 2025-10-02 12:35:10.291 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2691f2bd-9b64-4627-9831-2a97cf3279fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps3sox10n" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:10 np0005466031 nova_compute[235803]: 2025-10-02 12:35:10.515 2 DEBUG nova.storage.rbd_utils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 2691f2bd-9b64-4627-9831-2a97cf3279fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:10 np0005466031 nova_compute[235803]: 2025-10-02 12:35:10.519 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2691f2bd-9b64-4627-9831-2a97cf3279fb/disk.config 2691f2bd-9b64-4627-9831-2a97cf3279fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:11.302 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:11.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:11 np0005466031 nova_compute[235803]: 2025-10-02 12:35:11.931 2 DEBUG oslo_concurrency.processutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2691f2bd-9b64-4627-9831-2a97cf3279fb/disk.config 2691f2bd-9b64-4627-9831-2a97cf3279fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:11 np0005466031 nova_compute[235803]: 2025-10-02 12:35:11.932 2 INFO nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Deleting local config drive /var/lib/nova/instances/2691f2bd-9b64-4627-9831-2a97cf3279fb/disk.config because it was imported into RBD.#033[00m
Oct  2 08:35:11 np0005466031 kernel: tap25e34da3-31: entered promiscuous mode
Oct  2 08:35:11 np0005466031 NetworkManager[44907]: <info>  [1759408511.9784] manager: (tap25e34da3-31): new Tun device (/org/freedesktop/NetworkManager/Devices/153)
Oct  2 08:35:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:11Z|00303|binding|INFO|Claiming lport 25e34da3-3184-4cc0-bf80-e5c92054631c for this chassis.
Oct  2 08:35:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:11Z|00304|binding|INFO|25e34da3-3184-4cc0-bf80-e5c92054631c: Claiming fa:16:3e:5e:97:5a 10.100.0.8
Oct  2 08:35:11 np0005466031 nova_compute[235803]: 2025-10-02 12:35:11.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:12 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:12Z|00305|binding|INFO|Setting lport 25e34da3-3184-4cc0-bf80-e5c92054631c ovn-installed in OVS
Oct  2 08:35:12 np0005466031 nova_compute[235803]: 2025-10-02 12:35:12.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:12 np0005466031 nova_compute[235803]: 2025-10-02 12:35:12.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:12 np0005466031 systemd-machined[192227]: New machine qemu-35-instance-0000005a.
Oct  2 08:35:12 np0005466031 systemd-udevd[273556]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:35:12 np0005466031 systemd[1]: Started Virtual Machine qemu-35-instance-0000005a.
Oct  2 08:35:12 np0005466031 NetworkManager[44907]: <info>  [1759408512.0238] device (tap25e34da3-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:35:12 np0005466031 NetworkManager[44907]: <info>  [1759408512.0251] device (tap25e34da3-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.032 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:97:5a 10.100.0.8'], port_security=['fa:16:3e:5e:97:5a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2691f2bd-9b64-4627-9831-2a97cf3279fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8afb1137-daba-41cb-976b-5cc3e880408c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1c0c736-25fb-4965-98a7-04a85ae45126, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=25e34da3-3184-4cc0-bf80-e5c92054631c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.033 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 25e34da3-3184-4cc0-bf80-e5c92054631c in datapath 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 bound to our chassis#033[00m
Oct  2 08:35:12 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:12Z|00306|binding|INFO|Setting lport 25e34da3-3184-4cc0-bf80-e5c92054631c up in Southbound
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.034 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.046 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[35177ee1-cf97-4b12-bb66-0a8dcc2b7b14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.047 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap885ece2c-b1 in ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.048 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap885ece2c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.048 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e21aaac7-f3ba-4d8c-9b7a-a5697df2c809]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.049 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[612598f9-5384-4455-9c5d-fefa1df1d32d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.066 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[24bafb3d-914d-40f7-8087-541e2f7fa39b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.096 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e189ba70-b788-4ddb-bf19-ef0094170e2d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.133 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[202ca4b2-2195-4a73-a0c5-007b7d6f4207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.137 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9e74b287-4505-49ea-a792-57e8087249b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 NetworkManager[44907]: <info>  [1759408512.1390] manager: (tap885ece2c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/154)
Oct  2 08:35:12 np0005466031 systemd-udevd[273558]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.178 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc3b0ea-7908-4fb0-84df-83c83ca363c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.180 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[5a398a68-885d-4a51-8807-0e82582e3d23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:12.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:12 np0005466031 NetworkManager[44907]: <info>  [1759408512.2029] device (tap885ece2c-b0): carrier: link connected
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.209 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[819906d3-c248-437e-aa41-e5fd61ae4f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.226 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f0cc20a1-3eb0-434b-8209-efa2e7c6c712]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap885ece2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:58:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636779, 'reachable_time': 23151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273589, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.242 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[be498cb4-e5fc-4de4-a0e4-b243fcf14db9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:5893'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636779, 'tstamp': 636779}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273590, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.260 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f0833bdd-df8a-478f-bfa7-28d395e63eff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap885ece2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:58:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636779, 'reachable_time': 23151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273591, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.288 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c5c846-c3ec-4f2d-89d8-0cbdb66f4143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.348 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2aeaee55-240e-4731-a396-b3d179b87d0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.350 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap885ece2c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.350 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.350 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap885ece2c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:12 np0005466031 kernel: tap885ece2c-b0: entered promiscuous mode
Oct  2 08:35:12 np0005466031 NetworkManager[44907]: <info>  [1759408512.3531] manager: (tap885ece2c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.355 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap885ece2c-b0, col_values=(('external_ids', {'iface-id': '24355553-27f6-4ebd-99c0-4f861ce0339d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:12 np0005466031 nova_compute[235803]: 2025-10-02 12:35:12.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:12 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:12Z|00307|binding|INFO|Releasing lport 24355553-27f6-4ebd-99c0-4f861ce0339d from this chassis (sb_readonly=0)
Oct  2 08:35:12 np0005466031 nova_compute[235803]: 2025-10-02 12:35:12.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.371 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.372 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[44854156-15a4-4282-b96e-5dc4aeba0d72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.373 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:35:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:12.374 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'env', 'PROCESS_TAG=haproxy-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:35:12 np0005466031 podman[273623]: 2025-10-02 12:35:12.733305442 +0000 UTC m=+0.048978573 container create a6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:35:12 np0005466031 systemd[1]: Started libpod-conmon-a6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40.scope.
Oct  2 08:35:12 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:35:12 np0005466031 podman[273623]: 2025-10-02 12:35:12.709002401 +0000 UTC m=+0.024675552 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:35:12 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575af49ed578206ed33be7db79857ca5b02a5afaf489d1914a65ace813a9bc3f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:35:12 np0005466031 podman[273623]: 2025-10-02 12:35:12.823369237 +0000 UTC m=+0.139042388 container init a6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:35:12 np0005466031 podman[273623]: 2025-10-02 12:35:12.829078872 +0000 UTC m=+0.144752003 container start a6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 08:35:12 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[273638]: [NOTICE]   (273642) : New worker (273644) forked
Oct  2 08:35:12 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[273638]: [NOTICE]   (273642) : Loading success.
Oct  2 08:35:12 np0005466031 nova_compute[235803]: 2025-10-02 12:35:12.950 2 DEBUG nova.compute.manager [req-12ddc79e-a4a9-4ccd-9859-ab88424224b3 req-76e52a6c-1701-417e-8a85-be4c9b2a26d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Received event network-vif-plugged-25e34da3-3184-4cc0-bf80-e5c92054631c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:12 np0005466031 nova_compute[235803]: 2025-10-02 12:35:12.951 2 DEBUG oslo_concurrency.lockutils [req-12ddc79e-a4a9-4ccd-9859-ab88424224b3 req-76e52a6c-1701-417e-8a85-be4c9b2a26d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:12 np0005466031 nova_compute[235803]: 2025-10-02 12:35:12.951 2 DEBUG oslo_concurrency.lockutils [req-12ddc79e-a4a9-4ccd-9859-ab88424224b3 req-76e52a6c-1701-417e-8a85-be4c9b2a26d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:12 np0005466031 nova_compute[235803]: 2025-10-02 12:35:12.951 2 DEBUG oslo_concurrency.lockutils [req-12ddc79e-a4a9-4ccd-9859-ab88424224b3 req-76e52a6c-1701-417e-8a85-be4c9b2a26d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:12 np0005466031 nova_compute[235803]: 2025-10-02 12:35:12.952 2 DEBUG nova.compute.manager [req-12ddc79e-a4a9-4ccd-9859-ab88424224b3 req-76e52a6c-1701-417e-8a85-be4c9b2a26d6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Processing event network-vif-plugged-25e34da3-3184-4cc0-bf80-e5c92054631c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:35:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:13.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.691 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408513.6906781, 2691f2bd-9b64-4627-9831-2a97cf3279fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.691 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] VM Started (Lifecycle Event)#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.694 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.697 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.700 2 INFO nova.virt.libvirt.driver [-] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Instance spawned successfully.#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.700 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.844 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.848 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.848 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.848 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.849 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.849 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.849 2 DEBUG nova.virt.libvirt.driver [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.854 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:35:13 np0005466031 nova_compute[235803]: 2025-10-02 12:35:13.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:14 np0005466031 nova_compute[235803]: 2025-10-02 12:35:14.024 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:35:14 np0005466031 nova_compute[235803]: 2025-10-02 12:35:14.024 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408513.69095, 2691f2bd-9b64-4627-9831-2a97cf3279fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:14 np0005466031 nova_compute[235803]: 2025-10-02 12:35:14.024 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:35:14 np0005466031 nova_compute[235803]: 2025-10-02 12:35:14.150 2 INFO nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Took 11.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:35:14 np0005466031 nova_compute[235803]: 2025-10-02 12:35:14.151 2 DEBUG nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:14 np0005466031 nova_compute[235803]: 2025-10-02 12:35:14.152 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:14 np0005466031 nova_compute[235803]: 2025-10-02 12:35:14.159 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408513.696457, 2691f2bd-9b64-4627-9831-2a97cf3279fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:14 np0005466031 nova_compute[235803]: 2025-10-02 12:35:14.160 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:35:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:35:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:14.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:35:14 np0005466031 nova_compute[235803]: 2025-10-02 12:35:14.440 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:14 np0005466031 nova_compute[235803]: 2025-10-02 12:35:14.443 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:35:14 np0005466031 nova_compute[235803]: 2025-10-02 12:35:14.538 2 INFO nova.compute.manager [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Took 12.99 seconds to build instance.#033[00m
Oct  2 08:35:14 np0005466031 nova_compute[235803]: 2025-10-02 12:35:14.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:14 np0005466031 nova_compute[235803]: 2025-10-02 12:35:14.717 2 DEBUG oslo_concurrency.lockutils [None req-9125fd21-9433-4170-b56d-b900aaaf50bc 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:15.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:35:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:16.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:35:17 np0005466031 nova_compute[235803]: 2025-10-02 12:35:17.110 2 DEBUG nova.compute.manager [req-c70398b0-a056-464c-96ce-105e8e55db36 req-4acadcbc-38ae-4089-9abd-9e053bc5cf7a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Received event network-vif-plugged-25e34da3-3184-4cc0-bf80-e5c92054631c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:17 np0005466031 nova_compute[235803]: 2025-10-02 12:35:17.111 2 DEBUG oslo_concurrency.lockutils [req-c70398b0-a056-464c-96ce-105e8e55db36 req-4acadcbc-38ae-4089-9abd-9e053bc5cf7a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:17 np0005466031 nova_compute[235803]: 2025-10-02 12:35:17.112 2 DEBUG oslo_concurrency.lockutils [req-c70398b0-a056-464c-96ce-105e8e55db36 req-4acadcbc-38ae-4089-9abd-9e053bc5cf7a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:17 np0005466031 nova_compute[235803]: 2025-10-02 12:35:17.112 2 DEBUG oslo_concurrency.lockutils [req-c70398b0-a056-464c-96ce-105e8e55db36 req-4acadcbc-38ae-4089-9abd-9e053bc5cf7a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:17 np0005466031 nova_compute[235803]: 2025-10-02 12:35:17.112 2 DEBUG nova.compute.manager [req-c70398b0-a056-464c-96ce-105e8e55db36 req-4acadcbc-38ae-4089-9abd-9e053bc5cf7a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] No waiting events found dispatching network-vif-plugged-25e34da3-3184-4cc0-bf80-e5c92054631c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:17 np0005466031 nova_compute[235803]: 2025-10-02 12:35:17.112 2 WARNING nova.compute.manager [req-c70398b0-a056-464c-96ce-105e8e55db36 req-4acadcbc-38ae-4089-9abd-9e053bc5cf7a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Received unexpected event network-vif-plugged-25e34da3-3184-4cc0-bf80-e5c92054631c for instance with vm_state active and task_state None.#033[00m
Oct  2 08:35:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:17.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:35:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:18.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:35:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:35:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:35:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:35:18 np0005466031 nova_compute[235803]: 2025-10-02 12:35:18.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:18 np0005466031 nova_compute[235803]: 2025-10-02 12:35:18.983 2 DEBUG oslo_concurrency.lockutils [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "2691f2bd-9b64-4627-9831-2a97cf3279fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:18 np0005466031 nova_compute[235803]: 2025-10-02 12:35:18.984 2 DEBUG oslo_concurrency.lockutils [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:18 np0005466031 nova_compute[235803]: 2025-10-02 12:35:18.985 2 DEBUG oslo_concurrency.lockutils [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:18 np0005466031 nova_compute[235803]: 2025-10-02 12:35:18.986 2 DEBUG oslo_concurrency.lockutils [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:18 np0005466031 nova_compute[235803]: 2025-10-02 12:35:18.986 2 DEBUG oslo_concurrency.lockutils [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:18 np0005466031 nova_compute[235803]: 2025-10-02 12:35:18.989 2 INFO nova.compute.manager [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Terminating instance#033[00m
Oct  2 08:35:18 np0005466031 nova_compute[235803]: 2025-10-02 12:35:18.990 2 DEBUG nova.compute.manager [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:35:19 np0005466031 kernel: tap25e34da3-31 (unregistering): left promiscuous mode
Oct  2 08:35:19 np0005466031 NetworkManager[44907]: <info>  [1759408519.3073] device (tap25e34da3-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:35:19 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:19Z|00308|binding|INFO|Releasing lport 25e34da3-3184-4cc0-bf80-e5c92054631c from this chassis (sb_readonly=0)
Oct  2 08:35:19 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:19Z|00309|binding|INFO|Setting lport 25e34da3-3184-4cc0-bf80-e5c92054631c down in Southbound
Oct  2 08:35:19 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:19Z|00310|binding|INFO|Removing iface tap25e34da3-31 ovn-installed in OVS
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.326 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:97:5a 10.100.0.8'], port_security=['fa:16:3e:5e:97:5a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2691f2bd-9b64-4627-9831-2a97cf3279fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8afb1137-daba-41cb-976b-5cc3e880408c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1c0c736-25fb-4965-98a7-04a85ae45126, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=25e34da3-3184-4cc0-bf80-e5c92054631c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.328 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 25e34da3-3184-4cc0-bf80-e5c92054631c in datapath 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 unbound from our chassis#033[00m
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.330 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.332 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ec0d9b63-8e3b-49b4-8b0d-bdd3fd1493ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.333 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 namespace which is not needed anymore#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005466031 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Oct  2 08:35:19 np0005466031 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000005a.scope: Consumed 6.836s CPU time.
Oct  2 08:35:19 np0005466031 systemd-machined[192227]: Machine qemu-35-instance-0000005a terminated.
Oct  2 08:35:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:19.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005466031 podman[273880]: 2025-10-02 12:35:19.423899768 +0000 UTC m=+0.091236691 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.436 2 INFO nova.virt.libvirt.driver [-] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Instance destroyed successfully.#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.437 2 DEBUG nova.objects.instance [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'resources' on Instance uuid 2691f2bd-9b64-4627-9831-2a97cf3279fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:19 np0005466031 podman[273883]: 2025-10-02 12:35:19.441062663 +0000 UTC m=+0.101470856 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.460 2 DEBUG nova.virt.libvirt.vif [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-119748352',display_name='tempest-MultipleCreateTestJSON-server-119748352-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-119748352-1',id=90,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:35:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-tcned940',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-MultipleCreateTestJSON-1074010337-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:35:14Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=2691f2bd-9b64-4627-9831-2a97cf3279fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25e34da3-3184-4cc0-bf80-e5c92054631c", "address": "fa:16:3e:5e:97:5a", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25e34da3-31", "ovs_interfaceid": "25e34da3-3184-4cc0-bf80-e5c92054631c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.460 2 DEBUG nova.network.os_vif_util [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "25e34da3-3184-4cc0-bf80-e5c92054631c", "address": "fa:16:3e:5e:97:5a", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25e34da3-31", "ovs_interfaceid": "25e34da3-3184-4cc0-bf80-e5c92054631c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.461 2 DEBUG nova.network.os_vif_util [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:97:5a,bridge_name='br-int',has_traffic_filtering=True,id=25e34da3-3184-4cc0-bf80-e5c92054631c,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25e34da3-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.461 2 DEBUG os_vif [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:97:5a,bridge_name='br-int',has_traffic_filtering=True,id=25e34da3-3184-4cc0-bf80-e5c92054631c,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25e34da3-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25e34da3-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.467 2 INFO os_vif [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:97:5a,bridge_name='br-int',has_traffic_filtering=True,id=25e34da3-3184-4cc0-bf80-e5c92054631c,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25e34da3-31')#033[00m
Oct  2 08:35:19 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[273638]: [NOTICE]   (273642) : haproxy version is 2.8.14-c23fe91
Oct  2 08:35:19 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[273638]: [NOTICE]   (273642) : path to executable is /usr/sbin/haproxy
Oct  2 08:35:19 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[273638]: [WARNING]  (273642) : Exiting Master process...
Oct  2 08:35:19 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[273638]: [WARNING]  (273642) : Exiting Master process...
Oct  2 08:35:19 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[273638]: [ALERT]    (273642) : Current worker (273644) exited with code 143 (Terminated)
Oct  2 08:35:19 np0005466031 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[273638]: [WARNING]  (273642) : All workers exited. Exiting... (0)
Oct  2 08:35:19 np0005466031 systemd[1]: libpod-a6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40.scope: Deactivated successfully.
Oct  2 08:35:19 np0005466031 conmon[273638]: conmon a6fda9893d5485b9a163 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40.scope/container/memory.events
Oct  2 08:35:19 np0005466031 podman[273945]: 2025-10-02 12:35:19.48505334 +0000 UTC m=+0.053036749 container died a6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:35:19 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40-userdata-shm.mount: Deactivated successfully.
Oct  2 08:35:19 np0005466031 systemd[1]: var-lib-containers-storage-overlay-575af49ed578206ed33be7db79857ca5b02a5afaf489d1914a65ace813a9bc3f-merged.mount: Deactivated successfully.
Oct  2 08:35:19 np0005466031 podman[273945]: 2025-10-02 12:35:19.530677845 +0000 UTC m=+0.098661244 container cleanup a6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:35:19 np0005466031 systemd[1]: libpod-conmon-a6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40.scope: Deactivated successfully.
Oct  2 08:35:19 np0005466031 podman[274004]: 2025-10-02 12:35:19.588110001 +0000 UTC m=+0.039215372 container remove a6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.595 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[55798cf9-73b1-414c-beae-57148b45d0fc]: (4, ('Thu Oct  2 12:35:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 (a6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40)\na6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40\nThu Oct  2 12:35:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 (a6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40)\na6fda9893d5485b9a1633f7b02a8289e6a10be610e9f4a9067f44ba20f0b8d40\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.596 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[03271fcb-1c4f-448f-8842-4343328441ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.597 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap885ece2c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005466031 kernel: tap885ece2c-b0: left promiscuous mode
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.617 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3b3576ab-ffaf-4151-8e79-b0e05ad1eefe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.646 2 DEBUG nova.compute.manager [req-10d12bc0-4ed9-4dc9-8a6b-cf8a626b9f53 req-82e699dc-d720-41ff-b454-f9bb34e207a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Received event network-vif-unplugged-25e34da3-3184-4cc0-bf80-e5c92054631c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.647 2 DEBUG oslo_concurrency.lockutils [req-10d12bc0-4ed9-4dc9-8a6b-cf8a626b9f53 req-82e699dc-d720-41ff-b454-f9bb34e207a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.647 2 DEBUG oslo_concurrency.lockutils [req-10d12bc0-4ed9-4dc9-8a6b-cf8a626b9f53 req-82e699dc-d720-41ff-b454-f9bb34e207a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.648 2 DEBUG oslo_concurrency.lockutils [req-10d12bc0-4ed9-4dc9-8a6b-cf8a626b9f53 req-82e699dc-d720-41ff-b454-f9bb34e207a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.648 2 DEBUG nova.compute.manager [req-10d12bc0-4ed9-4dc9-8a6b-cf8a626b9f53 req-82e699dc-d720-41ff-b454-f9bb34e207a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] No waiting events found dispatching network-vif-unplugged-25e34da3-3184-4cc0-bf80-e5c92054631c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:19 np0005466031 nova_compute[235803]: 2025-10-02 12:35:19.648 2 DEBUG nova.compute.manager [req-10d12bc0-4ed9-4dc9-8a6b-cf8a626b9f53 req-82e699dc-d720-41ff-b454-f9bb34e207a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Received event network-vif-unplugged-25e34da3-3184-4cc0-bf80-e5c92054631c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.650 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e7694f5b-64c9-426d-8fd0-32e843433467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.652 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6394d3d8-99c0-486a-b037-8ffada30a6d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.668 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8b9b45ab-51a9-4f2f-bb2e-93e2421dbc14]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636771, 'reachable_time': 22265, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274020, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.671 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:35:19 np0005466031 systemd[1]: run-netns-ovnmeta\x2d885ece2c\x2db1ca\x2d4d5a\x2d9ddf\x2d20d1baf155c7.mount: Deactivated successfully.
Oct  2 08:35:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:19.671 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[8ffa435c-1fc9-46ef-a83a-88bf9e2f4b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e247 e247: 3 total, 3 up, 3 in
Oct  2 08:35:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:20.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:20.920620) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408520920682, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 934, "num_deletes": 250, "total_data_size": 1754988, "memory_usage": 1784056, "flush_reason": "Manual Compaction"}
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408520926938, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 743955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41658, "largest_seqno": 42587, "table_properties": {"data_size": 740342, "index_size": 1329, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10119, "raw_average_key_size": 21, "raw_value_size": 732439, "raw_average_value_size": 1525, "num_data_blocks": 59, "num_entries": 480, "num_filter_entries": 480, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408456, "oldest_key_time": 1759408456, "file_creation_time": 1759408520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 6338 microseconds, and 2628 cpu microseconds.
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:20.926971) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 743955 bytes OK
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:20.926986) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:20.929464) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:20.929478) EVENT_LOG_v1 {"time_micros": 1759408520929474, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:20.929494) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1750216, prev total WAL file size 1750216, number of live WAL files 2.
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:20.930399) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(726KB)], [78(11MB)]
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408520930455, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 12693537, "oldest_snapshot_seqno": -1}
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 6656 keys, 9260706 bytes, temperature: kUnknown
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408520995167, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 9260706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9217556, "index_size": 25438, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 171013, "raw_average_key_size": 25, "raw_value_size": 9099667, "raw_average_value_size": 1367, "num_data_blocks": 1012, "num_entries": 6656, "num_filter_entries": 6656, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759408520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:20.995417) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9260706 bytes
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:20.998762) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.9 rd, 142.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.4 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(29.5) write-amplify(12.4) OK, records in: 7143, records dropped: 487 output_compression: NoCompression
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:20.998789) EVENT_LOG_v1 {"time_micros": 1759408520998779, "job": 48, "event": "compaction_finished", "compaction_time_micros": 64786, "compaction_time_cpu_micros": 21183, "output_level": 6, "num_output_files": 1, "total_output_size": 9260706, "num_input_records": 7143, "num_output_records": 6656, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:35:20 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408520999127, "job": 48, "event": "table_file_deletion", "file_number": 80}
Oct  2 08:35:21 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:35:21 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521001930, "job": 48, "event": "table_file_deletion", "file_number": 78}
Oct  2 08:35:21 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:20.930277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:35:21 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:21.001971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:35:21 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:21.001976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:35:21 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:21.001978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:35:21 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:21.001981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:35:21 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:35:21.001983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:35:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:35:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:21.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:35:21 np0005466031 nova_compute[235803]: 2025-10-02 12:35:21.779 2 DEBUG nova.compute.manager [req-2ea82bb8-484e-46a3-8bc2-ef7481d7aa73 req-9c162056-d1ae-4cc7-9cec-d069d2c0bb7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Received event network-vif-plugged-25e34da3-3184-4cc0-bf80-e5c92054631c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:21 np0005466031 nova_compute[235803]: 2025-10-02 12:35:21.780 2 DEBUG oslo_concurrency.lockutils [req-2ea82bb8-484e-46a3-8bc2-ef7481d7aa73 req-9c162056-d1ae-4cc7-9cec-d069d2c0bb7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:21 np0005466031 nova_compute[235803]: 2025-10-02 12:35:21.780 2 DEBUG oslo_concurrency.lockutils [req-2ea82bb8-484e-46a3-8bc2-ef7481d7aa73 req-9c162056-d1ae-4cc7-9cec-d069d2c0bb7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:21 np0005466031 nova_compute[235803]: 2025-10-02 12:35:21.780 2 DEBUG oslo_concurrency.lockutils [req-2ea82bb8-484e-46a3-8bc2-ef7481d7aa73 req-9c162056-d1ae-4cc7-9cec-d069d2c0bb7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:21 np0005466031 nova_compute[235803]: 2025-10-02 12:35:21.780 2 DEBUG nova.compute.manager [req-2ea82bb8-484e-46a3-8bc2-ef7481d7aa73 req-9c162056-d1ae-4cc7-9cec-d069d2c0bb7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] No waiting events found dispatching network-vif-plugged-25e34da3-3184-4cc0-bf80-e5c92054631c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:21 np0005466031 nova_compute[235803]: 2025-10-02 12:35:21.780 2 WARNING nova.compute.manager [req-2ea82bb8-484e-46a3-8bc2-ef7481d7aa73 req-9c162056-d1ae-4cc7-9cec-d069d2c0bb7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Received unexpected event network-vif-plugged-25e34da3-3184-4cc0-bf80-e5c92054631c for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:35:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:22.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:23.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:23 np0005466031 nova_compute[235803]: 2025-10-02 12:35:23.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:24.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:24 np0005466031 nova_compute[235803]: 2025-10-02 12:35:24.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:24 np0005466031 nova_compute[235803]: 2025-10-02 12:35:24.740 2 INFO nova.virt.libvirt.driver [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Deleting instance files /var/lib/nova/instances/2691f2bd-9b64-4627-9831-2a97cf3279fb_del#033[00m
Oct  2 08:35:24 np0005466031 nova_compute[235803]: 2025-10-02 12:35:24.741 2 INFO nova.virt.libvirt.driver [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Deletion of /var/lib/nova/instances/2691f2bd-9b64-4627-9831-2a97cf3279fb_del complete#033[00m
Oct  2 08:35:24 np0005466031 nova_compute[235803]: 2025-10-02 12:35:24.871 2 INFO nova.compute.manager [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Took 5.88 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:35:24 np0005466031 nova_compute[235803]: 2025-10-02 12:35:24.872 2 DEBUG oslo.service.loopingcall [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:35:24 np0005466031 nova_compute[235803]: 2025-10-02 12:35:24.872 2 DEBUG nova.compute.manager [-] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:35:24 np0005466031 nova_compute[235803]: 2025-10-02 12:35:24.872 2 DEBUG nova.network.neutron [-] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:35:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:25.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:25.844 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:25.844 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:25.845 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:26.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:27 np0005466031 nova_compute[235803]: 2025-10-02 12:35:27.121 2 DEBUG nova.network.neutron [-] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:27 np0005466031 nova_compute[235803]: 2025-10-02 12:35:27.141 2 INFO nova.compute.manager [-] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Took 2.27 seconds to deallocate network for instance.#033[00m
Oct  2 08:35:27 np0005466031 nova_compute[235803]: 2025-10-02 12:35:27.197 2 DEBUG oslo_concurrency.lockutils [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:27 np0005466031 nova_compute[235803]: 2025-10-02 12:35:27.198 2 DEBUG oslo_concurrency.lockutils [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:27 np0005466031 nova_compute[235803]: 2025-10-02 12:35:27.289 2 DEBUG oslo_concurrency.processutils [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:27.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3724517341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:27 np0005466031 nova_compute[235803]: 2025-10-02 12:35:27.709 2 DEBUG oslo_concurrency.processutils [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:27 np0005466031 nova_compute[235803]: 2025-10-02 12:35:27.718 2 DEBUG nova.compute.provider_tree [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:35:27 np0005466031 nova_compute[235803]: 2025-10-02 12:35:27.740 2 DEBUG nova.scheduler.client.report [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:35:27 np0005466031 nova_compute[235803]: 2025-10-02 12:35:27.772 2 DEBUG oslo_concurrency.lockutils [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:27 np0005466031 nova_compute[235803]: 2025-10-02 12:35:27.804 2 INFO nova.scheduler.client.report [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Deleted allocations for instance 2691f2bd-9b64-4627-9831-2a97cf3279fb#033[00m
Oct  2 08:35:27 np0005466031 nova_compute[235803]: 2025-10-02 12:35:27.982 2 DEBUG oslo_concurrency.lockutils [None req-c6f82bfb-568d-497a-ab8f-6fcf432d1f0c 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "2691f2bd-9b64-4627-9831-2a97cf3279fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:35:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:28.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:35:28 np0005466031 nova_compute[235803]: 2025-10-02 12:35:28.234 2 DEBUG nova.compute.manager [req-12674de4-bcb8-420e-b4bf-2b6e94da39f2 req-38559d64-711c-4274-ada9-a38f52cd0318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Received event network-vif-deleted-25e34da3-3184-4cc0-bf80-e5c92054631c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:28 np0005466031 nova_compute[235803]: 2025-10-02 12:35:28.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:29 np0005466031 podman[274074]: 2025-10-02 12:35:29.015258249 +0000 UTC m=+0.055430629 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct  2 08:35:29 np0005466031 podman[274073]: 2025-10-02 12:35:29.021343164 +0000 UTC m=+0.063893092 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:35:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:35:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:35:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:35:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:29.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:35:29 np0005466031 nova_compute[235803]: 2025-10-02 12:35:29.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:30.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:30 np0005466031 nova_compute[235803]: 2025-10-02 12:35:30.680 2 DEBUG oslo_concurrency.lockutils [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:30 np0005466031 nova_compute[235803]: 2025-10-02 12:35:30.681 2 DEBUG oslo_concurrency.lockutils [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:30 np0005466031 nova_compute[235803]: 2025-10-02 12:35:30.682 2 DEBUG oslo_concurrency.lockutils [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:30 np0005466031 nova_compute[235803]: 2025-10-02 12:35:30.682 2 DEBUG oslo_concurrency.lockutils [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:30 np0005466031 nova_compute[235803]: 2025-10-02 12:35:30.683 2 DEBUG oslo_concurrency.lockutils [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:30 np0005466031 nova_compute[235803]: 2025-10-02 12:35:30.685 2 INFO nova.compute.manager [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Terminating instance#033[00m
Oct  2 08:35:30 np0005466031 nova_compute[235803]: 2025-10-02 12:35:30.686 2 DEBUG nova.compute.manager [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:35:31 np0005466031 kernel: tap6936fa44-52 (unregistering): left promiscuous mode
Oct  2 08:35:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e248 e248: 3 total, 3 up, 3 in
Oct  2 08:35:31 np0005466031 NetworkManager[44907]: <info>  [1759408531.2060] device (tap6936fa44-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:35:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:31Z|00311|binding|INFO|Releasing lport 6936fa44-52a7-42a8-aeae-c4c4b7681742 from this chassis (sb_readonly=0)
Oct  2 08:35:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:31Z|00312|binding|INFO|Setting lport 6936fa44-52a7-42a8-aeae-c4c4b7681742 down in Southbound
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:31Z|00313|binding|INFO|Removing iface tap6936fa44-52 ovn-installed in OVS
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.245 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:0b:ec 10.100.0.5'], port_security=['fa:16:3e:03:0b:ec 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35e544965644721a29ebea7dd0cc74e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc6adb97-938b-4809-a20b-8e2efe39ddba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c769c030-e38a-4799-8979-0a203014e262, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=6936fa44-52a7-42a8-aeae-c4c4b7681742) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.246 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 6936fa44-52a7-42a8-aeae-c4c4b7681742 in datapath 4dd1e489-9cc3-4420-8577-3a250b110c9a unbound from our chassis#033[00m
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.247 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4dd1e489-9cc3-4420-8577-3a250b110c9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.249 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d711f963-3a40-4587-b9fb-3e57d33fdbbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.249 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a namespace which is not needed anymore#033[00m
Oct  2 08:35:31 np0005466031 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000056.scope: Deactivated successfully.
Oct  2 08:35:31 np0005466031 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000056.scope: Consumed 15.840s CPU time.
Oct  2 08:35:31 np0005466031 systemd-machined[192227]: Machine qemu-33-instance-00000056 terminated.
Oct  2 08:35:31 np0005466031 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[272696]: [NOTICE]   (272701) : haproxy version is 2.8.14-c23fe91
Oct  2 08:35:31 np0005466031 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[272696]: [NOTICE]   (272701) : path to executable is /usr/sbin/haproxy
Oct  2 08:35:31 np0005466031 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[272696]: [WARNING]  (272701) : Exiting Master process...
Oct  2 08:35:31 np0005466031 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[272696]: [ALERT]    (272701) : Current worker (272704) exited with code 143 (Terminated)
Oct  2 08:35:31 np0005466031 neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a[272696]: [WARNING]  (272701) : All workers exited. Exiting... (0)
Oct  2 08:35:31 np0005466031 systemd[1]: libpod-38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9.scope: Deactivated successfully.
Oct  2 08:35:31 np0005466031 podman[274165]: 2025-10-02 12:35:31.411156614 +0000 UTC m=+0.054295265 container died 38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:35:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:31.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:31 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9-userdata-shm.mount: Deactivated successfully.
Oct  2 08:35:31 np0005466031 systemd[1]: var-lib-containers-storage-overlay-9ad3f4b76021fba75374398f3fd0b44bf1dea9798fcb6a8286dc825d4b56e3ef-merged.mount: Deactivated successfully.
Oct  2 08:35:31 np0005466031 podman[274165]: 2025-10-02 12:35:31.457281913 +0000 UTC m=+0.100420564 container cleanup 38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:35:31 np0005466031 systemd[1]: libpod-conmon-38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9.scope: Deactivated successfully.
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.530 2 INFO nova.virt.libvirt.driver [-] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Instance destroyed successfully.#033[00m
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.530 2 DEBUG nova.objects.instance [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lazy-loading 'resources' on Instance uuid c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:31 np0005466031 podman[274196]: 2025-10-02 12:35:31.533689725 +0000 UTC m=+0.051563017 container remove 38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.541 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[426d58c4-dcf1-4676-ba77-b7860640fb85]: (4, ('Thu Oct  2 12:35:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a (38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9)\n38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9\nThu Oct  2 12:35:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a (38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9)\n38f2713a70a0f3d065e75f1bb7b400244a1ccb5cec670facff419c41779a50e9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.542 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b59ccc59-7369-4413-a8fc-304e062f66ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.543 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dd1e489-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:31 np0005466031 kernel: tap4dd1e489-90: left promiscuous mode
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.559 2 DEBUG nova.virt.libvirt.vif [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-770672439',display_name='tempest-ListServerFiltersTestJSON-instance-770672439',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-770672439',id=86,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:39Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b35e544965644721a29ebea7dd0cc74e',ramdisk_id='',reservation_id='r-sehv668s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-495234271',owner_user_name='tempest-ListServerFiltersTestJSON-495234271-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:39Z,user_data=None,user_id='c0d7f2725ce3440b9e998e6efddc4628',uuid=c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "address": "fa:16:3e:03:0b:ec", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6936fa44-52", "ovs_interfaceid": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.559 2 DEBUG nova.network.os_vif_util [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converting VIF {"id": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "address": "fa:16:3e:03:0b:ec", "network": {"id": "4dd1e489-9cc3-4420-8577-3a250b110c9a", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-781293881-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35e544965644721a29ebea7dd0cc74e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6936fa44-52", "ovs_interfaceid": "6936fa44-52a7-42a8-aeae-c4c4b7681742", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.560 2 DEBUG nova.network.os_vif_util [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:0b:ec,bridge_name='br-int',has_traffic_filtering=True,id=6936fa44-52a7-42a8-aeae-c4c4b7681742,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6936fa44-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.560 2 DEBUG os_vif [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:0b:ec,bridge_name='br-int',has_traffic_filtering=True,id=6936fa44-52a7-42a8-aeae-c4c4b7681742,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6936fa44-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.562 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6936fa44-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.565 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[20e6317c-8201-4d7f-bb5c-051cbaf6640d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:31 np0005466031 nova_compute[235803]: 2025-10-02 12:35:31.574 2 INFO os_vif [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:0b:ec,bridge_name='br-int',has_traffic_filtering=True,id=6936fa44-52a7-42a8-aeae-c4c4b7681742,network=Network(4dd1e489-9cc3-4420-8577-3a250b110c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6936fa44-52')#033[00m
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.590 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[903ecb1e-f502-4822-83ed-19e8516c83ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.591 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[db80546d-1c8d-4691-b221-1b93513f62e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.606 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9edafe89-26e1-4b3d-a99b-3691878665f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633398, 'reachable_time': 29058, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274238, 'error': None, 'target': 'ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.608 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4dd1e489-9cc3-4420-8577-3a250b110c9a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:35:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:31.609 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[dc21c977-0fac-4f52-9690-22e5ebbef9e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:31 np0005466031 systemd[1]: run-netns-ovnmeta\x2d4dd1e489\x2d9cc3\x2d4420\x2d8577\x2d3a250b110c9a.mount: Deactivated successfully.
Oct  2 08:35:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:32.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:33.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:33 np0005466031 nova_compute[235803]: 2025-10-02 12:35:33.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:34.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.297 2 DEBUG nova.compute.manager [req-29f397bb-34ac-45f1-995f-cc96d693ad2f req-ea5aeac8-c741-40fa-96d0-ee3227db14e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Received event network-vif-unplugged-6936fa44-52a7-42a8-aeae-c4c4b7681742 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.298 2 DEBUG oslo_concurrency.lockutils [req-29f397bb-34ac-45f1-995f-cc96d693ad2f req-ea5aeac8-c741-40fa-96d0-ee3227db14e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.298 2 DEBUG oslo_concurrency.lockutils [req-29f397bb-34ac-45f1-995f-cc96d693ad2f req-ea5aeac8-c741-40fa-96d0-ee3227db14e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.298 2 DEBUG oslo_concurrency.lockutils [req-29f397bb-34ac-45f1-995f-cc96d693ad2f req-ea5aeac8-c741-40fa-96d0-ee3227db14e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.298 2 DEBUG nova.compute.manager [req-29f397bb-34ac-45f1-995f-cc96d693ad2f req-ea5aeac8-c741-40fa-96d0-ee3227db14e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] No waiting events found dispatching network-vif-unplugged-6936fa44-52a7-42a8-aeae-c4c4b7681742 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.298 2 DEBUG nova.compute.manager [req-29f397bb-34ac-45f1-995f-cc96d693ad2f req-ea5aeac8-c741-40fa-96d0-ee3227db14e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Received event network-vif-unplugged-6936fa44-52a7-42a8-aeae-c4c4b7681742 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.298 2 DEBUG nova.compute.manager [req-29f397bb-34ac-45f1-995f-cc96d693ad2f req-ea5aeac8-c741-40fa-96d0-ee3227db14e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Received event network-vif-plugged-6936fa44-52a7-42a8-aeae-c4c4b7681742 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.299 2 DEBUG oslo_concurrency.lockutils [req-29f397bb-34ac-45f1-995f-cc96d693ad2f req-ea5aeac8-c741-40fa-96d0-ee3227db14e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.299 2 DEBUG oslo_concurrency.lockutils [req-29f397bb-34ac-45f1-995f-cc96d693ad2f req-ea5aeac8-c741-40fa-96d0-ee3227db14e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.299 2 DEBUG oslo_concurrency.lockutils [req-29f397bb-34ac-45f1-995f-cc96d693ad2f req-ea5aeac8-c741-40fa-96d0-ee3227db14e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.299 2 DEBUG nova.compute.manager [req-29f397bb-34ac-45f1-995f-cc96d693ad2f req-ea5aeac8-c741-40fa-96d0-ee3227db14e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] No waiting events found dispatching network-vif-plugged-6936fa44-52a7-42a8-aeae-c4c4b7681742 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.299 2 WARNING nova.compute.manager [req-29f397bb-34ac-45f1-995f-cc96d693ad2f req-ea5aeac8-c741-40fa-96d0-ee3227db14e3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Received unexpected event network-vif-plugged-6936fa44-52a7-42a8-aeae-c4c4b7681742 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.434 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408519.4327307, 2691f2bd-9b64-4627-9831-2a97cf3279fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.434 2 INFO nova.compute.manager [-] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:35:34 np0005466031 nova_compute[235803]: 2025-10-02 12:35:34.490 2 DEBUG nova.compute.manager [None req-2ecd37a2-0ee4-4703-931d-97aca9a1ab1f - - - - - -] [instance: 2691f2bd-9b64-4627-9831-2a97cf3279fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:35 np0005466031 nova_compute[235803]: 2025-10-02 12:35:35.318 2 INFO nova.virt.libvirt.driver [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Deleting instance files /var/lib/nova/instances/c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_del#033[00m
Oct  2 08:35:35 np0005466031 nova_compute[235803]: 2025-10-02 12:35:35.319 2 INFO nova.virt.libvirt.driver [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Deletion of /var/lib/nova/instances/c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c_del complete#033[00m
Oct  2 08:35:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:35.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:35 np0005466031 nova_compute[235803]: 2025-10-02 12:35:35.489 2 INFO nova.compute.manager [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Took 4.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:35:35 np0005466031 nova_compute[235803]: 2025-10-02 12:35:35.490 2 DEBUG oslo.service.loopingcall [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:35:35 np0005466031 nova_compute[235803]: 2025-10-02 12:35:35.490 2 DEBUG nova.compute.manager [-] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:35:35 np0005466031 nova_compute[235803]: 2025-10-02 12:35:35.490 2 DEBUG nova.network.neutron [-] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:35:36 np0005466031 nova_compute[235803]: 2025-10-02 12:35:36.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:36.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:36 np0005466031 nova_compute[235803]: 2025-10-02 12:35:36.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:37 np0005466031 nova_compute[235803]: 2025-10-02 12:35:37.392 2 DEBUG nova.network.neutron [-] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:37.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:37 np0005466031 nova_compute[235803]: 2025-10-02 12:35:37.459 2 INFO nova.compute.manager [-] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Took 1.97 seconds to deallocate network for instance.#033[00m
Oct  2 08:35:37 np0005466031 nova_compute[235803]: 2025-10-02 12:35:37.535 2 DEBUG oslo_concurrency.lockutils [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:37 np0005466031 nova_compute[235803]: 2025-10-02 12:35:37.535 2 DEBUG oslo_concurrency.lockutils [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:37 np0005466031 nova_compute[235803]: 2025-10-02 12:35:37.592 2 DEBUG nova.compute.manager [req-2d89f00c-22f7-4bbc-bdd4-9b78c9177348 req-bcd3fa31-2a0a-40fd-a342-8dca4f793f13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Received event network-vif-deleted-6936fa44-52a7-42a8-aeae-c4c4b7681742 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:37 np0005466031 nova_compute[235803]: 2025-10-02 12:35:37.677 2 DEBUG oslo_concurrency.processutils [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:38 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1625734000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:38 np0005466031 nova_compute[235803]: 2025-10-02 12:35:38.123 2 DEBUG oslo_concurrency.processutils [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:38 np0005466031 nova_compute[235803]: 2025-10-02 12:35:38.130 2 DEBUG nova.compute.provider_tree [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:35:38 np0005466031 nova_compute[235803]: 2025-10-02 12:35:38.150 2 DEBUG nova.scheduler.client.report [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:35:38 np0005466031 nova_compute[235803]: 2025-10-02 12:35:38.187 2 DEBUG oslo_concurrency.lockutils [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:38 np0005466031 nova_compute[235803]: 2025-10-02 12:35:38.214 2 INFO nova.scheduler.client.report [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Deleted allocations for instance c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c#033[00m
Oct  2 08:35:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:38.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:38 np0005466031 nova_compute[235803]: 2025-10-02 12:35:38.279 2 DEBUG oslo_concurrency.lockutils [None req-d762389a-dca8-4064-9fca-cc5df107f066 c0d7f2725ce3440b9e998e6efddc4628 b35e544965644721a29ebea7dd0cc74e - - default default] Lock "c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:38 np0005466031 nova_compute[235803]: 2025-10-02 12:35:38.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:39 np0005466031 nova_compute[235803]: 2025-10-02 12:35:39.009 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:39 np0005466031 nova_compute[235803]: 2025-10-02 12:35:39.009 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:39 np0005466031 nova_compute[235803]: 2025-10-02 12:35:39.025 2 DEBUG nova.compute.manager [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:35:39 np0005466031 nova_compute[235803]: 2025-10-02 12:35:39.119 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:39 np0005466031 nova_compute[235803]: 2025-10-02 12:35:39.120 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:39 np0005466031 nova_compute[235803]: 2025-10-02 12:35:39.128 2 DEBUG nova.virt.hardware [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:35:39 np0005466031 nova_compute[235803]: 2025-10-02 12:35:39.129 2 INFO nova.compute.claims [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:35:39 np0005466031 nova_compute[235803]: 2025-10-02 12:35:39.305 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:39.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/9764884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:39 np0005466031 nova_compute[235803]: 2025-10-02 12:35:39.738 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:39 np0005466031 nova_compute[235803]: 2025-10-02 12:35:39.745 2 DEBUG nova.compute.provider_tree [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:35:39 np0005466031 nova_compute[235803]: 2025-10-02 12:35:39.989 2 DEBUG nova.scheduler.client.report [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.038 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.039 2 DEBUG nova.compute.manager [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.090 2 DEBUG nova.compute.manager [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.091 2 DEBUG nova.network.neutron [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:35:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.127 2 INFO nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.178 2 DEBUG nova.compute.manager [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:35:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:40.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.300 2 DEBUG nova.compute.manager [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.302 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.302 2 INFO nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Creating image(s)#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.326 2 DEBUG nova.storage.rbd_utils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.354 2 DEBUG nova.storage.rbd_utils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.377 2 DEBUG nova.storage.rbd_utils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.381 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.458 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.459 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.459 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.460 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.485 2 DEBUG nova.storage.rbd_utils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.489 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:40 np0005466031 nova_compute[235803]: 2025-10-02 12:35:40.825 2 DEBUG nova.policy [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '71d69bc37f274fad8a0b06c0b96f2a64', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:35:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:35:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:41.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:35:41 np0005466031 nova_compute[235803]: 2025-10-02 12:35:41.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:42 np0005466031 nova_compute[235803]: 2025-10-02 12:35:42.012 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:42 np0005466031 nova_compute[235803]: 2025-10-02 12:35:42.073 2 DEBUG nova.storage.rbd_utils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] resizing rbd image a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:35:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:42.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:42 np0005466031 nova_compute[235803]: 2025-10-02 12:35:42.670 2 DEBUG nova.network.neutron [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Successfully created port: b31bc9d2-5589-460c-9a78-a1d800087345 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:35:42 np0005466031 nova_compute[235803]: 2025-10-02 12:35:42.788 2 DEBUG nova.objects.instance [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'migration_context' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:42 np0005466031 nova_compute[235803]: 2025-10-02 12:35:42.815 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:35:42 np0005466031 nova_compute[235803]: 2025-10-02 12:35:42.815 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Ensure instance console log exists: /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:35:42 np0005466031 nova_compute[235803]: 2025-10-02 12:35:42.816 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:42 np0005466031 nova_compute[235803]: 2025-10-02 12:35:42.816 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:42 np0005466031 nova_compute[235803]: 2025-10-02 12:35:42.816 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:43.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:43 np0005466031 nova_compute[235803]: 2025-10-02 12:35:43.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:44 np0005466031 nova_compute[235803]: 2025-10-02 12:35:44.151 2 DEBUG nova.network.neutron [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Successfully updated port: b31bc9d2-5589-460c-9a78-a1d800087345 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:35:44 np0005466031 nova_compute[235803]: 2025-10-02 12:35:44.184 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:35:44 np0005466031 nova_compute[235803]: 2025-10-02 12:35:44.184 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:35:44 np0005466031 nova_compute[235803]: 2025-10-02 12:35:44.185 2 DEBUG nova.network.neutron [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:35:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:44.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:44 np0005466031 nova_compute[235803]: 2025-10-02 12:35:44.299 2 DEBUG nova.compute.manager [req-e43aade4-62a3-43be-835c-51ff72fb5d63 req-7f0fdf40-17ac-4cce-819e-6f4c7b641cf5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:44 np0005466031 nova_compute[235803]: 2025-10-02 12:35:44.300 2 DEBUG nova.compute.manager [req-e43aade4-62a3-43be-835c-51ff72fb5d63 req-7f0fdf40-17ac-4cce-819e-6f4c7b641cf5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing instance network info cache due to event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:35:44 np0005466031 nova_compute[235803]: 2025-10-02 12:35:44.300 2 DEBUG oslo_concurrency.lockutils [req-e43aade4-62a3-43be-835c-51ff72fb5d63 req-7f0fdf40-17ac-4cce-819e-6f4c7b641cf5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:35:44 np0005466031 nova_compute[235803]: 2025-10-02 12:35:44.695 2 DEBUG nova.network.neutron [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:35:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:45.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.201 2 DEBUG nova.network.neutron [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:35:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:46.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.246 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.246 2 DEBUG nova.compute.manager [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance network_info: |[{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.247 2 DEBUG oslo_concurrency.lockutils [req-e43aade4-62a3-43be-835c-51ff72fb5d63 req-7f0fdf40-17ac-4cce-819e-6f4c7b641cf5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.247 2 DEBUG nova.network.neutron [req-e43aade4-62a3-43be-835c-51ff72fb5d63 req-7f0fdf40-17ac-4cce-819e-6f4c7b641cf5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.249 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Start _get_guest_xml network_info=[{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.253 2 WARNING nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.257 2 DEBUG nova.virt.libvirt.host [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.257 2 DEBUG nova.virt.libvirt.host [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.265 2 DEBUG nova.virt.libvirt.host [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.266 2 DEBUG nova.virt.libvirt.host [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.267 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.267 2 DEBUG nova.virt.hardware [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.267 2 DEBUG nova.virt.hardware [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.268 2 DEBUG nova.virt.hardware [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.268 2 DEBUG nova.virt.hardware [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.268 2 DEBUG nova.virt.hardware [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.268 2 DEBUG nova.virt.hardware [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.268 2 DEBUG nova.virt.hardware [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.268 2 DEBUG nova.virt.hardware [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.269 2 DEBUG nova.virt.hardware [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.269 2 DEBUG nova.virt.hardware [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.269 2 DEBUG nova.virt.hardware [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.271 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.528 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408531.5280066, c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.529 2 INFO nova.compute.manager [-] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.555 2 DEBUG nova.compute.manager [None req-f4ca2825-b672-42d7-9181-d19e93939f7a - - - - - -] [instance: c9fee9ad-99d7-429a-a27c-8d5e3e7e3a7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:35:46 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/169662621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.711 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.734 2 DEBUG nova.storage.rbd_utils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:46 np0005466031 nova_compute[235803]: 2025-10-02 12:35:46.737 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:35:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4057833807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.150 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.152 2 DEBUG nova.virt.libvirt.vif [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1748262975',display_name='tempest-ServerActionsTestJSON-server-1748262975',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1748262975',id=93,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-plwxzt7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a7ee799a-27f6-41a6-86dc-694c480fc3a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.152 2 DEBUG nova.network.os_vif_util [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.153 2 DEBUG nova.network.os_vif_util [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.154 2 DEBUG nova.objects.instance [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.178 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  <uuid>a7ee799a-27f6-41a6-86dc-694c480fc3a1</uuid>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  <name>instance-0000005d</name>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerActionsTestJSON-server-1748262975</nova:name>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:35:46</nova:creationTime>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <nova:port uuid="b31bc9d2-5589-460c-9a78-a1d800087345">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <entry name="serial">a7ee799a-27f6-41a6-86dc-694c480fc3a1</entry>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <entry name="uuid">a7ee799a-27f6-41a6-86dc-694c480fc3a1</entry>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk.config">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:46:e0:75"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <target dev="tapb31bc9d2-55"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/console.log" append="off"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:35:47 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:35:47 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:35:47 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:35:47 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.180 2 DEBUG nova.compute.manager [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Preparing to wait for external event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.180 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.181 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.181 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.181 2 DEBUG nova.virt.libvirt.vif [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1748262975',display_name='tempest-ServerActionsTestJSON-server-1748262975',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1748262975',id=93,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-plwxzt7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a7ee799a-27f6-41a6-86dc-694c480fc3a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.182 2 DEBUG nova.network.os_vif_util [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.182 2 DEBUG nova.network.os_vif_util [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.183 2 DEBUG os_vif [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.183 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb31bc9d2-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb31bc9d2-55, col_values=(('external_ids', {'iface-id': 'b31bc9d2-5589-460c-9a78-a1d800087345', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:e0:75', 'vm-uuid': 'a7ee799a-27f6-41a6-86dc-694c480fc3a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:47 np0005466031 NetworkManager[44907]: <info>  [1759408547.1902] manager: (tapb31bc9d2-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.195 2 INFO os_vif [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55')#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.265 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.266 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.266 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No VIF found with MAC fa:16:3e:46:e0:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.266 2 INFO nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Using config drive#033[00m
Oct  2 08:35:47 np0005466031 nova_compute[235803]: 2025-10-02 12:35:47.284 2 DEBUG nova.storage.rbd_utils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:47.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:48 np0005466031 nova_compute[235803]: 2025-10-02 12:35:48.037 2 INFO nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Creating config drive at /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/disk.config#033[00m
Oct  2 08:35:48 np0005466031 nova_compute[235803]: 2025-10-02 12:35:48.045 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzfsasqcn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:48 np0005466031 nova_compute[235803]: 2025-10-02 12:35:48.187 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzfsasqcn" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:48 np0005466031 nova_compute[235803]: 2025-10-02 12:35:48.216 2 DEBUG nova.storage.rbd_utils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:48 np0005466031 nova_compute[235803]: 2025-10-02 12:35:48.221 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/disk.config a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:48.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:48 np0005466031 nova_compute[235803]: 2025-10-02 12:35:48.288 2 DEBUG nova.network.neutron [req-e43aade4-62a3-43be-835c-51ff72fb5d63 req-7f0fdf40-17ac-4cce-819e-6f4c7b641cf5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updated VIF entry in instance network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:35:48 np0005466031 nova_compute[235803]: 2025-10-02 12:35:48.289 2 DEBUG nova.network.neutron [req-e43aade4-62a3-43be-835c-51ff72fb5d63 req-7f0fdf40-17ac-4cce-819e-6f4c7b641cf5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:48 np0005466031 nova_compute[235803]: 2025-10-02 12:35:48.306 2 DEBUG oslo_concurrency.lockutils [req-e43aade4-62a3-43be-835c-51ff72fb5d63 req-7f0fdf40-17ac-4cce-819e-6f4c7b641cf5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:35:49 np0005466031 nova_compute[235803]: 2025-10-02 12:35:49.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:49.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:49 np0005466031 nova_compute[235803]: 2025-10-02 12:35:49.622 2 DEBUG oslo_concurrency.processutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/disk.config a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:49 np0005466031 nova_compute[235803]: 2025-10-02 12:35:49.622 2 INFO nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Deleting local config drive /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/disk.config because it was imported into RBD.#033[00m
Oct  2 08:35:49 np0005466031 nova_compute[235803]: 2025-10-02 12:35:49.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:49 np0005466031 nova_compute[235803]: 2025-10-02 12:35:49.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:49 np0005466031 nova_compute[235803]: 2025-10-02 12:35:49.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:49 np0005466031 podman[274637]: 2025-10-02 12:35:49.652344219 +0000 UTC m=+0.082338914 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:35:49 np0005466031 podman[274636]: 2025-10-02 12:35:49.666417065 +0000 UTC m=+0.089770219 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  2 08:35:49 np0005466031 kernel: tapb31bc9d2-55: entered promiscuous mode
Oct  2 08:35:49 np0005466031 NetworkManager[44907]: <info>  [1759408549.6713] manager: (tapb31bc9d2-55): new Tun device (/org/freedesktop/NetworkManager/Devices/157)
Oct  2 08:35:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:49Z|00314|binding|INFO|Claiming lport b31bc9d2-5589-460c-9a78-a1d800087345 for this chassis.
Oct  2 08:35:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:49Z|00315|binding|INFO|b31bc9d2-5589-460c-9a78-a1d800087345: Claiming fa:16:3e:46:e0:75 10.100.0.12
Oct  2 08:35:49 np0005466031 nova_compute[235803]: 2025-10-02 12:35:49.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.697 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:e0:75 10.100.0.12'], port_security=['fa:16:3e:46:e0:75 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a7ee799a-27f6-41a6-86dc-694c480fc3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=b31bc9d2-5589-460c-9a78-a1d800087345) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.698 141898 INFO neutron.agent.ovn.metadata.agent [-] Port b31bc9d2-5589-460c-9a78-a1d800087345 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.700 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff#033[00m
Oct  2 08:35:49 np0005466031 systemd-machined[192227]: New machine qemu-36-instance-0000005d.
Oct  2 08:35:49 np0005466031 systemd-udevd[274694]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.711 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[40e771f9-d7ad-4250-9622-981c988b1c9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.712 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.714 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.714 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[27a7f97b-f0f8-40ad-86aa-0f655443db8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.714 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1c97fb9d-864c-4fdf-893c-e0aa9acdf930]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 NetworkManager[44907]: <info>  [1759408549.7193] device (tapb31bc9d2-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:35:49 np0005466031 NetworkManager[44907]: <info>  [1759408549.7202] device (tapb31bc9d2-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:35:49 np0005466031 systemd[1]: Started Virtual Machine qemu-36-instance-0000005d.
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.726 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6dcab0-0f5c-4d11-a2c2-8f596a742672]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.749 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[95f46caa-662d-4420-b421-d92d3e9598e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 nova_compute[235803]: 2025-10-02 12:35:49.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:49 np0005466031 nova_compute[235803]: 2025-10-02 12:35:49.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:49Z|00316|binding|INFO|Setting lport b31bc9d2-5589-460c-9a78-a1d800087345 ovn-installed in OVS
Oct  2 08:35:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:49Z|00317|binding|INFO|Setting lport b31bc9d2-5589-460c-9a78-a1d800087345 up in Southbound
Oct  2 08:35:49 np0005466031 nova_compute[235803]: 2025-10-02 12:35:49.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.777 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[6829576b-3c85-4452-9531-c443103e2bf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.782 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff70e61-7abe-4ac3-b76a-a9d1ec7ca329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 NetworkManager[44907]: <info>  [1759408549.7831] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/158)
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.821 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[4e353e3d-d42a-40ad-b36c-3c1348868815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.824 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e8bc7327-d796-4e29-9953-eacac27b9bc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 NetworkManager[44907]: <info>  [1759408549.8456] device (tapf011efa4-00): carrier: link connected
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.852 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[70673dec-f5d2-497b-8271-1d0afef2fd6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.869 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9c956ea4-e269-4c21-b833-d4e932b239d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640543, 'reachable_time': 30504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274728, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.884 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7322b6fd-e348-4195-87a5-2632b238be7e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640543, 'tstamp': 640543}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274729, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.897 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4fa3b8-bdea-4f8e-afaf-337e627db8ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640543, 'reachable_time': 30504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274730, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.926 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[20db7f4d-0531-43bc-873d-c95098e2394c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.978 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[45a8e7b0-f049-46ea-8f4f-67ce53479c7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.979 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.980 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.980 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:49 np0005466031 NetworkManager[44907]: <info>  [1759408549.9827] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Oct  2 08:35:49 np0005466031 kernel: tapf011efa4-00: entered promiscuous mode
Oct  2 08:35:49 np0005466031 nova_compute[235803]: 2025-10-02 12:35:49.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.988 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:49 np0005466031 nova_compute[235803]: 2025-10-02 12:35:49.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:49Z|00318|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.991 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.992 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[10f82eb6-51fd-4a94-a107-e0445e83a7a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.993 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:35:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:49.993 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:50.139 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:50.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.240 2 DEBUG nova.compute.manager [req-dee116e3-819e-4b43-a5ca-543deb7254c6 req-3095f0d6-2a91-4dc1-bf2c-1a699b32c591 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.240 2 DEBUG oslo_concurrency.lockutils [req-dee116e3-819e-4b43-a5ca-543deb7254c6 req-3095f0d6-2a91-4dc1-bf2c-1a699b32c591 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.241 2 DEBUG oslo_concurrency.lockutils [req-dee116e3-819e-4b43-a5ca-543deb7254c6 req-3095f0d6-2a91-4dc1-bf2c-1a699b32c591 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.241 2 DEBUG oslo_concurrency.lockutils [req-dee116e3-819e-4b43-a5ca-543deb7254c6 req-3095f0d6-2a91-4dc1-bf2c-1a699b32c591 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.241 2 DEBUG nova.compute.manager [req-dee116e3-819e-4b43-a5ca-543deb7254c6 req-3095f0d6-2a91-4dc1-bf2c-1a699b32c591 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Processing event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:35:50 np0005466031 podman[274798]: 2025-10-02 12:35:50.397195508 +0000 UTC m=+0.070581356 container create c943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 08:35:50 np0005466031 systemd[1]: Started libpod-conmon-c943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70.scope.
Oct  2 08:35:50 np0005466031 podman[274798]: 2025-10-02 12:35:50.353010784 +0000 UTC m=+0.026396662 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:35:50 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:35:50 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/372f6c2cd1f0e3f1f99cfa6989a085cfda1fd71de6824ffdb970a65585f159df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:35:50 np0005466031 podman[274798]: 2025-10-02 12:35:50.495168011 +0000 UTC m=+0.168553879 container init c943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:35:50 np0005466031 podman[274798]: 2025-10-02 12:35:50.500289999 +0000 UTC m=+0.173675847 container start c943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:35:50 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[274819]: [NOTICE]   (274823) : New worker (274825) forked
Oct  2 08:35:50 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[274819]: [NOTICE]   (274823) : Loading success.
Oct  2 08:35:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:50.560 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.804 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408550.8039126, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.806 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Started (Lifecycle Event)#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.809 2 DEBUG nova.compute.manager [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.812 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.815 2 INFO nova.virt.libvirt.driver [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance spawned successfully.#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.815 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.849 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.854 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.857 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.858 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.858 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.859 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.859 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.860 2 DEBUG nova.virt.libvirt.driver [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.885 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.886 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408550.804026, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.886 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.908 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.915 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408550.811887, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.916 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.947 2 INFO nova.compute.manager [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Took 10.65 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.947 2 DEBUG nova.compute.manager [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.948 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:50 np0005466031 nova_compute[235803]: 2025-10-02 12:35:50.952 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:35:51 np0005466031 nova_compute[235803]: 2025-10-02 12:35:51.059 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:35:51 np0005466031 nova_compute[235803]: 2025-10-02 12:35:51.099 2 INFO nova.compute.manager [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Took 12.00 seconds to build instance.#033[00m
Oct  2 08:35:51 np0005466031 nova_compute[235803]: 2025-10-02 12:35:51.130 2 DEBUG oslo_concurrency.lockutils [None req-7991d93f-3c06-478f-81c4-3228b056e06d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:35:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:51.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:35:51 np0005466031 nova_compute[235803]: 2025-10-02 12:35:51.656 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:51 np0005466031 nova_compute[235803]: 2025-10-02 12:35:51.656 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:35:51 np0005466031 nova_compute[235803]: 2025-10-02 12:35:51.656 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:35:52 np0005466031 nova_compute[235803]: 2025-10-02 12:35:52.182 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:35:52 np0005466031 nova_compute[235803]: 2025-10-02 12:35:52.183 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:35:52 np0005466031 nova_compute[235803]: 2025-10-02 12:35:52.183 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:35:52 np0005466031 nova_compute[235803]: 2025-10-02 12:35:52.183 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:52 np0005466031 nova_compute[235803]: 2025-10-02 12:35:52.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:35:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:52.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:35:53 np0005466031 nova_compute[235803]: 2025-10-02 12:35:53.014 2 DEBUG nova.compute.manager [req-8f711421-7675-499c-bc7a-47b553f8cdc2 req-18ad45b0-fbcd-4008-a63f-3a79c4b49f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:53 np0005466031 nova_compute[235803]: 2025-10-02 12:35:53.014 2 DEBUG oslo_concurrency.lockutils [req-8f711421-7675-499c-bc7a-47b553f8cdc2 req-18ad45b0-fbcd-4008-a63f-3a79c4b49f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:53 np0005466031 nova_compute[235803]: 2025-10-02 12:35:53.015 2 DEBUG oslo_concurrency.lockutils [req-8f711421-7675-499c-bc7a-47b553f8cdc2 req-18ad45b0-fbcd-4008-a63f-3a79c4b49f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:53 np0005466031 nova_compute[235803]: 2025-10-02 12:35:53.015 2 DEBUG oslo_concurrency.lockutils [req-8f711421-7675-499c-bc7a-47b553f8cdc2 req-18ad45b0-fbcd-4008-a63f-3a79c4b49f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:53 np0005466031 nova_compute[235803]: 2025-10-02 12:35:53.015 2 DEBUG nova.compute.manager [req-8f711421-7675-499c-bc7a-47b553f8cdc2 req-18ad45b0-fbcd-4008-a63f-3a79c4b49f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:53 np0005466031 nova_compute[235803]: 2025-10-02 12:35:53.016 2 WARNING nova.compute.manager [req-8f711421-7675-499c-bc7a-47b553f8cdc2 req-18ad45b0-fbcd-4008-a63f-3a79c4b49f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:35:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:53.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:54.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.891 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.914 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.914 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.915 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.915 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.915 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.916 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.916 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.916 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.950 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.950 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.950 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.950 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:35:54 np0005466031 nova_compute[235803]: 2025-10-02 12:35:54.951 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:55 np0005466031 NetworkManager[44907]: <info>  [1759408555.3087] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Oct  2 08:35:55 np0005466031 NetworkManager[44907]: <info>  [1759408555.3094] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Oct  2 08:35:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:55 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3979436471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.418 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:55Z|00319|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:55.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.502 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.503 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.643 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.644 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4419MB free_disk=20.967193603515625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.644 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.644 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.806 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance a7ee799a-27f6-41a6-86dc-694c480fc3a1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.806 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.806 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.961 2 DEBUG nova.compute.manager [req-ac4d866f-ce39-4b68-8f06-3509a8ec9c69 req-28867fdd-c3a6-449b-b1e5-71d403712583 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.962 2 DEBUG nova.compute.manager [req-ac4d866f-ce39-4b68-8f06-3509a8ec9c69 req-28867fdd-c3a6-449b-b1e5-71d403712583 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing instance network info cache due to event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.962 2 DEBUG oslo_concurrency.lockutils [req-ac4d866f-ce39-4b68-8f06-3509a8ec9c69 req-28867fdd-c3a6-449b-b1e5-71d403712583 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.962 2 DEBUG oslo_concurrency.lockutils [req-ac4d866f-ce39-4b68-8f06-3509a8ec9c69 req-28867fdd-c3a6-449b-b1e5-71d403712583 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:35:55 np0005466031 nova_compute[235803]: 2025-10-02 12:35:55.963 2 DEBUG nova.network.neutron [req-ac4d866f-ce39-4b68-8f06-3509a8ec9c69 req-28867fdd-c3a6-449b-b1e5-71d403712583 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:35:56 np0005466031 nova_compute[235803]: 2025-10-02 12:35:56.063 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:56.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:56 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/177129438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:56 np0005466031 nova_compute[235803]: 2025-10-02 12:35:56.517 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:56 np0005466031 nova_compute[235803]: 2025-10-02 12:35:56.522 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:35:56 np0005466031 nova_compute[235803]: 2025-10-02 12:35:56.546 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:35:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:35:56.562 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:56 np0005466031 nova_compute[235803]: 2025-10-02 12:35:56.581 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:35:56 np0005466031 nova_compute[235803]: 2025-10-02 12:35:56.582 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:57 np0005466031 nova_compute[235803]: 2025-10-02 12:35:57.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:57.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:57 np0005466031 nova_compute[235803]: 2025-10-02 12:35:57.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:57 np0005466031 nova_compute[235803]: 2025-10-02 12:35:57.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:57 np0005466031 nova_compute[235803]: 2025-10-02 12:35:57.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:35:57 np0005466031 nova_compute[235803]: 2025-10-02 12:35:57.664 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:35:57 np0005466031 nova_compute[235803]: 2025-10-02 12:35:57.675 2 DEBUG nova.network.neutron [req-ac4d866f-ce39-4b68-8f06-3509a8ec9c69 req-28867fdd-c3a6-449b-b1e5-71d403712583 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updated VIF entry in instance network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:35:57 np0005466031 nova_compute[235803]: 2025-10-02 12:35:57.675 2 DEBUG nova.network.neutron [req-ac4d866f-ce39-4b68-8f06-3509a8ec9c69 req-28867fdd-c3a6-449b-b1e5-71d403712583 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:57 np0005466031 nova_compute[235803]: 2025-10-02 12:35:57.698 2 DEBUG oslo_concurrency.lockutils [req-ac4d866f-ce39-4b68-8f06-3509a8ec9c69 req-28867fdd-c3a6-449b-b1e5-71d403712583 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:35:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:58.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:59 np0005466031 nova_compute[235803]: 2025-10-02 12:35:59.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:35:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:59.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:59 np0005466031 podman[274935]: 2025-10-02 12:35:59.625803025 +0000 UTC m=+0.054353577 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  2 08:35:59 np0005466031 podman[274934]: 2025-10-02 12:35:59.634512196 +0000 UTC m=+0.064538371 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:35:59 np0005466031 ovn_controller[132413]: 2025-10-02T12:35:59Z|00320|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:35:59 np0005466031 nova_compute[235803]: 2025-10-02 12:35:59.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:00.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:01.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:02 np0005466031 nova_compute[235803]: 2025-10-02 12:36:02.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:02.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:03 np0005466031 nova_compute[235803]: 2025-10-02 12:36:03.174 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:03.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:04 np0005466031 nova_compute[235803]: 2025-10-02 12:36:04.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:04.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:05 np0005466031 ovn_controller[132413]: 2025-10-02T12:36:05Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:e0:75 10.100.0.12
Oct  2 08:36:05 np0005466031 ovn_controller[132413]: 2025-10-02T12:36:05Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:e0:75 10.100.0.12
Oct  2 08:36:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:05.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:06.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:06 np0005466031 nova_compute[235803]: 2025-10-02 12:36:06.651 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:07 np0005466031 nova_compute[235803]: 2025-10-02 12:36:07.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:07.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:07 np0005466031 nova_compute[235803]: 2025-10-02 12:36:07.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:08.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:36:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.0 total, 600.0 interval
Cumulative writes: 8468 writes, 43K keys, 8468 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
Cumulative WAL: 8468 writes, 8468 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1619 writes, 8061 keys, 1619 commit groups, 1.0 writes per commit group, ingest: 16.19 MB, 0.03 MB/s
Interval WAL: 1619 writes, 1619 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     91.5      0.57              0.14        24    0.024       0      0       0.0       0.0
  L6      1/0    8.83 MB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   4.0    133.6    110.8      1.88              0.60        23    0.082    126K    12K       0.0       0.0
 Sum      1/0    8.83 MB   0.0      0.2     0.1      0.2       0.3      0.1       0.0   5.0    102.7    106.3      2.45              0.74        47    0.052    126K    12K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     60.6     59.6      1.16              0.18        12    0.097     40K   3025       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   0.0    133.6    110.8      1.88              0.60        23    0.082    126K    12K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     91.9      0.56              0.14        23    0.024       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 3000.0 total, 600.0 interval
Flush(GB): cumulative 0.051, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.25 GB write, 0.09 MB/s write, 0.25 GB read, 0.08 MB/s read, 2.4 seconds
Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5570d4fad1f0#2 capacity: 304.00 MB usage: 27.90 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000173 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1628,26.93 MB,8.85796%) FilterBlock(47,363.17 KB,0.116664%) IndexBlock(47,636.23 KB,0.204382%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  2 08:36:09 np0005466031 nova_compute[235803]: 2025-10-02 12:36:09.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:09.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:09 np0005466031 nova_compute[235803]: 2025-10-02 12:36:09.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:10.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:11.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:12 np0005466031 nova_compute[235803]: 2025-10-02 12:36:12.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:12.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:12 np0005466031 nova_compute[235803]: 2025-10-02 12:36:12.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:12 np0005466031 nova_compute[235803]: 2025-10-02 12:36:12.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:36:13 np0005466031 nova_compute[235803]: 2025-10-02 12:36:13.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:13.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:14 np0005466031 nova_compute[235803]: 2025-10-02 12:36:14.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:14.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:15.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:16 np0005466031 nova_compute[235803]: 2025-10-02 12:36:16.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:16.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:16 np0005466031 nova_compute[235803]: 2025-10-02 12:36:16.667 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:36:16 np0005466031 nova_compute[235803]: 2025-10-02 12:36:16.667 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:36:16 np0005466031 nova_compute[235803]: 2025-10-02 12:36:16.668 2 DEBUG nova.network.neutron [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:36:17 np0005466031 nova_compute[235803]: 2025-10-02 12:36:17.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:17.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:18.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:19 np0005466031 nova_compute[235803]: 2025-10-02 12:36:19.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:19 np0005466031 nova_compute[235803]: 2025-10-02 12:36:19.424 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Acquiring lock "66831177-e247-4ab1-9e6d-da697263db07" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:19 np0005466031 nova_compute[235803]: 2025-10-02 12:36:19.424 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:19 np0005466031 nova_compute[235803]: 2025-10-02 12:36:19.468 2 DEBUG nova.compute.manager [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:36:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:19.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:19 np0005466031 nova_compute[235803]: 2025-10-02 12:36:19.641 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:19 np0005466031 nova_compute[235803]: 2025-10-02 12:36:19.642 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:19 np0005466031 nova_compute[235803]: 2025-10-02 12:36:19.652 2 DEBUG nova.virt.hardware [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:36:19 np0005466031 nova_compute[235803]: 2025-10-02 12:36:19.652 2 INFO nova.compute.claims [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:36:19 np0005466031 nova_compute[235803]: 2025-10-02 12:36:19.856 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.043 2 DEBUG nova.network.neutron [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.062 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:36:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.222 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.223 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Creating file /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/72c30657335747579a45201fed8c3895.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.223 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/72c30657335747579a45201fed8c3895.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:20.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/995817035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.393 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.399 2 DEBUG nova.compute.provider_tree [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.436 2 DEBUG nova.scheduler.client.report [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.470 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.471 2 DEBUG nova.compute.manager [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.563 2 DEBUG nova.compute.manager [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.564 2 DEBUG nova.network.neutron [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.600 2 INFO nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:36:20 np0005466031 podman[275058]: 2025-10-02 12:36:20.62264098 +0000 UTC m=+0.052372077 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.645 2 DEBUG nova.compute.manager [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:36:20 np0005466031 podman[275059]: 2025-10-02 12:36:20.670734064 +0000 UTC m=+0.094845920 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.695 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/72c30657335747579a45201fed8c3895.tmp" returned: 1 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.695 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/72c30657335747579a45201fed8c3895.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.696 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Creating directory /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.696 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.858 2 DEBUG nova.compute.manager [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.860 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.860 2 INFO nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Creating image(s)#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.891 2 DEBUG nova.storage.rbd_utils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] rbd image 66831177-e247-4ab1-9e6d-da697263db07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.936 2 DEBUG nova.storage.rbd_utils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] rbd image 66831177-e247-4ab1-9e6d-da697263db07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.960 2 DEBUG nova.storage.rbd_utils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] rbd image 66831177-e247-4ab1-9e6d-da697263db07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.965 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.990 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.993 2 DEBUG nova.policy [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ea96e98e4914b1db7a21226386f1584', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '89c910193b7b4d4eaecb3a0a07473ae7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:36:20 np0005466031 nova_compute[235803]: 2025-10-02 12:36:20.998 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:36:21 np0005466031 nova_compute[235803]: 2025-10-02 12:36:21.036 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:21 np0005466031 nova_compute[235803]: 2025-10-02 12:36:21.037 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:21 np0005466031 nova_compute[235803]: 2025-10-02 12:36:21.038 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:21 np0005466031 nova_compute[235803]: 2025-10-02 12:36:21.038 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:21 np0005466031 nova_compute[235803]: 2025-10-02 12:36:21.135 2 DEBUG nova.storage.rbd_utils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] rbd image 66831177-e247-4ab1-9e6d-da697263db07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:21 np0005466031 nova_compute[235803]: 2025-10-02 12:36:21.139 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 66831177-e247-4ab1-9e6d-da697263db07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:21.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:22 np0005466031 nova_compute[235803]: 2025-10-02 12:36:22.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:22 np0005466031 nova_compute[235803]: 2025-10-02 12:36:22.222 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 66831177-e247-4ab1-9e6d-da697263db07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:22.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:22 np0005466031 nova_compute[235803]: 2025-10-02 12:36:22.298 2 DEBUG nova.storage.rbd_utils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] resizing rbd image 66831177-e247-4ab1-9e6d-da697263db07_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:36:22 np0005466031 nova_compute[235803]: 2025-10-02 12:36:22.420 2 DEBUG nova.objects.instance [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lazy-loading 'migration_context' on Instance uuid 66831177-e247-4ab1-9e6d-da697263db07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:23 np0005466031 nova_compute[235803]: 2025-10-02 12:36:23.124 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:36:23 np0005466031 nova_compute[235803]: 2025-10-02 12:36:23.125 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Ensure instance console log exists: /var/lib/nova/instances/66831177-e247-4ab1-9e6d-da697263db07/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:36:23 np0005466031 nova_compute[235803]: 2025-10-02 12:36:23.126 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:23 np0005466031 nova_compute[235803]: 2025-10-02 12:36:23.127 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:23 np0005466031 nova_compute[235803]: 2025-10-02 12:36:23.127 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:23 np0005466031 nova_compute[235803]: 2025-10-02 12:36:23.249 2 DEBUG nova.network.neutron [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Successfully created port: ddf79fa9-1a29-455c-a9ef-5dce823b7e60 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:36:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:23.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:23 np0005466031 kernel: tapb31bc9d2-55 (unregistering): left promiscuous mode
Oct  2 08:36:23 np0005466031 NetworkManager[44907]: <info>  [1759408583.8056] device (tapb31bc9d2-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:36:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:36:23Z|00321|binding|INFO|Releasing lport b31bc9d2-5589-460c-9a78-a1d800087345 from this chassis (sb_readonly=0)
Oct  2 08:36:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:36:23Z|00322|binding|INFO|Setting lport b31bc9d2-5589-460c-9a78-a1d800087345 down in Southbound
Oct  2 08:36:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:36:23Z|00323|binding|INFO|Removing iface tapb31bc9d2-55 ovn-installed in OVS
Oct  2 08:36:23 np0005466031 nova_compute[235803]: 2025-10-02 12:36:23.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:23 np0005466031 nova_compute[235803]: 2025-10-02 12:36:23.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:23.828 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:e0:75 10.100.0.12'], port_security=['fa:16:3e:46:e0:75 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a7ee799a-27f6-41a6-86dc-694c480fc3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=b31bc9d2-5589-460c-9a78-a1d800087345) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:36:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:23.830 141898 INFO neutron.agent.ovn.metadata.agent [-] Port b31bc9d2-5589-460c-9a78-a1d800087345 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis#033[00m
Oct  2 08:36:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:23.831 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:36:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:23.832 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd5d475-8317-438c-b129-cd25ff1ddbd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:23.833 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore#033[00m
Oct  2 08:36:23 np0005466031 nova_compute[235803]: 2025-10-02 12:36:23.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:23 np0005466031 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct  2 08:36:23 np0005466031 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000005d.scope: Consumed 14.318s CPU time.
Oct  2 08:36:23 np0005466031 systemd-machined[192227]: Machine qemu-36-instance-0000005d terminated.
Oct  2 08:36:23 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[274819]: [NOTICE]   (274823) : haproxy version is 2.8.14-c23fe91
Oct  2 08:36:23 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[274819]: [NOTICE]   (274823) : path to executable is /usr/sbin/haproxy
Oct  2 08:36:23 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[274819]: [WARNING]  (274823) : Exiting Master process...
Oct  2 08:36:23 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[274819]: [ALERT]    (274823) : Current worker (274825) exited with code 143 (Terminated)
Oct  2 08:36:23 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[274819]: [WARNING]  (274823) : All workers exited. Exiting... (0)
Oct  2 08:36:23 np0005466031 systemd[1]: libpod-c943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70.scope: Deactivated successfully.
Oct  2 08:36:23 np0005466031 podman[275294]: 2025-10-02 12:36:23.979444878 +0000 UTC m=+0.062217811 container died c943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.053 2 INFO nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.058 2 INFO nova.virt.libvirt.driver [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance destroyed successfully.#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.059 2 DEBUG nova.virt.libvirt.vif [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1748262975',display_name='tempest-ServerActionsTestJSON-server-1748262975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1748262975',id=93,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:35:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-plwxzt7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:36:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a7ee799a-27f6-41a6-86dc-694c480fc3a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:46:e0:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.060 2 DEBUG nova.network.os_vif_util [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:46:e0:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.060 2 DEBUG nova.network.os_vif_util [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.061 2 DEBUG os_vif [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.062 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb31bc9d2-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.069 2 INFO os_vif [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55')#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.073 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.073 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:36:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:24.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.287 2 DEBUG nova.compute.manager [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.288 2 DEBUG oslo_concurrency.lockutils [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.288 2 DEBUG oslo_concurrency.lockutils [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.289 2 DEBUG oslo_concurrency.lockutils [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.289 2 DEBUG nova.compute.manager [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.290 2 WARNING nova.compute.manager [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state active and task_state resize_migrating.#033[00m
Oct  2 08:36:24 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70-userdata-shm.mount: Deactivated successfully.
Oct  2 08:36:24 np0005466031 systemd[1]: var-lib-containers-storage-overlay-372f6c2cd1f0e3f1f99cfa6989a085cfda1fd71de6824ffdb970a65585f159df-merged.mount: Deactivated successfully.
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.503 2 DEBUG neutronclient.v2_0.client [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port b31bc9d2-5589-460c-9a78-a1d800087345 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.663 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.663 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:24 np0005466031 nova_compute[235803]: 2025-10-02 12:36:24.663 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:25 np0005466031 podman[275294]: 2025-10-02 12:36:25.180459547 +0000 UTC m=+1.263232510 container cleanup c943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:36:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:25 np0005466031 systemd[1]: libpod-conmon-c943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70.scope: Deactivated successfully.
Oct  2 08:36:25 np0005466031 podman[275334]: 2025-10-02 12:36:25.295740044 +0000 UTC m=+0.084666528 container remove c943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 08:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:25.301 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0c68e9-c80a-4da2-859c-e0cc3b7948ba]: (4, ('Thu Oct  2 12:36:23 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (c943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70)\nc943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70\nThu Oct  2 12:36:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (c943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70)\nc943461cda5743bb9b7abf63133019cb5c8151fc6941426c7d9204c99f3e2a70\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:25.304 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f326d5dd-08e8-4ef2-93be-f95478e52fa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:25.306 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:25 np0005466031 nova_compute[235803]: 2025-10-02 12:36:25.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:25 np0005466031 kernel: tapf011efa4-00: left promiscuous mode
Oct  2 08:36:25 np0005466031 nova_compute[235803]: 2025-10-02 12:36:25.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:25.329 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[066c1d16-8caa-458f-8dac-18c08b2470b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:25.355 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4642c456-2021-4a55-a014-b6b59fd7bebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:25.357 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[630942c2-d6b0-4514-a64c-57730a985559]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:25.372 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[10d22c5c-6f60-4f9e-a79f-2affc111b5bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640536, 'reachable_time': 29687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275350, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:25 np0005466031 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct  2 08:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:25.378 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:25.378 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[eb68f580-bd16-4ea7-aa2c-c8c8de127d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:25.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:25 np0005466031 nova_compute[235803]: 2025-10-02 12:36:25.728 2 DEBUG nova.network.neutron [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Successfully updated port: ddf79fa9-1a29-455c-a9ef-5dce823b7e60 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:36:25 np0005466031 nova_compute[235803]: 2025-10-02 12:36:25.758 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Acquiring lock "refresh_cache-66831177-e247-4ab1-9e6d-da697263db07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:36:25 np0005466031 nova_compute[235803]: 2025-10-02 12:36:25.758 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Acquired lock "refresh_cache-66831177-e247-4ab1-9e6d-da697263db07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:36:25 np0005466031 nova_compute[235803]: 2025-10-02 12:36:25.758 2 DEBUG nova.network.neutron [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:25.845 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:25.845 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:25.845 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:25 np0005466031 nova_compute[235803]: 2025-10-02 12:36:25.871 2 DEBUG nova.compute.manager [req-89170b12-3fc5-4d0b-ad1f-6e52a4faeb27 req-71d006e2-f904-4043-b350-de01284229e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Received event network-changed-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:25 np0005466031 nova_compute[235803]: 2025-10-02 12:36:25.871 2 DEBUG nova.compute.manager [req-89170b12-3fc5-4d0b-ad1f-6e52a4faeb27 req-71d006e2-f904-4043-b350-de01284229e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Refreshing instance network info cache due to event network-changed-ddf79fa9-1a29-455c-a9ef-5dce823b7e60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:36:25 np0005466031 nova_compute[235803]: 2025-10-02 12:36:25.872 2 DEBUG oslo_concurrency.lockutils [req-89170b12-3fc5-4d0b-ad1f-6e52a4faeb27 req-71d006e2-f904-4043-b350-de01284229e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-66831177-e247-4ab1-9e6d-da697263db07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:36:26 np0005466031 nova_compute[235803]: 2025-10-02 12:36:26.044 2 DEBUG nova.network.neutron [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:36:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:26.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:27 np0005466031 nova_compute[235803]: 2025-10-02 12:36:27.107 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:27.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:27 np0005466031 nova_compute[235803]: 2025-10-02 12:36:27.742 2 DEBUG nova.compute.manager [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:27 np0005466031 nova_compute[235803]: 2025-10-02 12:36:27.742 2 DEBUG oslo_concurrency.lockutils [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:27 np0005466031 nova_compute[235803]: 2025-10-02 12:36:27.743 2 DEBUG oslo_concurrency.lockutils [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:27 np0005466031 nova_compute[235803]: 2025-10-02 12:36:27.744 2 DEBUG oslo_concurrency.lockutils [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:27 np0005466031 nova_compute[235803]: 2025-10-02 12:36:27.744 2 DEBUG nova.compute.manager [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:27 np0005466031 nova_compute[235803]: 2025-10-02 12:36:27.745 2 WARNING nova.compute.manager [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:36:27 np0005466031 nova_compute[235803]: 2025-10-02 12:36:27.881 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Triggering sync for uuid 66831177-e247-4ab1-9e6d-da697263db07 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:36:27 np0005466031 nova_compute[235803]: 2025-10-02 12:36:27.881 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "66831177-e247-4ab1-9e6d-da697263db07" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:28.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.840 2 DEBUG nova.network.neutron [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Updating instance_info_cache with network_info: [{"id": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "address": "fa:16:3e:31:3f:6e", "network": {"id": "dae6b5d9-aa0a-4374-8253-08d4653cc037", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-100875840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89c910193b7b4d4eaecb3a0a07473ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddf79fa9-1a", "ovs_interfaceid": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.925 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Releasing lock "refresh_cache-66831177-e247-4ab1-9e6d-da697263db07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.925 2 DEBUG nova.compute.manager [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Instance network_info: |[{"id": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "address": "fa:16:3e:31:3f:6e", "network": {"id": "dae6b5d9-aa0a-4374-8253-08d4653cc037", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-100875840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89c910193b7b4d4eaecb3a0a07473ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddf79fa9-1a", "ovs_interfaceid": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.926 2 DEBUG oslo_concurrency.lockutils [req-89170b12-3fc5-4d0b-ad1f-6e52a4faeb27 req-71d006e2-f904-4043-b350-de01284229e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-66831177-e247-4ab1-9e6d-da697263db07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.926 2 DEBUG nova.network.neutron [req-89170b12-3fc5-4d0b-ad1f-6e52a4faeb27 req-71d006e2-f904-4043-b350-de01284229e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Refreshing network info cache for port ddf79fa9-1a29-455c-a9ef-5dce823b7e60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.928 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Start _get_guest_xml network_info=[{"id": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "address": "fa:16:3e:31:3f:6e", "network": {"id": "dae6b5d9-aa0a-4374-8253-08d4653cc037", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-100875840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89c910193b7b4d4eaecb3a0a07473ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddf79fa9-1a", "ovs_interfaceid": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.932 2 WARNING nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.945 2 DEBUG nova.virt.libvirt.host [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.946 2 DEBUG nova.virt.libvirt.host [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.948 2 DEBUG nova.virt.libvirt.host [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.949 2 DEBUG nova.virt.libvirt.host [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.949 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.950 2 DEBUG nova.virt.hardware [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.950 2 DEBUG nova.virt.hardware [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.950 2 DEBUG nova.virt.hardware [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.950 2 DEBUG nova.virt.hardware [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.951 2 DEBUG nova.virt.hardware [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.951 2 DEBUG nova.virt.hardware [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.951 2 DEBUG nova.virt.hardware [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.951 2 DEBUG nova.virt.hardware [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.951 2 DEBUG nova.virt.hardware [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.952 2 DEBUG nova.virt.hardware [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.952 2 DEBUG nova.virt.hardware [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:36:28 np0005466031 nova_compute[235803]: 2025-10-02 12:36:28.954 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:36:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2996427117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.391 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.418 2 DEBUG nova.storage.rbd_utils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] rbd image 66831177-e247-4ab1-9e6d-da697263db07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.422 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:29.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:36:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/795182459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.841 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.843 2 DEBUG nova.virt.libvirt.vif [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:36:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1735718745',display_name='tempest-ServerAddressesNegativeTestJSON-server-1735718745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1735718745',id=95,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89c910193b7b4d4eaecb3a0a07473ae7',ramdisk_id='',reservation_id='r-xq5d7m6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1170626334',owner_use
r_name='tempest-ServerAddressesNegativeTestJSON-1170626334-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:20Z,user_data=None,user_id='9ea96e98e4914b1db7a21226386f1584',uuid=66831177-e247-4ab1-9e6d-da697263db07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "address": "fa:16:3e:31:3f:6e", "network": {"id": "dae6b5d9-aa0a-4374-8253-08d4653cc037", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-100875840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89c910193b7b4d4eaecb3a0a07473ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddf79fa9-1a", "ovs_interfaceid": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.843 2 DEBUG nova.network.os_vif_util [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Converting VIF {"id": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "address": "fa:16:3e:31:3f:6e", "network": {"id": "dae6b5d9-aa0a-4374-8253-08d4653cc037", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-100875840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89c910193b7b4d4eaecb3a0a07473ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddf79fa9-1a", "ovs_interfaceid": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.844 2 DEBUG nova.network.os_vif_util [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:3f:6e,bridge_name='br-int',has_traffic_filtering=True,id=ddf79fa9-1a29-455c-a9ef-5dce823b7e60,network=Network(dae6b5d9-aa0a-4374-8253-08d4653cc037),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddf79fa9-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.845 2 DEBUG nova.objects.instance [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66831177-e247-4ab1-9e6d-da697263db07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.884 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  <uuid>66831177-e247-4ab1-9e6d-da697263db07</uuid>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  <name>instance-0000005f</name>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerAddressesNegativeTestJSON-server-1735718745</nova:name>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:36:28</nova:creationTime>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <nova:user uuid="9ea96e98e4914b1db7a21226386f1584">tempest-ServerAddressesNegativeTestJSON-1170626334-project-member</nova:user>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <nova:project uuid="89c910193b7b4d4eaecb3a0a07473ae7">tempest-ServerAddressesNegativeTestJSON-1170626334</nova:project>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <nova:port uuid="ddf79fa9-1a29-455c-a9ef-5dce823b7e60">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <entry name="serial">66831177-e247-4ab1-9e6d-da697263db07</entry>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <entry name="uuid">66831177-e247-4ab1-9e6d-da697263db07</entry>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/66831177-e247-4ab1-9e6d-da697263db07_disk">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/66831177-e247-4ab1-9e6d-da697263db07_disk.config">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:31:3f:6e"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <target dev="tapddf79fa9-1a"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/66831177-e247-4ab1-9e6d-da697263db07/console.log" append="off"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:36:29 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:36:29 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:36:29 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:36:29 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.885 2 DEBUG nova.compute.manager [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Preparing to wait for external event network-vif-plugged-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.885 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Acquiring lock "66831177-e247-4ab1-9e6d-da697263db07-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.886 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.886 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.886 2 DEBUG nova.virt.libvirt.vif [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:36:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1735718745',display_name='tempest-ServerAddressesNegativeTestJSON-server-1735718745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1735718745',id=95,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89c910193b7b4d4eaecb3a0a07473ae7',ramdisk_id='',reservation_id='r-xq5d7m6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1170626334'
,owner_user_name='tempest-ServerAddressesNegativeTestJSON-1170626334-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:20Z,user_data=None,user_id='9ea96e98e4914b1db7a21226386f1584',uuid=66831177-e247-4ab1-9e6d-da697263db07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "address": "fa:16:3e:31:3f:6e", "network": {"id": "dae6b5d9-aa0a-4374-8253-08d4653cc037", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-100875840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89c910193b7b4d4eaecb3a0a07473ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddf79fa9-1a", "ovs_interfaceid": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.887 2 DEBUG nova.network.os_vif_util [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Converting VIF {"id": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "address": "fa:16:3e:31:3f:6e", "network": {"id": "dae6b5d9-aa0a-4374-8253-08d4653cc037", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-100875840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89c910193b7b4d4eaecb3a0a07473ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddf79fa9-1a", "ovs_interfaceid": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.887 2 DEBUG nova.network.os_vif_util [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:3f:6e,bridge_name='br-int',has_traffic_filtering=True,id=ddf79fa9-1a29-455c-a9ef-5dce823b7e60,network=Network(dae6b5d9-aa0a-4374-8253-08d4653cc037),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddf79fa9-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.887 2 DEBUG os_vif [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:3f:6e,bridge_name='br-int',has_traffic_filtering=True,id=ddf79fa9-1a29-455c-a9ef-5dce823b7e60,network=Network(dae6b5d9-aa0a-4374-8253-08d4653cc037),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddf79fa9-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.888 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.889 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddf79fa9-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.892 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapddf79fa9-1a, col_values=(('external_ids', {'iface-id': 'ddf79fa9-1a29-455c-a9ef-5dce823b7e60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:31:3f:6e', 'vm-uuid': '66831177-e247-4ab1-9e6d-da697263db07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:29 np0005466031 NetworkManager[44907]: <info>  [1759408589.8943] manager: (tapddf79fa9-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:29 np0005466031 nova_compute[235803]: 2025-10-02 12:36:29.901 2 INFO os_vif [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:3f:6e,bridge_name='br-int',has_traffic_filtering=True,id=ddf79fa9-1a29-455c-a9ef-5dce823b7e60,network=Network(dae6b5d9-aa0a-4374-8253-08d4653cc037),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddf79fa9-1a')#033[00m
Oct  2 08:36:29 np0005466031 podman[275552]: 2025-10-02 12:36:29.989182472 +0000 UTC m=+0.051972296 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:36:29 np0005466031 podman[275551]: 2025-10-02 12:36:29.990229662 +0000 UTC m=+0.054251072 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:36:30 np0005466031 nova_compute[235803]: 2025-10-02 12:36:30.176 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:36:30 np0005466031 nova_compute[235803]: 2025-10-02 12:36:30.177 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:36:30 np0005466031 nova_compute[235803]: 2025-10-02 12:36:30.177 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] No VIF found with MAC fa:16:3e:31:3f:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:36:30 np0005466031 nova_compute[235803]: 2025-10-02 12:36:30.177 2 INFO nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Using config drive#033[00m
Oct  2 08:36:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:30 np0005466031 nova_compute[235803]: 2025-10-02 12:36:30.204 2 DEBUG nova.storage.rbd_utils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] rbd image 66831177-e247-4ab1-9e6d-da697263db07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:30.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:30 np0005466031 nova_compute[235803]: 2025-10-02 12:36:30.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:30 np0005466031 nova_compute[235803]: 2025-10-02 12:36:30.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:36:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:36:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.155 2 DEBUG nova.compute.manager [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.156 2 DEBUG nova.compute.manager [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing instance network info cache due to event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.157 2 DEBUG oslo_concurrency.lockutils [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.157 2 DEBUG oslo_concurrency.lockutils [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.158 2 DEBUG nova.network.neutron [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.362 2 INFO nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Creating config drive at /var/lib/nova/instances/66831177-e247-4ab1-9e6d-da697263db07/disk.config#033[00m
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.367 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66831177-e247-4ab1-9e6d-da697263db07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo5mt3z3z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.499 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66831177-e247-4ab1-9e6d-da697263db07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo5mt3z3z" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.528 2 DEBUG nova.storage.rbd_utils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] rbd image 66831177-e247-4ab1-9e6d-da697263db07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.531 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66831177-e247-4ab1-9e6d-da697263db07/disk.config 66831177-e247-4ab1-9e6d-da697263db07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:31.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.677 2 DEBUG oslo_concurrency.processutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66831177-e247-4ab1-9e6d-da697263db07/disk.config 66831177-e247-4ab1-9e6d-da697263db07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.677 2 INFO nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Deleting local config drive /var/lib/nova/instances/66831177-e247-4ab1-9e6d-da697263db07/disk.config because it was imported into RBD.#033[00m
Oct  2 08:36:31 np0005466031 kernel: tapddf79fa9-1a: entered promiscuous mode
Oct  2 08:36:31 np0005466031 NetworkManager[44907]: <info>  [1759408591.7266] manager: (tapddf79fa9-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:36:31Z|00324|binding|INFO|Claiming lport ddf79fa9-1a29-455c-a9ef-5dce823b7e60 for this chassis.
Oct  2 08:36:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:36:31Z|00325|binding|INFO|ddf79fa9-1a29-455c-a9ef-5dce823b7e60: Claiming fa:16:3e:31:3f:6e 10.100.0.14
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:31 np0005466031 systemd-machined[192227]: New machine qemu-37-instance-0000005f.
Oct  2 08:36:31 np0005466031 systemd-udevd[275664]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:36:31 np0005466031 NetworkManager[44907]: <info>  [1759408591.7662] device (tapddf79fa9-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:36:31 np0005466031 NetworkManager[44907]: <info>  [1759408591.7673] device (tapddf79fa9-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:36:31 np0005466031 systemd[1]: Started Virtual Machine qemu-37-instance-0000005f.
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.791 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:3f:6e 10.100.0.14'], port_security=['fa:16:3e:31:3f:6e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '66831177-e247-4ab1-9e6d-da697263db07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dae6b5d9-aa0a-4374-8253-08d4653cc037', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89c910193b7b4d4eaecb3a0a07473ae7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a6d8280b-dc8f-4fb6-9c2f-3349f85cee00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ed94bc6-f17a-49f6-9b4e-a3dc5ca28454, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=ddf79fa9-1a29-455c-a9ef-5dce823b7e60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.792 141898 INFO neutron.agent.ovn.metadata.agent [-] Port ddf79fa9-1a29-455c-a9ef-5dce823b7e60 in datapath dae6b5d9-aa0a-4374-8253-08d4653cc037 bound to our chassis#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.794 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dae6b5d9-aa0a-4374-8253-08d4653cc037#033[00m
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:36:31Z|00326|binding|INFO|Setting lport ddf79fa9-1a29-455c-a9ef-5dce823b7e60 ovn-installed in OVS
Oct  2 08:36:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:36:31Z|00327|binding|INFO|Setting lport ddf79fa9-1a29-455c-a9ef-5dce823b7e60 up in Southbound
Oct  2 08:36:31 np0005466031 nova_compute[235803]: 2025-10-02 12:36:31.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.804 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6afa5d4d-e81b-4518-8c82-67466b801c46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.805 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdae6b5d9-a1 in ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.806 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdae6b5d9-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.806 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[06ee8da0-4ad7-476f-946e-a358b2229291]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.807 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c1aed88c-53da-4bb9-b22e-52bc0b029343]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.820 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ea1f9c-4664-4828-b904-d1a310cf9e79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.843 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[db8d13c3-b49f-4b38-9f86-6856a3b3a22e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.872 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[280b01a0-0c10-4e70-805d-b18fdaca85f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.878 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3997db60-8a5f-40f9-b47b-42061778d672]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:31 np0005466031 NetworkManager[44907]: <info>  [1759408591.8784] manager: (tapdae6b5d9-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Oct  2 08:36:31 np0005466031 systemd-udevd[275666]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.906 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[961a9988-94f4-4e8d-9611-1ad30f64dfda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.909 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa538bb-39b7-4775-a252-adfb6bb8f03a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:31 np0005466031 NetworkManager[44907]: <info>  [1759408591.9325] device (tapdae6b5d9-a0): carrier: link connected
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.940 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f99505c3-1e3f-4188-996e-db4566622059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.959 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3d4af1-cc40-4035-9821-aeadf6bdfd3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdae6b5d9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:0a:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644752, 'reachable_time': 15840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275698, 'error': None, 'target': 'ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.976 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e592b33b-ab92-4193-9dc3-e7be95fac6e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:a61'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 644752, 'tstamp': 644752}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275699, 'error': None, 'target': 'ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:31.994 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a78d80ee-add0-483a-92dd-59da3528b6b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdae6b5d9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:0a:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644752, 'reachable_time': 15840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275700, 'error': None, 'target': 'ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:32.023 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6269a45d-389f-4024-826e-2549557bde81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:32.078 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b63df33e-7a4c-41c8-9e14-56988fc10269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:32.080 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdae6b5d9-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:32.080 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:32.080 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdae6b5d9-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:32 np0005466031 NetworkManager[44907]: <info>  [1759408592.0837] manager: (tapdae6b5d9-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Oct  2 08:36:32 np0005466031 kernel: tapdae6b5d9-a0: entered promiscuous mode
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:32.086 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdae6b5d9-a0, col_values=(('external_ids', {'iface-id': 'c76b95cf-0819-4d1f-9954-246f8c078616'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:32 np0005466031 ovn_controller[132413]: 2025-10-02T12:36:32Z|00328|binding|INFO|Releasing lport c76b95cf-0819-4d1f-9954-246f8c078616 from this chassis (sb_readonly=0)
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:32.109 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dae6b5d9-aa0a-4374-8253-08d4653cc037.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dae6b5d9-aa0a-4374-8253-08d4653cc037.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:32.110 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[63506ba2-a802-4c86-aef4-affd4f4cc0fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:32.111 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-dae6b5d9-aa0a-4374-8253-08d4653cc037
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/dae6b5d9-aa0a-4374-8253-08d4653cc037.pid.haproxy
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID dae6b5d9-aa0a-4374-8253-08d4653cc037
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:32.113 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037', 'env', 'PROCESS_TAG=haproxy-dae6b5d9-aa0a-4374-8253-08d4653cc037', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dae6b5d9-aa0a-4374-8253-08d4653cc037.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:36:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:32.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:32.287 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:36:32 np0005466031 podman[275775]: 2025-10-02 12:36:32.509059368 +0000 UTC m=+0.056079294 container create d39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:36:32 np0005466031 systemd[1]: Started libpod-conmon-d39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504.scope.
Oct  2 08:36:32 np0005466031 podman[275775]: 2025-10-02 12:36:32.4799103 +0000 UTC m=+0.026930256 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:36:32 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:36:32 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16f8cc6ad3a0052cc18b854c72ac9ebbd11a8780ab9cfb6b5930a56e662fff5a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:36:32 np0005466031 podman[275775]: 2025-10-02 12:36:32.606582865 +0000 UTC m=+0.153602801 container init d39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 08:36:32 np0005466031 podman[275775]: 2025-10-02 12:36:32.613610717 +0000 UTC m=+0.160630643 container start d39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.614 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408592.613347, 66831177-e247-4ab1-9e6d-da697263db07 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.615 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 66831177-e247-4ab1-9e6d-da697263db07] VM Started (Lifecycle Event)#033[00m
Oct  2 08:36:32 np0005466031 neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037[275790]: [NOTICE]   (275794) : New worker (275796) forked
Oct  2 08:36:32 np0005466031 neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037[275790]: [NOTICE]   (275794) : Loading success.
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.674 2 DEBUG nova.compute.manager [req-2e7414e0-7d7d-4059-8789-bafc065cbca1 req-cc2d6bbf-e4dd-44eb-a88c-2786e4899689 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Received event network-vif-plugged-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.674 2 DEBUG oslo_concurrency.lockutils [req-2e7414e0-7d7d-4059-8789-bafc065cbca1 req-cc2d6bbf-e4dd-44eb-a88c-2786e4899689 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "66831177-e247-4ab1-9e6d-da697263db07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.675 2 DEBUG oslo_concurrency.lockutils [req-2e7414e0-7d7d-4059-8789-bafc065cbca1 req-cc2d6bbf-e4dd-44eb-a88c-2786e4899689 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.675 2 DEBUG oslo_concurrency.lockutils [req-2e7414e0-7d7d-4059-8789-bafc065cbca1 req-cc2d6bbf-e4dd-44eb-a88c-2786e4899689 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.675 2 DEBUG nova.compute.manager [req-2e7414e0-7d7d-4059-8789-bafc065cbca1 req-cc2d6bbf-e4dd-44eb-a88c-2786e4899689 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Processing event network-vif-plugged-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.676 2 DEBUG nova.network.neutron [req-89170b12-3fc5-4d0b-ad1f-6e52a4faeb27 req-71d006e2-f904-4043-b350-de01284229e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Updated VIF entry in instance network info cache for port ddf79fa9-1a29-455c-a9ef-5dce823b7e60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.676 2 DEBUG nova.network.neutron [req-89170b12-3fc5-4d0b-ad1f-6e52a4faeb27 req-71d006e2-f904-4043-b350-de01284229e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Updating instance_info_cache with network_info: [{"id": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "address": "fa:16:3e:31:3f:6e", "network": {"id": "dae6b5d9-aa0a-4374-8253-08d4653cc037", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-100875840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89c910193b7b4d4eaecb3a0a07473ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddf79fa9-1a", "ovs_interfaceid": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.677 2 DEBUG nova.compute.manager [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:36:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:32.677 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.679 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.682 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.685 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.689 2 INFO nova.virt.libvirt.driver [-] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Instance spawned successfully.#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.690 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.727 2 DEBUG oslo_concurrency.lockutils [req-89170b12-3fc5-4d0b-ad1f-6e52a4faeb27 req-71d006e2-f904-4043-b350-de01284229e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-66831177-e247-4ab1-9e6d-da697263db07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.733 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.733 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.734 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.734 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.735 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.735 2 DEBUG nova.virt.libvirt.driver [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.738 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 66831177-e247-4ab1-9e6d-da697263db07] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.738 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408592.613673, 66831177-e247-4ab1-9e6d-da697263db07 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.739 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 66831177-e247-4ab1-9e6d-da697263db07] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.782 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.785 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408592.6804342, 66831177-e247-4ab1-9e6d-da697263db07 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.785 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 66831177-e247-4ab1-9e6d-da697263db07] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.828 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.834 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.878 2 INFO nova.compute.manager [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Took 12.02 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.879 2 DEBUG nova.compute.manager [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.882 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 66831177-e247-4ab1-9e6d-da697263db07] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:36:32 np0005466031 nova_compute[235803]: 2025-10-02 12:36:32.986 2 INFO nova.compute.manager [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Took 13.45 seconds to build instance.#033[00m
Oct  2 08:36:33 np0005466031 nova_compute[235803]: 2025-10-02 12:36:33.048 2 DEBUG oslo_concurrency.lockutils [None req-4a700ccd-bcf1-41de-8754-48790556ab5c 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:33 np0005466031 nova_compute[235803]: 2025-10-02 12:36:33.048 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "66831177-e247-4ab1-9e6d-da697263db07" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 5.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:33 np0005466031 nova_compute[235803]: 2025-10-02 12:36:33.048 2 INFO nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 66831177-e247-4ab1-9e6d-da697263db07] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:36:33 np0005466031 nova_compute[235803]: 2025-10-02 12:36:33.049 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "66831177-e247-4ab1-9e6d-da697263db07" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e249 e249: 3 total, 3 up, 3 in
Oct  2 08:36:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:33.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:34 np0005466031 nova_compute[235803]: 2025-10-02 12:36:34.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:36:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:34.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:36:34 np0005466031 nova_compute[235803]: 2025-10-02 12:36:34.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:35 np0005466031 nova_compute[235803]: 2025-10-02 12:36:35.328 2 DEBUG nova.compute.manager [req-276f779c-4910-4333-82b6-992519a5a2bf req-bb0fc130-98b3-42e5-9add-70360d75efb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Received event network-vif-plugged-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:35 np0005466031 nova_compute[235803]: 2025-10-02 12:36:35.328 2 DEBUG oslo_concurrency.lockutils [req-276f779c-4910-4333-82b6-992519a5a2bf req-bb0fc130-98b3-42e5-9add-70360d75efb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "66831177-e247-4ab1-9e6d-da697263db07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:35 np0005466031 nova_compute[235803]: 2025-10-02 12:36:35.328 2 DEBUG oslo_concurrency.lockutils [req-276f779c-4910-4333-82b6-992519a5a2bf req-bb0fc130-98b3-42e5-9add-70360d75efb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:35 np0005466031 nova_compute[235803]: 2025-10-02 12:36:35.328 2 DEBUG oslo_concurrency.lockutils [req-276f779c-4910-4333-82b6-992519a5a2bf req-bb0fc130-98b3-42e5-9add-70360d75efb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:35 np0005466031 nova_compute[235803]: 2025-10-02 12:36:35.329 2 DEBUG nova.compute.manager [req-276f779c-4910-4333-82b6-992519a5a2bf req-bb0fc130-98b3-42e5-9add-70360d75efb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] No waiting events found dispatching network-vif-plugged-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:35 np0005466031 nova_compute[235803]: 2025-10-02 12:36:35.329 2 WARNING nova.compute.manager [req-276f779c-4910-4333-82b6-992519a5a2bf req-bb0fc130-98b3-42e5-9add-70360d75efb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Received unexpected event network-vif-plugged-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:36:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:35.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:35 np0005466031 nova_compute[235803]: 2025-10-02 12:36:35.812 2 DEBUG nova.network.neutron [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updated VIF entry in instance network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:36:35 np0005466031 nova_compute[235803]: 2025-10-02 12:36:35.813 2 DEBUG nova.network.neutron [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:35 np0005466031 nova_compute[235803]: 2025-10-02 12:36:35.851 2 DEBUG oslo_concurrency.lockutils [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:36:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:36.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:37.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:38.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:38.680 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.053 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408584.0524888, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.053 2 INFO nova.compute.manager [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.097 2 DEBUG nova.compute.manager [None req-8c6768f5-ddf1-455e-867b-06734d9e139b - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.100 2 DEBUG nova.compute.manager [None req-8c6768f5-ddf1-455e-867b-06734d9e139b - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.133 2 INFO nova.compute.manager [None req-8c6768f5-ddf1-455e-867b-06734d9e139b - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Oct  2 08:36:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:39.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:36:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.973 2 DEBUG nova.compute.manager [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.974 2 DEBUG oslo_concurrency.lockutils [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.974 2 DEBUG oslo_concurrency.lockutils [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.975 2 DEBUG oslo_concurrency.lockutils [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.975 2 DEBUG nova.compute.manager [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.976 2 WARNING nova.compute.manager [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state None.#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.978 2 DEBUG oslo_concurrency.lockutils [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Acquiring lock "66831177-e247-4ab1-9e6d-da697263db07" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.978 2 DEBUG oslo_concurrency.lockutils [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.978 2 DEBUG oslo_concurrency.lockutils [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Acquiring lock "66831177-e247-4ab1-9e6d-da697263db07-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.979 2 DEBUG oslo_concurrency.lockutils [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.979 2 DEBUG oslo_concurrency.lockutils [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.980 2 INFO nova.compute.manager [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Terminating instance#033[00m
Oct  2 08:36:39 np0005466031 nova_compute[235803]: 2025-10-02 12:36:39.981 2 DEBUG nova.compute.manager [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:36:40 np0005466031 kernel: tapddf79fa9-1a (unregistering): left promiscuous mode
Oct  2 08:36:40 np0005466031 NetworkManager[44907]: <info>  [1759408600.1339] device (tapddf79fa9-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:36:40 np0005466031 ovn_controller[132413]: 2025-10-02T12:36:40Z|00329|binding|INFO|Releasing lport ddf79fa9-1a29-455c-a9ef-5dce823b7e60 from this chassis (sb_readonly=0)
Oct  2 08:36:40 np0005466031 ovn_controller[132413]: 2025-10-02T12:36:40Z|00330|binding|INFO|Setting lport ddf79fa9-1a29-455c-a9ef-5dce823b7e60 down in Southbound
Oct  2 08:36:40 np0005466031 ovn_controller[132413]: 2025-10-02T12:36:40Z|00331|binding|INFO|Removing iface tapddf79fa9-1a ovn-installed in OVS
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.156 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:3f:6e 10.100.0.14'], port_security=['fa:16:3e:31:3f:6e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '66831177-e247-4ab1-9e6d-da697263db07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dae6b5d9-aa0a-4374-8253-08d4653cc037', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89c910193b7b4d4eaecb3a0a07473ae7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a6d8280b-dc8f-4fb6-9c2f-3349f85cee00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ed94bc6-f17a-49f6-9b4e-a3dc5ca28454, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=ddf79fa9-1a29-455c-a9ef-5dce823b7e60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.158 141898 INFO neutron.agent.ovn.metadata.agent [-] Port ddf79fa9-1a29-455c-a9ef-5dce823b7e60 in datapath dae6b5d9-aa0a-4374-8253-08d4653cc037 unbound from our chassis#033[00m
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.160 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dae6b5d9-aa0a-4374-8253-08d4653cc037, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.161 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ed57ef56-6856-4692-9406-d91d1c1a6f64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.162 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037 namespace which is not needed anymore#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:40 np0005466031 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Oct  2 08:36:40 np0005466031 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000005f.scope: Consumed 8.246s CPU time.
Oct  2 08:36:40 np0005466031 systemd-machined[192227]: Machine qemu-37-instance-0000005f terminated.
Oct  2 08:36:40 np0005466031 neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037[275790]: [NOTICE]   (275794) : haproxy version is 2.8.14-c23fe91
Oct  2 08:36:40 np0005466031 neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037[275790]: [NOTICE]   (275794) : path to executable is /usr/sbin/haproxy
Oct  2 08:36:40 np0005466031 neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037[275790]: [WARNING]  (275794) : Exiting Master process...
Oct  2 08:36:40 np0005466031 neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037[275790]: [WARNING]  (275794) : Exiting Master process...
Oct  2 08:36:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:40 np0005466031 neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037[275790]: [ALERT]    (275794) : Current worker (275796) exited with code 143 (Terminated)
Oct  2 08:36:40 np0005466031 neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037[275790]: [WARNING]  (275794) : All workers exited. Exiting... (0)
Oct  2 08:36:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:40.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:40 np0005466031 systemd[1]: libpod-d39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504.scope: Deactivated successfully.
Oct  2 08:36:40 np0005466031 podman[275933]: 2025-10-02 12:36:40.295524884 +0000 UTC m=+0.042204085 container died d39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 08:36:40 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504-userdata-shm.mount: Deactivated successfully.
Oct  2 08:36:40 np0005466031 systemd[1]: var-lib-containers-storage-overlay-16f8cc6ad3a0052cc18b854c72ac9ebbd11a8780ab9cfb6b5930a56e662fff5a-merged.mount: Deactivated successfully.
Oct  2 08:36:40 np0005466031 podman[275933]: 2025-10-02 12:36:40.342795814 +0000 UTC m=+0.089475005 container cleanup d39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:36:40 np0005466031 systemd[1]: libpod-conmon-d39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504.scope: Deactivated successfully.
Oct  2 08:36:40 np0005466031 podman[275962]: 2025-10-02 12:36:40.409968107 +0000 UTC m=+0.044061519 container remove d39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.418 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e858b6-56ab-4926-bb07-5c8df67c7775]: (4, ('Thu Oct  2 12:36:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037 (d39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504)\nd39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504\nThu Oct  2 12:36:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037 (d39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504)\nd39a6a8d72f9055c906977f61324c5cb8130fd4af28cb7f879cac7d05ddaf504\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.420 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[088fa0c7-73f7-4719-b63c-80eff3a1bcc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.420 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdae6b5d9-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:40 np0005466031 kernel: tapdae6b5d9-a0: left promiscuous mode
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.426 2 INFO nova.virt.libvirt.driver [-] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Instance destroyed successfully.#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.427 2 DEBUG nova.objects.instance [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lazy-loading 'resources' on Instance uuid 66831177-e247-4ab1-9e6d-da697263db07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.447 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0368b5f6-e85b-422c-ae17-8e46153d744f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.448 2 DEBUG nova.virt.libvirt.vif [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:36:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1735718745',display_name='tempest-ServerAddressesNegativeTestJSON-server-1735718745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1735718745',id=95,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:36:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='89c910193b7b4d4eaecb3a0a07473ae7',ramdisk_id='',reservation_id='r-xq5d7m6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1170626334',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1170626334-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:36:32Z,user_data=None,user_id='9ea96e98e4914b1db7a21226386f1584',uuid=66831177-e247-4ab1-9e6d-da697263db07,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "address": "fa:16:3e:31:3f:6e", "network": {"id": "dae6b5d9-aa0a-4374-8253-08d4653cc037", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-100875840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89c910193b7b4d4eaecb3a0a07473ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddf79fa9-1a", "ovs_interfaceid": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.449 2 DEBUG nova.network.os_vif_util [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Converting VIF {"id": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "address": "fa:16:3e:31:3f:6e", "network": {"id": "dae6b5d9-aa0a-4374-8253-08d4653cc037", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-100875840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89c910193b7b4d4eaecb3a0a07473ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddf79fa9-1a", "ovs_interfaceid": "ddf79fa9-1a29-455c-a9ef-5dce823b7e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.450 2 DEBUG nova.network.os_vif_util [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:3f:6e,bridge_name='br-int',has_traffic_filtering=True,id=ddf79fa9-1a29-455c-a9ef-5dce823b7e60,network=Network(dae6b5d9-aa0a-4374-8253-08d4653cc037),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddf79fa9-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.450 2 DEBUG os_vif [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:3f:6e,bridge_name='br-int',has_traffic_filtering=True,id=ddf79fa9-1a29-455c-a9ef-5dce823b7e60,network=Network(dae6b5d9-aa0a-4374-8253-08d4653cc037),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddf79fa9-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddf79fa9-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:40 np0005466031 nova_compute[235803]: 2025-10-02 12:36:40.466 2 INFO os_vif [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:3f:6e,bridge_name='br-int',has_traffic_filtering=True,id=ddf79fa9-1a29-455c-a9ef-5dce823b7e60,network=Network(dae6b5d9-aa0a-4374-8253-08d4653cc037),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddf79fa9-1a')#033[00m
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.475 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[83001d1a-22f8-4edf-86e5-1b6ce12de80a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.476 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7aa77a-820c-4227-81c5-66aa0d0ded2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.491 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b9abbf8c-36d3-45aa-b3bd-8008fa23cda9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644745, 'reachable_time': 24445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276006, 'error': None, 'target': 'ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:40 np0005466031 systemd[1]: run-netns-ovnmeta\x2ddae6b5d9\x2daa0a\x2d4374\x2d8253\x2d08d4653cc037.mount: Deactivated successfully.
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.496 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dae6b5d9-aa0a-4374-8253-08d4653cc037 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:36:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:36:40.496 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[577ce280-a333-4a5c-8f8e-679041d0d561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:41 np0005466031 nova_compute[235803]: 2025-10-02 12:36:41.294 2 DEBUG nova.compute.manager [req-b576565b-1719-4828-8d88-a37d98cdcf66 req-ec649d83-c40e-488d-8f67-a32e03589415 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Received event network-vif-unplugged-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:41 np0005466031 nova_compute[235803]: 2025-10-02 12:36:41.295 2 DEBUG oslo_concurrency.lockutils [req-b576565b-1719-4828-8d88-a37d98cdcf66 req-ec649d83-c40e-488d-8f67-a32e03589415 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "66831177-e247-4ab1-9e6d-da697263db07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:41 np0005466031 nova_compute[235803]: 2025-10-02 12:36:41.296 2 DEBUG oslo_concurrency.lockutils [req-b576565b-1719-4828-8d88-a37d98cdcf66 req-ec649d83-c40e-488d-8f67-a32e03589415 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:41 np0005466031 nova_compute[235803]: 2025-10-02 12:36:41.296 2 DEBUG oslo_concurrency.lockutils [req-b576565b-1719-4828-8d88-a37d98cdcf66 req-ec649d83-c40e-488d-8f67-a32e03589415 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:41 np0005466031 nova_compute[235803]: 2025-10-02 12:36:41.296 2 DEBUG nova.compute.manager [req-b576565b-1719-4828-8d88-a37d98cdcf66 req-ec649d83-c40e-488d-8f67-a32e03589415 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] No waiting events found dispatching network-vif-unplugged-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:41 np0005466031 nova_compute[235803]: 2025-10-02 12:36:41.297 2 DEBUG nova.compute.manager [req-b576565b-1719-4828-8d88-a37d98cdcf66 req-ec649d83-c40e-488d-8f67-a32e03589415 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Received event network-vif-unplugged-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:36:41 np0005466031 nova_compute[235803]: 2025-10-02 12:36:41.451 2 INFO nova.virt.libvirt.driver [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Deleting instance files /var/lib/nova/instances/66831177-e247-4ab1-9e6d-da697263db07_del#033[00m
Oct  2 08:36:41 np0005466031 nova_compute[235803]: 2025-10-02 12:36:41.453 2 INFO nova.virt.libvirt.driver [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Deletion of /var/lib/nova/instances/66831177-e247-4ab1-9e6d-da697263db07_del complete#033[00m
Oct  2 08:36:41 np0005466031 nova_compute[235803]: 2025-10-02 12:36:41.536 2 INFO nova.compute.manager [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Took 1.55 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:36:41 np0005466031 nova_compute[235803]: 2025-10-02 12:36:41.537 2 DEBUG oslo.service.loopingcall [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:36:41 np0005466031 nova_compute[235803]: 2025-10-02 12:36:41.537 2 DEBUG nova.compute.manager [-] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:36:41 np0005466031 nova_compute[235803]: 2025-10-02 12:36:41.538 2 DEBUG nova.network.neutron [-] [instance: 66831177-e247-4ab1-9e6d-da697263db07] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:36:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:41.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:42 np0005466031 nova_compute[235803]: 2025-10-02 12:36:42.212 2 DEBUG nova.compute.manager [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:42 np0005466031 nova_compute[235803]: 2025-10-02 12:36:42.213 2 DEBUG oslo_concurrency.lockutils [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:42 np0005466031 nova_compute[235803]: 2025-10-02 12:36:42.214 2 DEBUG oslo_concurrency.lockutils [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:42 np0005466031 nova_compute[235803]: 2025-10-02 12:36:42.215 2 DEBUG oslo_concurrency.lockutils [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:42 np0005466031 nova_compute[235803]: 2025-10-02 12:36:42.215 2 DEBUG nova.compute.manager [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:42 np0005466031 nova_compute[235803]: 2025-10-02 12:36:42.216 2 WARNING nova.compute.manager [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state None.#033[00m
Oct  2 08:36:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:42.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:36:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 34K writes, 137K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.05 MB/s#012Cumulative WAL: 34K writes, 12K syncs, 2.79 writes per sync, written: 0.13 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 43.73 MB, 0.07 MB/s#012Interval WAL: 10K writes, 3772 syncs, 2.65 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.077 2 DEBUG nova.network.neutron [-] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.101 2 INFO nova.compute.manager [-] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Took 1.56 seconds to deallocate network for instance.#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.176 2 DEBUG oslo_concurrency.lockutils [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.176 2 DEBUG oslo_concurrency.lockutils [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.303 2 DEBUG oslo_concurrency.processutils [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.406 2 DEBUG nova.compute.manager [req-4e3f4907-f0fe-4bc7-ab09-a8e26d1b6ead req-9b529e56-677d-48a7-981f-7084a60c81ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Received event network-vif-plugged-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.407 2 DEBUG oslo_concurrency.lockutils [req-4e3f4907-f0fe-4bc7-ab09-a8e26d1b6ead req-9b529e56-677d-48a7-981f-7084a60c81ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "66831177-e247-4ab1-9e6d-da697263db07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.407 2 DEBUG oslo_concurrency.lockutils [req-4e3f4907-f0fe-4bc7-ab09-a8e26d1b6ead req-9b529e56-677d-48a7-981f-7084a60c81ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.408 2 DEBUG oslo_concurrency.lockutils [req-4e3f4907-f0fe-4bc7-ab09-a8e26d1b6ead req-9b529e56-677d-48a7-981f-7084a60c81ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.408 2 DEBUG nova.compute.manager [req-4e3f4907-f0fe-4bc7-ab09-a8e26d1b6ead req-9b529e56-677d-48a7-981f-7084a60c81ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] No waiting events found dispatching network-vif-plugged-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.408 2 WARNING nova.compute.manager [req-4e3f4907-f0fe-4bc7-ab09-a8e26d1b6ead req-9b529e56-677d-48a7-981f-7084a60c81ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Received unexpected event network-vif-plugged-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.409 2 DEBUG nova.compute.manager [req-4e3f4907-f0fe-4bc7-ab09-a8e26d1b6ead req-9b529e56-677d-48a7-981f-7084a60c81ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Received event network-vif-deleted-ddf79fa9-1a29-455c-a9ef-5dce823b7e60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:43.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2064274681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.754 2 DEBUG oslo_concurrency.processutils [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.761 2 DEBUG nova.compute.provider_tree [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.781 2 DEBUG nova.scheduler.client.report [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.823 2 DEBUG oslo_concurrency.lockutils [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.855 2 INFO nova.scheduler.client.report [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Deleted allocations for instance 66831177-e247-4ab1-9e6d-da697263db07#033[00m
Oct  2 08:36:43 np0005466031 nova_compute[235803]: 2025-10-02 12:36:43.957 2 DEBUG oslo_concurrency.lockutils [None req-50d3300b-5496-4330-96eb-7f2ae3ba7dca 9ea96e98e4914b1db7a21226386f1584 89c910193b7b4d4eaecb3a0a07473ae7 - - default default] Lock "66831177-e247-4ab1-9e6d-da697263db07" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:44 np0005466031 nova_compute[235803]: 2025-10-02 12:36:44.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:44.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:45 np0005466031 nova_compute[235803]: 2025-10-02 12:36:45.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:45.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:46.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:47 np0005466031 nova_compute[235803]: 2025-10-02 12:36:47.240 2 DEBUG nova.compute.manager [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:47 np0005466031 nova_compute[235803]: 2025-10-02 12:36:47.241 2 DEBUG oslo_concurrency.lockutils [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:47 np0005466031 nova_compute[235803]: 2025-10-02 12:36:47.241 2 DEBUG oslo_concurrency.lockutils [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:47 np0005466031 nova_compute[235803]: 2025-10-02 12:36:47.241 2 DEBUG oslo_concurrency.lockutils [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:47 np0005466031 nova_compute[235803]: 2025-10-02 12:36:47.241 2 DEBUG nova.compute.manager [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:47 np0005466031 nova_compute[235803]: 2025-10-02 12:36:47.241 2 WARNING nova.compute.manager [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:36:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:47.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:48.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:48 np0005466031 nova_compute[235803]: 2025-10-02 12:36:48.348 2 INFO nova.compute.manager [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Swapping old allocation on dict_keys(['f694d536-1dcd-4bb3-8516-534a40cdf6d7']) held by migration 375a3f6c-42dd-46ed-8db6-645863f744f6 for instance
Oct  2 08:36:48 np0005466031 nova_compute[235803]: 2025-10-02 12:36:48.421 2 DEBUG nova.scheduler.client.report [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Overwriting current allocation {'allocations': {'730da6ce-9754-46f0-88e3-0019d056443f': {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}, 'generation': 53}}, 'project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'user_id': '71d69bc37f274fad8a0b06c0b96f2a64', 'consumer_generation': 1} on consumer a7ee799a-27f6-41a6-86dc-694c480fc3a1 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Oct  2 08:36:49 np0005466031 nova_compute[235803]: 2025-10-02 12:36:49.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:36:49 np0005466031 nova_compute[235803]: 2025-10-02 12:36:49.086 2 INFO nova.network.neutron [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating port b31bc9d2-5589-460c-9a78-a1d800087345 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct  2 08:36:49 np0005466031 nova_compute[235803]: 2025-10-02 12:36:49.384 2 DEBUG nova.compute.manager [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:36:49 np0005466031 nova_compute[235803]: 2025-10-02 12:36:49.384 2 DEBUG oslo_concurrency.lockutils [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:36:49 np0005466031 nova_compute[235803]: 2025-10-02 12:36:49.384 2 DEBUG oslo_concurrency.lockutils [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:36:49 np0005466031 nova_compute[235803]: 2025-10-02 12:36:49.384 2 DEBUG oslo_concurrency.lockutils [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:36:49 np0005466031 nova_compute[235803]: 2025-10-02 12:36:49.384 2 DEBUG nova.compute.manager [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:36:49 np0005466031 nova_compute[235803]: 2025-10-02 12:36:49.385 2 WARNING nova.compute.manager [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state resize_reverting.
Oct  2 08:36:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:36:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:49.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:36:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:50.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:50 np0005466031 nova_compute[235803]: 2025-10-02 12:36:50.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:36:50 np0005466031 nova_compute[235803]: 2025-10-02 12:36:50.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:36:51 np0005466031 nova_compute[235803]: 2025-10-02 12:36:51.405 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:36:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:51.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:51 np0005466031 podman[276038]: 2025-10-02 12:36:51.626094455 +0000 UTC m=+0.053600254 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 08:36:51 np0005466031 nova_compute[235803]: 2025-10-02 12:36:51.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:36:51 np0005466031 nova_compute[235803]: 2025-10-02 12:36:51.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:36:51 np0005466031 podman[276039]: 2025-10-02 12:36:51.658500527 +0000 UTC m=+0.086342985 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:36:51 np0005466031 nova_compute[235803]: 2025-10-02 12:36:51.702 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:36:51 np0005466031 nova_compute[235803]: 2025-10-02 12:36:51.702 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:36:51 np0005466031 nova_compute[235803]: 2025-10-02 12:36:51.703 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:36:51 np0005466031 nova_compute[235803]: 2025-10-02 12:36:51.703 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 08:36:51 np0005466031 nova_compute[235803]: 2025-10-02 12:36:51.703 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:36:51 np0005466031 nova_compute[235803]: 2025-10-02 12:36:51.849 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:36:51 np0005466031 nova_compute[235803]: 2025-10-02 12:36:51.849 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:36:51 np0005466031 nova_compute[235803]: 2025-10-02 12:36:51.850 2 DEBUG nova.network.neutron [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.090 2 DEBUG nova.compute.manager [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.091 2 DEBUG nova.compute.manager [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing instance network info cache due to event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.091 2 DEBUG oslo_concurrency.lockutils [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:36:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2931010887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.119 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:36:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:52.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.320 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.321 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.463 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.464 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4554MB free_disk=20.921703338623047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.464 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.465 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.616 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance a7ee799a-27f6-41a6-86dc-694c480fc3a1 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.617 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.617 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 08:36:52 np0005466031 nova_compute[235803]: 2025-10-02 12:36:52.683 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:36:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:53 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3774149522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:53 np0005466031 nova_compute[235803]: 2025-10-02 12:36:53.133 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:36:53 np0005466031 nova_compute[235803]: 2025-10-02 12:36:53.138 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:36:53 np0005466031 nova_compute[235803]: 2025-10-02 12:36:53.163 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:36:53 np0005466031 nova_compute[235803]: 2025-10-02 12:36:53.200 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:36:53 np0005466031 nova_compute[235803]: 2025-10-02 12:36:53.201 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:36:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:53.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:54 np0005466031 nova_compute[235803]: 2025-10-02 12:36:54.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:36:54 np0005466031 nova_compute[235803]: 2025-10-02 12:36:54.201 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:36:54 np0005466031 nova_compute[235803]: 2025-10-02 12:36:54.202 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:36:54 np0005466031 nova_compute[235803]: 2025-10-02 12:36:54.202 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:36:54 np0005466031 nova_compute[235803]: 2025-10-02 12:36:54.255 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:36:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:54.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:54 np0005466031 nova_compute[235803]: 2025-10-02 12:36:54.476 2 DEBUG nova.network.neutron [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:36:54 np0005466031 nova_compute[235803]: 2025-10-02 12:36:54.529 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:36:54 np0005466031 nova_compute[235803]: 2025-10-02 12:36:54.530 2 DEBUG nova.virt.libvirt.driver [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Oct  2 08:36:54 np0005466031 nova_compute[235803]: 2025-10-02 12:36:54.564 2 DEBUG oslo_concurrency.lockutils [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:36:54 np0005466031 nova_compute[235803]: 2025-10-02 12:36:54.564 2 DEBUG nova.network.neutron [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:36:54 np0005466031 nova_compute[235803]: 2025-10-02 12:36:54.670 2 DEBUG nova.storage.rbd_utils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rolling back rbd image(a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Oct  2 08:36:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:55 np0005466031 nova_compute[235803]: 2025-10-02 12:36:55.423 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408600.4216204, 66831177-e247-4ab1-9e6d-da697263db07 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:36:55 np0005466031 nova_compute[235803]: 2025-10-02 12:36:55.424 2 INFO nova.compute.manager [-] [instance: 66831177-e247-4ab1-9e6d-da697263db07] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:36:55 np0005466031 nova_compute[235803]: 2025-10-02 12:36:55.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:55 np0005466031 nova_compute[235803]: 2025-10-02 12:36:55.461 2 DEBUG nova.compute.manager [None req-92657512-c23d-4f01-9469-82470d4f2ce6 - - - - - -] [instance: 66831177-e247-4ab1-9e6d-da697263db07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:55.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:56.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:56 np0005466031 nova_compute[235803]: 2025-10-02 12:36:56.694 2 DEBUG nova.storage.rbd_utils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] removing snapshot(nova-resize) on rbd image(a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:36:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:57.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e250 e250: 3 total, 3 up, 3 in
Oct  2 08:36:57 np0005466031 nova_compute[235803]: 2025-10-02 12:36:57.907 2 DEBUG nova.network.neutron [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updated VIF entry in instance network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:36:57 np0005466031 nova_compute[235803]: 2025-10-02 12:36:57.908 2 DEBUG nova.network.neutron [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.039 2 DEBUG nova.virt.libvirt.driver [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Start _get_guest_xml network_info=[{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.042 2 WARNING nova.virt.libvirt.driver [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.046 2 DEBUG nova.virt.libvirt.host [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.047 2 DEBUG nova.virt.libvirt.host [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.052 2 DEBUG nova.virt.libvirt.host [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.052 2 DEBUG nova.virt.libvirt.host [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.053 2 DEBUG nova.virt.libvirt.driver [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.053 2 DEBUG nova.virt.hardware [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.054 2 DEBUG nova.virt.hardware [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.054 2 DEBUG nova.virt.hardware [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.054 2 DEBUG nova.virt.hardware [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.054 2 DEBUG nova.virt.hardware [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.055 2 DEBUG nova.virt.hardware [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.055 2 DEBUG nova.virt.hardware [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.055 2 DEBUG nova.virt.hardware [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.055 2 DEBUG nova.virt.hardware [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.055 2 DEBUG nova.virt.hardware [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.055 2 DEBUG nova.virt.hardware [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:36:58 np0005466031 nova_compute[235803]: 2025-10-02 12:36:58.056 2 DEBUG nova.objects.instance [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:58 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:36:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:58.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:59 np0005466031 nova_compute[235803]: 2025-10-02 12:36:59.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:59 np0005466031 nova_compute[235803]: 2025-10-02 12:36:59.436 2 DEBUG oslo_concurrency.processutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:59 np0005466031 nova_compute[235803]: 2025-10-02 12:36:59.472 2 DEBUG oslo_concurrency.lockutils [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:36:59 np0005466031 nova_compute[235803]: 2025-10-02 12:36:59.473 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:36:59 np0005466031 nova_compute[235803]: 2025-10-02 12:36:59.474 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:36:59 np0005466031 nova_compute[235803]: 2025-10-02 12:36:59.474 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:36:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:36:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:59.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:36:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:36:59 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3305874569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:36:59 np0005466031 nova_compute[235803]: 2025-10-02 12:36:59.896 2 DEBUG oslo_concurrency.processutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:59 np0005466031 nova_compute[235803]: 2025-10-02 12:36:59.934 2 DEBUG oslo_concurrency.processutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:00.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:00 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1562042604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.471 2 DEBUG oslo_concurrency.processutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.472 2 DEBUG nova.virt.libvirt.vif [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1748262975',display_name='tempest-ServerActionsTestJSON-server-1748262975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1748262975',id=93,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:36:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-plwxzt7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:36:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a7ee799a-27f6-41a6-86dc-694c480fc3a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.473 2 DEBUG nova.network.os_vif_util [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.473 2 DEBUG nova.network.os_vif_util [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.476 2 DEBUG nova.virt.libvirt.driver [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  <uuid>a7ee799a-27f6-41a6-86dc-694c480fc3a1</uuid>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  <name>instance-0000005d</name>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerActionsTestJSON-server-1748262975</nova:name>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:36:58</nova:creationTime>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <nova:port uuid="b31bc9d2-5589-460c-9a78-a1d800087345">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <entry name="serial">a7ee799a-27f6-41a6-86dc-694c480fc3a1</entry>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <entry name="uuid">a7ee799a-27f6-41a6-86dc-694c480fc3a1</entry>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk.config">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:46:e0:75"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <target dev="tapb31bc9d2-55"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/console.log" append="off"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <input type="keyboard" bus="usb"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:37:00 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:37:00 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:37:00 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:37:00 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.477 2 DEBUG nova.compute.manager [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Preparing to wait for external event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.477 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.477 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.477 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.478 2 DEBUG nova.virt.libvirt.vif [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1748262975',display_name='tempest-ServerActionsTestJSON-server-1748262975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1748262975',id=93,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:36:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-plwxzt7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:36:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a7ee799a-27f6-41a6-86dc-694c480fc3a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.478 2 DEBUG nova.network.os_vif_util [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.479 2 DEBUG nova.network.os_vif_util [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.479 2 DEBUG os_vif [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.480 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.480 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb31bc9d2-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.485 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb31bc9d2-55, col_values=(('external_ids', {'iface-id': 'b31bc9d2-5589-460c-9a78-a1d800087345', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:e0:75', 'vm-uuid': 'a7ee799a-27f6-41a6-86dc-694c480fc3a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:00 np0005466031 NetworkManager[44907]: <info>  [1759408620.4893] manager: (tapb31bc9d2-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.496 2 INFO os_vif [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55')#033[00m
Oct  2 08:37:00 np0005466031 podman[276304]: 2025-10-02 12:37:00.620295624 +0000 UTC m=+0.048048984 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct  2 08:37:00 np0005466031 podman[276303]: 2025-10-02 12:37:00.627908913 +0000 UTC m=+0.057378852 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:37:00 np0005466031 kernel: tapb31bc9d2-55: entered promiscuous mode
Oct  2 08:37:00 np0005466031 NetworkManager[44907]: <info>  [1759408620.7207] manager: (tapb31bc9d2-55): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:00Z|00332|binding|INFO|Claiming lport b31bc9d2-5589-460c-9a78-a1d800087345 for this chassis.
Oct  2 08:37:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:00Z|00333|binding|INFO|b31bc9d2-5589-460c-9a78-a1d800087345: Claiming fa:16:3e:46:e0:75 10.100.0.12
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:00 np0005466031 NetworkManager[44907]: <info>  [1759408620.7523] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:00 np0005466031 NetworkManager[44907]: <info>  [1759408620.7540] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Oct  2 08:37:00 np0005466031 systemd-machined[192227]: New machine qemu-38-instance-0000005d.
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.776 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:e0:75 10.100.0.12'], port_security=['fa:16:3e:46:e0:75 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a7ee799a-27f6-41a6-86dc-694c480fc3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '10', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=b31bc9d2-5589-460c-9a78-a1d800087345) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.778 141898 INFO neutron.agent.ovn.metadata.agent [-] Port b31bc9d2-5589-460c-9a78-a1d800087345 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis#033[00m
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.779 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff#033[00m
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.789 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d3001ab4-ab25-4e5c-a2d2-c706bda6b197]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.790 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.792 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.792 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0f051401-d01e-4e6d-b72f-28e562d64745]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.792 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a46ab5c2-9417-499f-a719-55202a1f9c38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:00 np0005466031 systemd[1]: Started Virtual Machine qemu-38-instance-0000005d.
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.805 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[8e21a006-6eeb-4d28-ad4f-fbeaf7c7c5bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:00 np0005466031 systemd-udevd[276356]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:37:00 np0005466031 NetworkManager[44907]: <info>  [1759408620.8290] device (tapb31bc9d2-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:37:00 np0005466031 NetworkManager[44907]: <info>  [1759408620.8298] device (tapb31bc9d2-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.832 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[546171ab-7829-42dd-8e12-641dc4fbf9a6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.865 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e0c555-42ac-4346-a344-6c09d3c41ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:00 np0005466031 NetworkManager[44907]: <info>  [1759408620.8759] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/170)
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.875 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[700b453e-0e3b-445c-823d-5af2e31fec1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.907 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[6e906a62-19aa-44e1-821f-a256fc2ba9c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.910 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[531ccd94-6274-4fd8-ae0a-c58d5b93d860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:00 np0005466031 NetworkManager[44907]: <info>  [1759408620.9349] device (tapf011efa4-00): carrier: link connected
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.941 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c555b43b-c880-4c0b-bb16-bfdb5c98e9a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.956 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5a4f85-758e-47c5-85dc-f4096027f7c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647652, 'reachable_time': 19666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276387, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:00Z|00334|binding|INFO|Setting lport b31bc9d2-5589-460c-9a78-a1d800087345 ovn-installed in OVS
Oct  2 08:37:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:00Z|00335|binding|INFO|Setting lport b31bc9d2-5589-460c-9a78-a1d800087345 up in Southbound
Oct  2 08:37:00 np0005466031 nova_compute[235803]: 2025-10-02 12:37:00.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.971 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c183299e-e812-4718-a201-d16a475f678f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647652, 'tstamp': 647652}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276388, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:00.986 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f523f23e-25d0-4b00-a088-087b3f50a4d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647652, 'reachable_time': 19666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276389, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:01.010 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[667b0cab-8fd1-4e02-9280-bc57b1fa92f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:01 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:01.059 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[99ebfc36-1942-40c2-a84e-2f7d41473919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:01.061 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:01.061 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:01.061 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:01 np0005466031 kernel: tapf011efa4-00: entered promiscuous mode
Oct  2 08:37:01 np0005466031 NetworkManager[44907]: <info>  [1759408621.0635] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Oct  2 08:37:01 np0005466031 nova_compute[235803]: 2025-10-02 12:37:01.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:01.065 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:01Z|00336|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:37:01 np0005466031 nova_compute[235803]: 2025-10-02 12:37:01.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:01.082 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:01.083 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[064a953d-1e8f-4a8d-bb04-b0701242de28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:01.083 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:37:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:01.084 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:37:01 np0005466031 nova_compute[235803]: 2025-10-02 12:37:01.299 2 DEBUG nova.compute.manager [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:01 np0005466031 nova_compute[235803]: 2025-10-02 12:37:01.299 2 DEBUG oslo_concurrency.lockutils [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:01 np0005466031 nova_compute[235803]: 2025-10-02 12:37:01.299 2 DEBUG oslo_concurrency.lockutils [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:01 np0005466031 nova_compute[235803]: 2025-10-02 12:37:01.300 2 DEBUG oslo_concurrency.lockutils [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:01 np0005466031 nova_compute[235803]: 2025-10-02 12:37:01.300 2 DEBUG nova.compute.manager [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Processing event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:37:01 np0005466031 podman[276440]: 2025-10-02 12:37:01.405812605 +0000 UTC m=+0.021719406 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:37:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:01.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.233 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.256 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.257 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.257 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.258 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.258 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.258 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.259 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.259 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:37:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:02.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:02 np0005466031 podman[276440]: 2025-10-02 12:37:02.408411024 +0000 UTC m=+1.024317845 container create 307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:37:02 np0005466031 systemd[1]: Started libpod-conmon-307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e.scope.
Oct  2 08:37:02 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:37:02 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37db18f3c24dcd7e8191ab47ac561dbdcf2365f01d1dac9643bc1c87edb71dbe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:37:02 np0005466031 podman[276440]: 2025-10-02 12:37:02.711438773 +0000 UTC m=+1.327345564 container init 307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:37:02 np0005466031 podman[276440]: 2025-10-02 12:37:02.716918311 +0000 UTC m=+1.332825082 container start 307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:37:02 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[276480]: [NOTICE]   (276484) : New worker (276486) forked
Oct  2 08:37:02 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[276480]: [NOTICE]   (276484) : Loading success.
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.975 2 DEBUG nova.compute.manager [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.976 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408622.975107, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.976 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Started (Lifecycle Event)#033[00m
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.982 2 INFO nova.virt.libvirt.driver [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance running successfully.#033[00m
Oct  2 08:37:02 np0005466031 nova_compute[235803]: 2025-10-02 12:37:02.983 2 DEBUG nova.virt.libvirt.driver [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887#033[00m
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.035 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.039 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.070 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.070 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408622.9760134, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.071 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.128 2 INFO nova.compute.manager [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance to original state: 'active'
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.133 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.137 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408622.9786844, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.137 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Resumed (Lifecycle Event)
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.208 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.211 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.441 2 DEBUG nova.compute.manager [req-3adabc16-95d6-4248-a077-00fbd1436cb1 req-f78c65cc-ae60-49a8-84c5-7158ec068078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.441 2 DEBUG oslo_concurrency.lockutils [req-3adabc16-95d6-4248-a077-00fbd1436cb1 req-f78c65cc-ae60-49a8-84c5-7158ec068078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.441 2 DEBUG oslo_concurrency.lockutils [req-3adabc16-95d6-4248-a077-00fbd1436cb1 req-f78c65cc-ae60-49a8-84c5-7158ec068078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.442 2 DEBUG oslo_concurrency.lockutils [req-3adabc16-95d6-4248-a077-00fbd1436cb1 req-f78c65cc-ae60-49a8-84c5-7158ec068078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.442 2 DEBUG nova.compute.manager [req-3adabc16-95d6-4248-a077-00fbd1436cb1 req-f78c65cc-ae60-49a8-84c5-7158ec068078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:37:03 np0005466031 nova_compute[235803]: 2025-10-02 12:37:03.442 2 WARNING nova.compute.manager [req-3adabc16-95d6-4248-a077-00fbd1436cb1 req-f78c65cc-ae60-49a8-84c5-7158ec068078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state active and task_state None.
Oct  2 08:37:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:03.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:04 np0005466031 nova_compute[235803]: 2025-10-02 12:37:04.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:04.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:04 np0005466031 nova_compute[235803]: 2025-10-02 12:37:04.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:05 np0005466031 nova_compute[235803]: 2025-10-02 12:37:05.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:05.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e251 e251: 3 total, 3 up, 3 in
Oct  2 08:37:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:37:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:06.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.574 2 DEBUG oslo_concurrency.lockutils [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.575 2 DEBUG oslo_concurrency.lockutils [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.575 2 DEBUG oslo_concurrency.lockutils [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.575 2 DEBUG oslo_concurrency.lockutils [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.575 2 DEBUG oslo_concurrency.lockutils [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.576 2 INFO nova.compute.manager [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Terminating instance
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.578 2 DEBUG nova.compute.manager [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:37:06 np0005466031 kernel: tapb31bc9d2-55 (unregistering): left promiscuous mode
Oct  2 08:37:06 np0005466031 NetworkManager[44907]: <info>  [1759408626.6183] device (tapb31bc9d2-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:06 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:06Z|00337|binding|INFO|Releasing lport b31bc9d2-5589-460c-9a78-a1d800087345 from this chassis (sb_readonly=0)
Oct  2 08:37:06 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:06Z|00338|binding|INFO|Setting lport b31bc9d2-5589-460c-9a78-a1d800087345 down in Southbound
Oct  2 08:37:06 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:06Z|00339|binding|INFO|Removing iface tapb31bc9d2-55 ovn-installed in OVS
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:06.652 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:e0:75 10.100.0.12'], port_security=['fa:16:3e:46:e0:75 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a7ee799a-27f6-41a6-86dc-694c480fc3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '12', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=b31bc9d2-5589-460c-9a78-a1d800087345) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:37:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:06.656 141898 INFO neutron.agent.ovn.metadata.agent [-] Port b31bc9d2-5589-460c-9a78-a1d800087345 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct  2 08:37:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:06.657 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 08:37:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:06.658 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[61c3c0ca-a884-45c3-846c-1845474613a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:06.659 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore
Oct  2 08:37:06 np0005466031 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct  2 08:37:06 np0005466031 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005d.scope: Consumed 4.613s CPU time.
Oct  2 08:37:06 np0005466031 systemd-machined[192227]: Machine qemu-38-instance-0000005d terminated.
Oct  2 08:37:06 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[276480]: [NOTICE]   (276484) : haproxy version is 2.8.14-c23fe91
Oct  2 08:37:06 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[276480]: [NOTICE]   (276484) : path to executable is /usr/sbin/haproxy
Oct  2 08:37:06 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[276480]: [WARNING]  (276484) : Exiting Master process...
Oct  2 08:37:06 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[276480]: [ALERT]    (276484) : Current worker (276486) exited with code 143 (Terminated)
Oct  2 08:37:06 np0005466031 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[276480]: [WARNING]  (276484) : All workers exited. Exiting... (0)
Oct  2 08:37:06 np0005466031 systemd[1]: libpod-307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e.scope: Deactivated successfully.
Oct  2 08:37:06 np0005466031 podman[276522]: 2025-10-02 12:37:06.803335482 +0000 UTC m=+0.046424826 container died 307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.804 2 INFO nova.virt.libvirt.driver [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance destroyed successfully.
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.806 2 DEBUG nova.objects.instance [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:37:06 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e-userdata-shm.mount: Deactivated successfully.
Oct  2 08:37:06 np0005466031 systemd[1]: var-lib-containers-storage-overlay-37db18f3c24dcd7e8191ab47ac561dbdcf2365f01d1dac9643bc1c87edb71dbe-merged.mount: Deactivated successfully.
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.839 2 DEBUG nova.virt.libvirt.vif [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1748262975',display_name='tempest-ServerActionsTestJSON-server-1748262975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1748262975',id=93,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:03Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-plwxzt7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:37:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a7ee799a-27f6-41a6-86dc-694c480fc3a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.840 2 DEBUG nova.network.os_vif_util [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.842 2 DEBUG nova.network.os_vif_util [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.843 2 DEBUG os_vif [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.846 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb31bc9d2-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.853 2 INFO os_vif [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55')
Oct  2 08:37:06 np0005466031 podman[276522]: 2025-10-02 12:37:06.878767533 +0000 UTC m=+0.121856877 container cleanup 307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:37:06 np0005466031 systemd[1]: libpod-conmon-307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e.scope: Deactivated successfully.
Oct  2 08:37:06 np0005466031 podman[276578]: 2025-10-02 12:37:06.95235502 +0000 UTC m=+0.047477477 container remove 307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:37:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:06.957 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0a341b-3e76-442f-b7ff-8304a6578c49]: (4, ('Thu Oct  2 12:37:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e)\n307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e\nThu Oct  2 12:37:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e)\n307db21c3e0b824fe2a9874ec02812e28f2a94c3d54d6c1fed3fc9cffd69b31e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:06.959 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[361f8d77-7c84-4e6d-aaa8-bb6bd0d502db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:06.960 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:06 np0005466031 kernel: tapf011efa4-00: left promiscuous mode
Oct  2 08:37:06 np0005466031 nova_compute[235803]: 2025-10-02 12:37:06.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:06.978 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3bf434-909e-46c0-8b3a-d9c6faa08099]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:06.998 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4e63f946-0f2d-4411-842a-f111fcc85e8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:06.999 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[49f21a8c-d53f-4fa6-813c-6635eb44fdac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:07.014 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ff06c489-ed3d-4c50-a86b-b4ae6ee6a482]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647645, 'reachable_time': 36100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276596, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:07.016 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:37:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:07.016 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[e68f5c7a-bc28-4226-a156-e69d6a904e53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:07 np0005466031 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct  2 08:37:07 np0005466031 nova_compute[235803]: 2025-10-02 12:37:07.085 2 DEBUG nova.compute.manager [req-4c89d850-c204-439e-ac53-375bbe220684 req-cce998f0-4690-4cc7-bb98-4a80d15a62c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:07 np0005466031 nova_compute[235803]: 2025-10-02 12:37:07.085 2 DEBUG oslo_concurrency.lockutils [req-4c89d850-c204-439e-ac53-375bbe220684 req-cce998f0-4690-4cc7-bb98-4a80d15a62c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:07 np0005466031 nova_compute[235803]: 2025-10-02 12:37:07.086 2 DEBUG oslo_concurrency.lockutils [req-4c89d850-c204-439e-ac53-375bbe220684 req-cce998f0-4690-4cc7-bb98-4a80d15a62c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:07 np0005466031 nova_compute[235803]: 2025-10-02 12:37:07.086 2 DEBUG oslo_concurrency.lockutils [req-4c89d850-c204-439e-ac53-375bbe220684 req-cce998f0-4690-4cc7-bb98-4a80d15a62c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:07 np0005466031 nova_compute[235803]: 2025-10-02 12:37:07.086 2 DEBUG nova.compute.manager [req-4c89d850-c204-439e-ac53-375bbe220684 req-cce998f0-4690-4cc7-bb98-4a80d15a62c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:07 np0005466031 nova_compute[235803]: 2025-10-02 12:37:07.087 2 DEBUG nova.compute.manager [req-4c89d850-c204-439e-ac53-375bbe220684 req-cce998f0-4690-4cc7-bb98-4a80d15a62c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:37:07 np0005466031 nova_compute[235803]: 2025-10-02 12:37:07.322 2 INFO nova.virt.libvirt.driver [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Deleting instance files /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1_del#033[00m
Oct  2 08:37:07 np0005466031 nova_compute[235803]: 2025-10-02 12:37:07.323 2 INFO nova.virt.libvirt.driver [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Deletion of /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1_del complete#033[00m
Oct  2 08:37:07 np0005466031 nova_compute[235803]: 2025-10-02 12:37:07.450 2 INFO nova.compute.manager [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:37:07 np0005466031 nova_compute[235803]: 2025-10-02 12:37:07.450 2 DEBUG oslo.service.loopingcall [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:37:07 np0005466031 nova_compute[235803]: 2025-10-02 12:37:07.451 2 DEBUG nova.compute.manager [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:37:07 np0005466031 nova_compute[235803]: 2025-10-02 12:37:07.451 2 DEBUG nova.network.neutron [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:37:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:07.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:08.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:08 np0005466031 nova_compute[235803]: 2025-10-02 12:37:08.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:08 np0005466031 nova_compute[235803]: 2025-10-02 12:37:08.874 2 DEBUG nova.network.neutron [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:08 np0005466031 nova_compute[235803]: 2025-10-02 12:37:08.963 2 INFO nova.compute.manager [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Took 1.51 seconds to deallocate network for instance.#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.030 2 DEBUG oslo_concurrency.lockutils [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.031 2 DEBUG oslo_concurrency.lockutils [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.092 2 DEBUG nova.compute.manager [req-79207aa3-40d2-49ba-91fb-3d70928649d6 req-66fe1463-3c93-4b4f-b2f5-1add78e31b33 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-deleted-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.119 2 DEBUG oslo_concurrency.processutils [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.220 2 DEBUG nova.compute.manager [req-26f6c086-2e26-460a-824d-8512da180a7f req-4f5d0934-8380-4912-92ba-258af8891607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.221 2 DEBUG oslo_concurrency.lockutils [req-26f6c086-2e26-460a-824d-8512da180a7f req-4f5d0934-8380-4912-92ba-258af8891607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.221 2 DEBUG oslo_concurrency.lockutils [req-26f6c086-2e26-460a-824d-8512da180a7f req-4f5d0934-8380-4912-92ba-258af8891607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.222 2 DEBUG oslo_concurrency.lockutils [req-26f6c086-2e26-460a-824d-8512da180a7f req-4f5d0934-8380-4912-92ba-258af8891607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.222 2 DEBUG nova.compute.manager [req-26f6c086-2e26-460a-824d-8512da180a7f req-4f5d0934-8380-4912-92ba-258af8891607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.222 2 WARNING nova.compute.manager [req-26f6c086-2e26-460a-824d-8512da180a7f req-4f5d0934-8380-4912-92ba-258af8891607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:37:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:37:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2542507121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.551 2 DEBUG oslo_concurrency.processutils [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.557 2 DEBUG nova.compute.provider_tree [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.589 2 DEBUG nova.scheduler.client.report [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:37:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:37:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:09.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.643 2 DEBUG oslo_concurrency.lockutils [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.685 2 INFO nova.scheduler.client.report [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Deleted allocations for instance a7ee799a-27f6-41a6-86dc-694c480fc3a1#033[00m
Oct  2 08:37:09 np0005466031 nova_compute[235803]: 2025-10-02 12:37:09.881 2 DEBUG oslo_concurrency.lockutils [None req-1ac0884c-c8c9-46f6-99cb-d8185794035a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:10.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:11.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:11 np0005466031 nova_compute[235803]: 2025-10-02 12:37:11.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:12.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:12 np0005466031 nova_compute[235803]: 2025-10-02 12:37:12.643 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:12 np0005466031 nova_compute[235803]: 2025-10-02 12:37:12.644 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:12 np0005466031 nova_compute[235803]: 2025-10-02 12:37:12.684 2 DEBUG nova.compute.manager [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:37:12 np0005466031 nova_compute[235803]: 2025-10-02 12:37:12.785 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:12 np0005466031 nova_compute[235803]: 2025-10-02 12:37:12.785 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:12 np0005466031 nova_compute[235803]: 2025-10-02 12:37:12.794 2 DEBUG nova.virt.hardware [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:37:12 np0005466031 nova_compute[235803]: 2025-10-02 12:37:12.794 2 INFO nova.compute.claims [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:37:12 np0005466031 nova_compute[235803]: 2025-10-02 12:37:12.937 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:37:13 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1533087876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.362 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.367 2 DEBUG nova.compute.provider_tree [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.399 2 DEBUG nova.scheduler.client.report [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.428 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.429 2 DEBUG nova.compute.manager [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.489 2 DEBUG nova.compute.manager [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.489 2 DEBUG nova.network.neutron [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.510 2 INFO nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.539 2 DEBUG nova.compute.manager [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:37:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:13.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.650 2 DEBUG nova.compute.manager [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.651 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.652 2 INFO nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Creating image(s)#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.676 2 DEBUG nova.storage.rbd_utils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.701 2 DEBUG nova.storage.rbd_utils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.727 2 DEBUG nova.storage.rbd_utils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.731 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.793 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.794 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.794 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.795 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.817 2 DEBUG nova.storage.rbd_utils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.821 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:13 np0005466031 nova_compute[235803]: 2025-10-02 12:37:13.847 2 DEBUG nova.policy [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fdbe447f49374937a828d6281949a2a4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:37:14 np0005466031 nova_compute[235803]: 2025-10-02 12:37:14.061 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:14 np0005466031 nova_compute[235803]: 2025-10-02 12:37:14.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:14 np0005466031 nova_compute[235803]: 2025-10-02 12:37:14.127 2 DEBUG nova.storage.rbd_utils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] resizing rbd image dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:37:14 np0005466031 nova_compute[235803]: 2025-10-02 12:37:14.225 2 DEBUG nova.objects.instance [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'migration_context' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:14 np0005466031 nova_compute[235803]: 2025-10-02 12:37:14.249 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:37:14 np0005466031 nova_compute[235803]: 2025-10-02 12:37:14.250 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Ensure instance console log exists: /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:37:14 np0005466031 nova_compute[235803]: 2025-10-02 12:37:14.250 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:14 np0005466031 nova_compute[235803]: 2025-10-02 12:37:14.251 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:14 np0005466031 nova_compute[235803]: 2025-10-02 12:37:14.251 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:37:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:14.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:37:14 np0005466031 nova_compute[235803]: 2025-10-02 12:37:14.578 2 DEBUG nova.network.neutron [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Successfully created port: 79d9c544-9d33-410a-a1d5-393ff0908cb1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:15.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:15 np0005466031 nova_compute[235803]: 2025-10-02 12:37:15.786 2 DEBUG nova.network.neutron [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Successfully updated port: 79d9c544-9d33-410a-a1d5-393ff0908cb1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:37:15 np0005466031 nova_compute[235803]: 2025-10-02 12:37:15.818 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:37:15 np0005466031 nova_compute[235803]: 2025-10-02 12:37:15.819 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquired lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:37:15 np0005466031 nova_compute[235803]: 2025-10-02 12:37:15.819 2 DEBUG nova.network.neutron [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:15.950796) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408635950861, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1504, "num_deletes": 261, "total_data_size": 3133579, "memory_usage": 3165584, "flush_reason": "Manual Compaction"}
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408635966018, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 2055198, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42592, "largest_seqno": 44091, "table_properties": {"data_size": 2048960, "index_size": 3377, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14033, "raw_average_key_size": 20, "raw_value_size": 2036037, "raw_average_value_size": 2904, "num_data_blocks": 149, "num_entries": 701, "num_filter_entries": 701, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408521, "oldest_key_time": 1759408521, "file_creation_time": 1759408635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 15253 microseconds, and 4788 cpu microseconds.
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:15.966059) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 2055198 bytes OK
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:15.966078) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:15.966797) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:15.966811) EVENT_LOG_v1 {"time_micros": 1759408635966806, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:15.966830) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 3126522, prev total WAL file size 3126522, number of live WAL files 2.
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:15.967777) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353130' seq:0, type:0; will stop at (end)
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(2007KB)], [81(9043KB)]
Oct  2 08:37:15 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408635967852, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 11315904, "oldest_snapshot_seqno": -1}
Oct  2 08:37:15 np0005466031 nova_compute[235803]: 2025-10-02 12:37:15.996 2 DEBUG nova.compute.manager [req-ecd2291a-36a0-4451-a8a0-e740e3c62eea req-e6e6c958-3165-4378-aa47-bdeea7ebca6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-changed-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:15 np0005466031 nova_compute[235803]: 2025-10-02 12:37:15.997 2 DEBUG nova.compute.manager [req-ecd2291a-36a0-4451-a8a0-e740e3c62eea req-e6e6c958-3165-4378-aa47-bdeea7ebca6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Refreshing instance network info cache due to event network-changed-79d9c544-9d33-410a-a1d5-393ff0908cb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:37:15 np0005466031 nova_compute[235803]: 2025-10-02 12:37:15.997 2 DEBUG oslo_concurrency.lockutils [req-ecd2291a-36a0-4451-a8a0-e740e3c62eea req-e6e6c958-3165-4378-aa47-bdeea7ebca6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 6818 keys, 11169602 bytes, temperature: kUnknown
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408636037863, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 11169602, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11123186, "index_size": 28256, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 175469, "raw_average_key_size": 25, "raw_value_size": 11000437, "raw_average_value_size": 1613, "num_data_blocks": 1128, "num_entries": 6818, "num_filter_entries": 6818, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759408635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:16.038178) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11169602 bytes
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:16.039255) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.4 rd, 159.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.8 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(10.9) write-amplify(5.4) OK, records in: 7357, records dropped: 539 output_compression: NoCompression
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:16.039275) EVENT_LOG_v1 {"time_micros": 1759408636039264, "job": 50, "event": "compaction_finished", "compaction_time_micros": 70105, "compaction_time_cpu_micros": 30154, "output_level": 6, "num_output_files": 1, "total_output_size": 11169602, "num_input_records": 7357, "num_output_records": 6818, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408636039863, "job": 50, "event": "table_file_deletion", "file_number": 83}
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408636042813, "job": 50, "event": "table_file_deletion", "file_number": 81}
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:15.967642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:16.042863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:16.042870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:16.042872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:16.042874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:16.042876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:16 np0005466031 nova_compute[235803]: 2025-10-02 12:37:16.110 2 DEBUG nova.network.neutron [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:37:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:16.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:16 np0005466031 nova_compute[235803]: 2025-10-02 12:37:16.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.171 2 DEBUG nova.network.neutron [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updating instance_info_cache with network_info: [{"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.207 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Releasing lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.208 2 DEBUG nova.compute.manager [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Instance network_info: |[{"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.208 2 DEBUG oslo_concurrency.lockutils [req-ecd2291a-36a0-4451-a8a0-e740e3c62eea req-e6e6c958-3165-4378-aa47-bdeea7ebca6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.208 2 DEBUG nova.network.neutron [req-ecd2291a-36a0-4451-a8a0-e740e3c62eea req-e6e6c958-3165-4378-aa47-bdeea7ebca6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Refreshing network info cache for port 79d9c544-9d33-410a-a1d5-393ff0908cb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.210 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Start _get_guest_xml network_info=[{"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.215 2 WARNING nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.218 2 DEBUG nova.virt.libvirt.host [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.219 2 DEBUG nova.virt.libvirt.host [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.222 2 DEBUG nova.virt.libvirt.host [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.222 2 DEBUG nova.virt.libvirt.host [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.223 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.223 2 DEBUG nova.virt.hardware [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.224 2 DEBUG nova.virt.hardware [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.224 2 DEBUG nova.virt.hardware [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.224 2 DEBUG nova.virt.hardware [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.224 2 DEBUG nova.virt.hardware [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.224 2 DEBUG nova.virt.hardware [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.225 2 DEBUG nova.virt.hardware [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.225 2 DEBUG nova.virt.hardware [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.225 2 DEBUG nova.virt.hardware [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.225 2 DEBUG nova.virt.hardware [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.225 2 DEBUG nova.virt.hardware [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.228 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:17.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/650801237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.674 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.698 2 DEBUG nova.storage.rbd_utils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:17 np0005466031 nova_compute[235803]: 2025-10-02 12:37:17.701 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4088164253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.125 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.127 2 DEBUG nova.virt.libvirt.vif [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1588003337',display_name='tempest-ServerStableDeviceRescueTest-server-1588003337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1588003337',id=98,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-903zs7bi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:13Z,user_data=None,user_id='fdbe447f49374937a828d6281949a2a4',uuid=dc4a4f9d-2d68-4b95-a651-f1817489ccd6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.127 2 DEBUG nova.network.os_vif_util [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.128 2 DEBUG nova.network.os_vif_util [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:82:e0,bridge_name='br-int',has_traffic_filtering=True,id=79d9c544-9d33-410a-a1d5-393ff0908cb1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d9c544-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.129 2 DEBUG nova.objects.instance [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.146 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  <uuid>dc4a4f9d-2d68-4b95-a651-f1817489ccd6</uuid>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  <name>instance-00000062</name>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1588003337</nova:name>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:37:17</nova:creationTime>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <nova:user uuid="fdbe447f49374937a828d6281949a2a4">tempest-ServerStableDeviceRescueTest-2109974660-project-member</nova:user>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <nova:project uuid="a79bb765ab1e4aa18672c9641b6187b9">tempest-ServerStableDeviceRescueTest-2109974660</nova:project>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <nova:port uuid="79d9c544-9d33-410a-a1d5-393ff0908cb1">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <entry name="serial">dc4a4f9d-2d68-4b95-a651-f1817489ccd6</entry>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <entry name="uuid">dc4a4f9d-2d68-4b95-a651-f1817489ccd6</entry>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.config">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:56:82:e0"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <target dev="tap79d9c544-9d"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/console.log" append="off"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:37:18 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:37:18 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:37:18 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:37:18 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.147 2 DEBUG nova.compute.manager [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Preparing to wait for external event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.148 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.148 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.148 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.149 2 DEBUG nova.virt.libvirt.vif [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1588003337',display_name='tempest-ServerStableDeviceRescueTest-server-1588003337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1588003337',id=98,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-903zs7bi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:13Z,user_data=None,user_id='fdbe447f49374937a828d6281949a2a4',uuid=dc4a4f9d-2d68-4b95-a651-f1817489ccd6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.149 2 DEBUG nova.network.os_vif_util [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.150 2 DEBUG nova.network.os_vif_util [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:82:e0,bridge_name='br-int',has_traffic_filtering=True,id=79d9c544-9d33-410a-a1d5-393ff0908cb1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d9c544-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.150 2 DEBUG os_vif [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:82:e0,bridge_name='br-int',has_traffic_filtering=True,id=79d9c544-9d33-410a-a1d5-393ff0908cb1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d9c544-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.151 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.152 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.154 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79d9c544-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.154 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79d9c544-9d, col_values=(('external_ids', {'iface-id': '79d9c544-9d33-410a-a1d5-393ff0908cb1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:82:e0', 'vm-uuid': 'dc4a4f9d-2d68-4b95-a651-f1817489ccd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:18 np0005466031 NetworkManager[44907]: <info>  [1759408638.1566] manager: (tap79d9c544-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.161 2 INFO os_vif [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:82:e0,bridge_name='br-int',has_traffic_filtering=True,id=79d9c544-9d33-410a-a1d5-393ff0908cb1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d9c544-9d')#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.231 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.231 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.232 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No VIF found with MAC fa:16:3e:56:82:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.232 2 INFO nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Using config drive#033[00m
Oct  2 08:37:18 np0005466031 nova_compute[235803]: 2025-10-02 12:37:18.256 2 DEBUG nova.storage.rbd_utils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:18.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:19 np0005466031 nova_compute[235803]: 2025-10-02 12:37:19.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:19 np0005466031 nova_compute[235803]: 2025-10-02 12:37:19.295 2 INFO nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Creating config drive at /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/disk.config#033[00m
Oct  2 08:37:19 np0005466031 nova_compute[235803]: 2025-10-02 12:37:19.302 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3tfmv5u4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:19 np0005466031 nova_compute[235803]: 2025-10-02 12:37:19.435 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3tfmv5u4" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:19 np0005466031 nova_compute[235803]: 2025-10-02 12:37:19.471 2 DEBUG nova.storage.rbd_utils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:19 np0005466031 nova_compute[235803]: 2025-10-02 12:37:19.474 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/disk.config dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:19.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:19 np0005466031 nova_compute[235803]: 2025-10-02 12:37:19.637 2 DEBUG oslo_concurrency.processutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/disk.config dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:19 np0005466031 nova_compute[235803]: 2025-10-02 12:37:19.638 2 INFO nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Deleting local config drive /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/disk.config because it was imported into RBD.#033[00m
Oct  2 08:37:19 np0005466031 kernel: tap79d9c544-9d: entered promiscuous mode
Oct  2 08:37:19 np0005466031 NetworkManager[44907]: <info>  [1759408639.6852] manager: (tap79d9c544-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Oct  2 08:37:19 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:19Z|00340|binding|INFO|Claiming lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 for this chassis.
Oct  2 08:37:19 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:19Z|00341|binding|INFO|79d9c544-9d33-410a-a1d5-393ff0908cb1: Claiming fa:16:3e:56:82:e0 10.100.0.6
Oct  2 08:37:19 np0005466031 nova_compute[235803]: 2025-10-02 12:37:19.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.702 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:82:e0 10.100.0.6'], port_security=['fa:16:3e:56:82:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc4a4f9d-2d68-4b95-a651-f1817489ccd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=79d9c544-9d33-410a-a1d5-393ff0908cb1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.703 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 79d9c544-9d33-410a-a1d5-393ff0908cb1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 bound to our chassis#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.705 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2#033[00m
Oct  2 08:37:19 np0005466031 systemd-udevd[276999]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:37:19 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:19Z|00342|binding|INFO|Setting lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 ovn-installed in OVS
Oct  2 08:37:19 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:19Z|00343|binding|INFO|Setting lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 up in Southbound
Oct  2 08:37:19 np0005466031 nova_compute[235803]: 2025-10-02 12:37:19.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:19 np0005466031 nova_compute[235803]: 2025-10-02 12:37:19.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.720 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d6031b-7aff-4a33-aaf4-4f04dc01ab41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.721 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap494beff4-71 in ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:37:19 np0005466031 systemd-machined[192227]: New machine qemu-39-instance-00000062.
Oct  2 08:37:19 np0005466031 NetworkManager[44907]: <info>  [1759408639.7236] device (tap79d9c544-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:37:19 np0005466031 NetworkManager[44907]: <info>  [1759408639.7249] device (tap79d9c544-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.725 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap494beff4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.725 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[add336c0-215f-4161-b57c-43d89f2fff3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.727 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f34c7cf3-c1c7-4be6-95a4-9e652c34d15b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.736 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[f89186c1-64b2-4566-9ae2-9941c450a07d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466031 systemd[1]: Started Virtual Machine qemu-39-instance-00000062.
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.761 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[955bf548-59ec-4b5c-b781-244ace4e8190]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.789 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9a97dd-370b-4d64-b1d3-7c1e5e73b2f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.793 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[679cedc6-ed14-4528-a214-3cd517c5690b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466031 NetworkManager[44907]: <info>  [1759408639.7942] manager: (tap494beff4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/174)
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.827 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[96572118-bf69-4d8b-9118-0a4440249d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.830 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f35ac342-af05-47f7-a5f0-12da6d787338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466031 NetworkManager[44907]: <info>  [1759408639.8487] device (tap494beff4-70): carrier: link connected
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.852 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[37faeff3-b67b-4519-ae8b-0671ce8d17f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.868 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c189604a-5367-4942-860a-602ad600ecba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649543, 'reachable_time': 15594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277036, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.880 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[899d6820-76a3-44ad-b7ff-44b0a94dcf42]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:4a01'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649543, 'tstamp': 649543}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277037, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.893 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[659a08d5-dff9-4299-84ef-ccbd3a9a9e62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649543, 'reachable_time': 15594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277038, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.928 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a2201cb7-798d-4bc7-8839-7bc4711021c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:19.999 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c772721e-c69c-480e-94b1-6cc57ef0f346]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:20.000 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:20.000 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:20.001 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:20 np0005466031 kernel: tap494beff4-70: entered promiscuous mode
Oct  2 08:37:20 np0005466031 nova_compute[235803]: 2025-10-02 12:37:20.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:20 np0005466031 NetworkManager[44907]: <info>  [1759408640.0045] manager: (tap494beff4-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:20.008 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:20 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:20Z|00344|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct  2 08:37:20 np0005466031 nova_compute[235803]: 2025-10-02 12:37:20.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:20 np0005466031 nova_compute[235803]: 2025-10-02 12:37:20.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:20.024 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:20.025 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5db3f600-2d86-400e-922b-6063bead84c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:20.025 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-494beff4-7fba-4749-8998-3432c91ac5d2
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 494beff4-7fba-4749-8998-3432c91ac5d2
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:37:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:20.026 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'env', 'PROCESS_TAG=haproxy-494beff4-7fba-4749-8998-3432c91ac5d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/494beff4-7fba-4749-8998-3432c91ac5d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:37:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:20.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:20 np0005466031 podman[277112]: 2025-10-02 12:37:20.372988862 +0000 UTC m=+0.043345729 container create ad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 08:37:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:20 np0005466031 systemd[1]: Started libpod-conmon-ad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79.scope.
Oct  2 08:37:20 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:37:20 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/769a5dfc913f13b37358bcd37555b5311b93ea89f506642163e12d37902da261/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:37:20 np0005466031 podman[277112]: 2025-10-02 12:37:20.350908556 +0000 UTC m=+0.021265453 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:37:20 np0005466031 podman[277112]: 2025-10-02 12:37:20.459853511 +0000 UTC m=+0.130210398 container init ad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:37:20 np0005466031 podman[277112]: 2025-10-02 12:37:20.466273536 +0000 UTC m=+0.136630393 container start ad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:37:20 np0005466031 nova_compute[235803]: 2025-10-02 12:37:20.478 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408640.4778006, dc4a4f9d-2d68-4b95-a651-f1817489ccd6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:20 np0005466031 nova_compute[235803]: 2025-10-02 12:37:20.479 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] VM Started (Lifecycle Event)#033[00m
Oct  2 08:37:20 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[277126]: [NOTICE]   (277131) : New worker (277133) forked
Oct  2 08:37:20 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[277126]: [NOTICE]   (277131) : Loading success.
Oct  2 08:37:20 np0005466031 nova_compute[235803]: 2025-10-02 12:37:20.529 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:20 np0005466031 nova_compute[235803]: 2025-10-02 12:37:20.534 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408640.478822, dc4a4f9d-2d68-4b95-a651-f1817489ccd6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:20 np0005466031 nova_compute[235803]: 2025-10-02 12:37:20.534 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:37:20 np0005466031 nova_compute[235803]: 2025-10-02 12:37:20.569 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:20 np0005466031 nova_compute[235803]: 2025-10-02 12:37:20.572 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:37:20 np0005466031 nova_compute[235803]: 2025-10-02 12:37:20.611 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:37:20 np0005466031 nova_compute[235803]: 2025-10-02 12:37:20.924 2 DEBUG nova.network.neutron [req-ecd2291a-36a0-4451-a8a0-e740e3c62eea req-e6e6c958-3165-4378-aa47-bdeea7ebca6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updated VIF entry in instance network info cache for port 79d9c544-9d33-410a-a1d5-393ff0908cb1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:37:20 np0005466031 nova_compute[235803]: 2025-10-02 12:37:20.925 2 DEBUG nova.network.neutron [req-ecd2291a-36a0-4451-a8a0-e740e3c62eea req-e6e6c958-3165-4378-aa47-bdeea7ebca6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updating instance_info_cache with network_info: [{"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.022 2 DEBUG oslo_concurrency.lockutils [req-ecd2291a-36a0-4451-a8a0-e740e3c62eea req-e6e6c958-3165-4378-aa47-bdeea7ebca6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.210 2 DEBUG nova.compute.manager [req-4faffbe5-dd9f-4534-9791-c6291281e843 req-8971bff9-f6a9-421f-8c5f-1b14276537c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.211 2 DEBUG oslo_concurrency.lockutils [req-4faffbe5-dd9f-4534-9791-c6291281e843 req-8971bff9-f6a9-421f-8c5f-1b14276537c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.211 2 DEBUG oslo_concurrency.lockutils [req-4faffbe5-dd9f-4534-9791-c6291281e843 req-8971bff9-f6a9-421f-8c5f-1b14276537c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.212 2 DEBUG oslo_concurrency.lockutils [req-4faffbe5-dd9f-4534-9791-c6291281e843 req-8971bff9-f6a9-421f-8c5f-1b14276537c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.212 2 DEBUG nova.compute.manager [req-4faffbe5-dd9f-4534-9791-c6291281e843 req-8971bff9-f6a9-421f-8c5f-1b14276537c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Processing event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.213 2 DEBUG nova.compute.manager [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.216 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408641.216144, dc4a4f9d-2d68-4b95-a651-f1817489ccd6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.216 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.218 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.221 2 INFO nova.virt.libvirt.driver [-] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Instance spawned successfully.#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.221 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.249 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.254 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.257 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.257 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.257 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.258 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.258 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.258 2 DEBUG nova.virt.libvirt.driver [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.301 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.344 2 INFO nova.compute.manager [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Took 7.69 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.344 2 DEBUG nova.compute.manager [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.436 2 INFO nova.compute.manager [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Took 8.69 seconds to build instance.#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.457 2 DEBUG oslo_concurrency.lockutils [None req-75e19580-f9d0-4c77-873f-fee3d93e91da fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:37:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:21.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.802 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408626.802126, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.803 2 INFO nova.compute.manager [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:37:21 np0005466031 nova_compute[235803]: 2025-10-02 12:37:21.829 2 DEBUG nova.compute.manager [None req-e48ea2dc-b609-40aa-9abf-f55e04460b98 - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:22.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:22 np0005466031 podman[277143]: 2025-10-02 12:37:22.634504104 +0000 UTC m=+0.053946253 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 08:37:22 np0005466031 podman[277144]: 2025-10-02 12:37:22.659602836 +0000 UTC m=+0.080823687 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 08:37:23 np0005466031 nova_compute[235803]: 2025-10-02 12:37:23.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:23 np0005466031 nova_compute[235803]: 2025-10-02 12:37:23.343 2 DEBUG nova.compute.manager [req-ab3742b7-2bfd-43be-9c91-beb60ffaf9ae req-c3a73df6-33a7-48b5-a74f-083103f44282 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:23 np0005466031 nova_compute[235803]: 2025-10-02 12:37:23.344 2 DEBUG oslo_concurrency.lockutils [req-ab3742b7-2bfd-43be-9c91-beb60ffaf9ae req-c3a73df6-33a7-48b5-a74f-083103f44282 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:23 np0005466031 nova_compute[235803]: 2025-10-02 12:37:23.344 2 DEBUG oslo_concurrency.lockutils [req-ab3742b7-2bfd-43be-9c91-beb60ffaf9ae req-c3a73df6-33a7-48b5-a74f-083103f44282 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:23 np0005466031 nova_compute[235803]: 2025-10-02 12:37:23.344 2 DEBUG oslo_concurrency.lockutils [req-ab3742b7-2bfd-43be-9c91-beb60ffaf9ae req-c3a73df6-33a7-48b5-a74f-083103f44282 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:23 np0005466031 nova_compute[235803]: 2025-10-02 12:37:23.344 2 DEBUG nova.compute.manager [req-ab3742b7-2bfd-43be-9c91-beb60ffaf9ae req-c3a73df6-33a7-48b5-a74f-083103f44282 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:23 np0005466031 nova_compute[235803]: 2025-10-02 12:37:23.344 2 WARNING nova.compute.manager [req-ab3742b7-2bfd-43be-9c91-beb60ffaf9ae req-c3a73df6-33a7-48b5-a74f-083103f44282 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:37:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:23.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:24 np0005466031 nova_compute[235803]: 2025-10-02 12:37:24.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:24.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:24 np0005466031 nova_compute[235803]: 2025-10-02 12:37:24.863 2 DEBUG nova.compute.manager [None req-439621c6-eb61-4d53-9fd0-ffa11cb242e8 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:37:24 np0005466031 nova_compute[235803]: 2025-10-02 12:37:24.938 2 INFO nova.compute.manager [None req-439621c6-eb61-4d53-9fd0-ffa11cb242e8 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] instance snapshotting
Oct  2 08:37:25 np0005466031 nova_compute[235803]: 2025-10-02 12:37:25.375 2 INFO nova.virt.libvirt.driver [None req-439621c6-eb61-4d53-9fd0-ffa11cb242e8 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Beginning live snapshot process
Oct  2 08:37:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:25 np0005466031 nova_compute[235803]: 2025-10-02 12:37:25.618 2 DEBUG nova.virt.libvirt.imagebackend [None req-439621c6-eb61-4d53-9fd0-ffa11cb242e8 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct  2 08:37:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:37:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:25.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:37:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:25.845 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:37:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:25.846 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:37:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:25.846 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:37:26 np0005466031 nova_compute[235803]: 2025-10-02 12:37:26.281 2 DEBUG nova.storage.rbd_utils [None req-439621c6-eb61-4d53-9fd0-ffa11cb242e8 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] creating snapshot(eb1a6a86d8684b2aab882efcf48d0bcd) on rbd image(dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  2 08:37:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:37:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:26.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:37:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e252 e252: 3 total, 3 up, 3 in
Oct  2 08:37:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:37:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:27.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:37:27 np0005466031 nova_compute[235803]: 2025-10-02 12:37:27.967 2 DEBUG nova.storage.rbd_utils [None req-439621c6-eb61-4d53-9fd0-ffa11cb242e8 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] cloning vms/dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk@eb1a6a86d8684b2aab882efcf48d0bcd to images/d47d5adb-8963-4735-aac4-73e6ace2936e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct  2 08:37:28 np0005466031 nova_compute[235803]: 2025-10-02 12:37:28.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:28 np0005466031 nova_compute[235803]: 2025-10-02 12:37:28.264 2 DEBUG nova.storage.rbd_utils [None req-439621c6-eb61-4d53-9fd0-ffa11cb242e8 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] flattening images/d47d5adb-8963-4735-aac4-73e6ace2936e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct  2 08:37:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:37:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:28.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:37:28 np0005466031 nova_compute[235803]: 2025-10-02 12:37:28.998 2 DEBUG nova.storage.rbd_utils [None req-439621c6-eb61-4d53-9fd0-ffa11cb242e8 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] removing snapshot(eb1a6a86d8684b2aab882efcf48d0bcd) on rbd image(dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct  2 08:37:29 np0005466031 nova_compute[235803]: 2025-10-02 12:37:29.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:29.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:29 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:29Z|00345|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct  2 08:37:29 np0005466031 nova_compute[235803]: 2025-10-02 12:37:29.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:30 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:30Z|00346|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct  2 08:37:30 np0005466031 nova_compute[235803]: 2025-10-02 12:37:30.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:30.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e253 e253: 3 total, 3 up, 3 in
Oct  2 08:37:31 np0005466031 nova_compute[235803]: 2025-10-02 12:37:31.345 2 DEBUG nova.storage.rbd_utils [None req-439621c6-eb61-4d53-9fd0-ffa11cb242e8 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] creating snapshot(snap) on rbd image(d47d5adb-8963-4735-aac4-73e6ace2936e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  2 08:37:31 np0005466031 podman[277333]: 2025-10-02 12:37:31.617410408 +0000 UTC m=+0.051485563 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 08:37:31 np0005466031 podman[277334]: 2025-10-02 12:37:31.626126728 +0000 UTC m=+0.056916658 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:37:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:31.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e254 e254: 3 total, 3 up, 3 in
Oct  2 08:37:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:32.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:33 np0005466031 nova_compute[235803]: 2025-10-02 12:37:33.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:33 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:33Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:82:e0 10.100.0.6
Oct  2 08:37:33 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:33Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:82:e0 10.100.0.6
Oct  2 08:37:33 np0005466031 nova_compute[235803]: 2025-10-02 12:37:33.563 2 INFO nova.virt.libvirt.driver [None req-439621c6-eb61-4d53-9fd0-ffa11cb242e8 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Snapshot image upload complete
Oct  2 08:37:33 np0005466031 nova_compute[235803]: 2025-10-02 12:37:33.563 2 INFO nova.compute.manager [None req-439621c6-eb61-4d53-9fd0-ffa11cb242e8 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Took 8.62 seconds to snapshot the instance on the hypervisor.
Oct  2 08:37:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:33.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:34 np0005466031 nova_compute[235803]: 2025-10-02 12:37:34.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:34.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:35 np0005466031 nova_compute[235803]: 2025-10-02 12:37:35.280 2 INFO nova.compute.manager [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Rescuing
Oct  2 08:37:35 np0005466031 nova_compute[235803]: 2025-10-02 12:37:35.280 2 DEBUG oslo_concurrency.lockutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:37:35 np0005466031 nova_compute[235803]: 2025-10-02 12:37:35.280 2 DEBUG oslo_concurrency.lockutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquired lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:37:35 np0005466031 nova_compute[235803]: 2025-10-02 12:37:35.280 2 DEBUG nova.network.neutron [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:37:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:35.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.065677) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656065744, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 494, "num_deletes": 251, "total_data_size": 635180, "memory_usage": 645608, "flush_reason": "Manual Compaction"}
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656069981, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 418588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44096, "largest_seqno": 44585, "table_properties": {"data_size": 415917, "index_size": 707, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6515, "raw_average_key_size": 19, "raw_value_size": 410545, "raw_average_value_size": 1200, "num_data_blocks": 31, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408636, "oldest_key_time": 1759408636, "file_creation_time": 1759408656, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 4335 microseconds, and 1698 cpu microseconds.
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.070020) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 418588 bytes OK
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.070037) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.071130) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.071145) EVENT_LOG_v1 {"time_micros": 1759408656071140, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.071162) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 632215, prev total WAL file size 632215, number of live WAL files 2.
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.071620) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(408KB)], [84(10MB)]
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656071660, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 11588190, "oldest_snapshot_seqno": -1}
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 6646 keys, 9628118 bytes, temperature: kUnknown
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656135716, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 9628118, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9584395, "index_size": 26011, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 172622, "raw_average_key_size": 25, "raw_value_size": 9466152, "raw_average_value_size": 1424, "num_data_blocks": 1026, "num_entries": 6646, "num_filter_entries": 6646, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759408656, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.136052) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9628118 bytes
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.138891) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.6 rd, 150.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.7 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(50.7) write-amplify(23.0) OK, records in: 7160, records dropped: 514 output_compression: NoCompression
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.138914) EVENT_LOG_v1 {"time_micros": 1759408656138904, "job": 52, "event": "compaction_finished", "compaction_time_micros": 64173, "compaction_time_cpu_micros": 24386, "output_level": 6, "num_output_files": 1, "total_output_size": 9628118, "num_input_records": 7160, "num_output_records": 6646, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656139359, "job": 52, "event": "table_file_deletion", "file_number": 86}
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656142040, "job": 52, "event": "table_file_deletion", "file_number": 84}
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.071523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.142115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.142120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.142122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.142124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:37:36.142126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:37:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:36.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:37:36 np0005466031 nova_compute[235803]: 2025-10-02 12:37:36.569 2 DEBUG nova.network.neutron [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updating instance_info_cache with network_info: [{"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:37:36 np0005466031 nova_compute[235803]: 2025-10-02 12:37:36.594 2 DEBUG oslo_concurrency.lockutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Releasing lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:37:37 np0005466031 nova_compute[235803]: 2025-10-02 12:37:37.065 2 DEBUG nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct  2 08:37:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:37.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:38.065 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:37:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:38.066 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:37:38 np0005466031 nova_compute[235803]: 2025-10-02 12:37:38.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:38 np0005466031 nova_compute[235803]: 2025-10-02 12:37:38.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:38.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:39 np0005466031 nova_compute[235803]: 2025-10-02 12:37:39.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:39.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:39 np0005466031 kernel: tap79d9c544-9d (unregistering): left promiscuous mode
Oct  2 08:37:39 np0005466031 NetworkManager[44907]: <info>  [1759408659.9718] device (tap79d9c544-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:37:39 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:39Z|00347|binding|INFO|Releasing lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 from this chassis (sb_readonly=0)
Oct  2 08:37:39 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:39Z|00348|binding|INFO|Setting lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 down in Southbound
Oct  2 08:37:39 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:39Z|00349|binding|INFO|Removing iface tap79d9c544-9d ovn-installed in OVS
Oct  2 08:37:39 np0005466031 nova_compute[235803]: 2025-10-02 12:37:39.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:39.990 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:82:e0 10.100.0.6'], port_security=['fa:16:3e:56:82:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc4a4f9d-2d68-4b95-a651-f1817489ccd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=79d9c544-9d33-410a-a1d5-393ff0908cb1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:39.991 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 79d9c544-9d33-410a-a1d5-393ff0908cb1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis#033[00m
Oct  2 08:37:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:39.992 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 494beff4-7fba-4749-8998-3432c91ac5d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:37:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:39.993 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[95a4e3ea-6708-44ee-a69e-cb22966a2d95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:39.994 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 namespace which is not needed anymore#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:40 np0005466031 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct  2 08:37:40 np0005466031 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000062.scope: Consumed 13.226s CPU time.
Oct  2 08:37:40 np0005466031 systemd-machined[192227]: Machine qemu-39-instance-00000062 terminated.
Oct  2 08:37:40 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[277126]: [NOTICE]   (277131) : haproxy version is 2.8.14-c23fe91
Oct  2 08:37:40 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[277126]: [NOTICE]   (277131) : path to executable is /usr/sbin/haproxy
Oct  2 08:37:40 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[277126]: [WARNING]  (277131) : Exiting Master process...
Oct  2 08:37:40 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[277126]: [WARNING]  (277131) : Exiting Master process...
Oct  2 08:37:40 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[277126]: [ALERT]    (277131) : Current worker (277133) exited with code 143 (Terminated)
Oct  2 08:37:40 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[277126]: [WARNING]  (277131) : All workers exited. Exiting... (0)
Oct  2 08:37:40 np0005466031 systemd[1]: libpod-ad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79.scope: Deactivated successfully.
Oct  2 08:37:40 np0005466031 podman[277585]: 2025-10-02 12:37:40.119033092 +0000 UTC m=+0.042899065 container died ad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:37:40 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79-userdata-shm.mount: Deactivated successfully.
Oct  2 08:37:40 np0005466031 systemd[1]: var-lib-containers-storage-overlay-769a5dfc913f13b37358bcd37555b5311b93ea89f506642163e12d37902da261-merged.mount: Deactivated successfully.
Oct  2 08:37:40 np0005466031 podman[277585]: 2025-10-02 12:37:40.160673171 +0000 UTC m=+0.084539144 container cleanup ad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:37:40 np0005466031 systemd[1]: libpod-conmon-ad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79.scope: Deactivated successfully.
Oct  2 08:37:40 np0005466031 podman[277616]: 2025-10-02 12:37:40.218125074 +0000 UTC m=+0.039404415 container remove ad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.223 2 INFO nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:37:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:40.229 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf15bcd-31ca-4d71-84ea-69d90bd991b6]: (4, ('Thu Oct  2 12:37:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 (ad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79)\nad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79\nThu Oct  2 12:37:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 (ad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79)\nad7582a6b43e7232337ac599fa8766a240713bb852abbb198e014f3a4f5ffc79\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.231 2 INFO nova.virt.libvirt.driver [-] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Instance destroyed successfully.#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.231 2 DEBUG nova.objects.instance [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'numa_topology' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:40.233 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[23534c7a-15fa-44dc-bfc2-89f59ae0fc93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:40.236 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:40 np0005466031 kernel: tap494beff4-70: left promiscuous mode
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:40.256 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[84459a8e-9a03-436b-9cf3-1c617f6dd436]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.259 2 INFO nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Attempting a stable device rescue#033[00m
Oct  2 08:37:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:40.295 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[76ec7781-3973-4f84-9b50-1da95972b504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:40.296 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[58961413-8810-4c41-8075-16a914d610aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.304 2 DEBUG nova.compute.manager [req-2e29746c-d86a-4f75-bce7-a82f63687eed req-4bf60e55-1657-4d4f-824a-fc12bfc07855 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-unplugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.305 2 DEBUG oslo_concurrency.lockutils [req-2e29746c-d86a-4f75-bce7-a82f63687eed req-4bf60e55-1657-4d4f-824a-fc12bfc07855 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.305 2 DEBUG oslo_concurrency.lockutils [req-2e29746c-d86a-4f75-bce7-a82f63687eed req-4bf60e55-1657-4d4f-824a-fc12bfc07855 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.305 2 DEBUG oslo_concurrency.lockutils [req-2e29746c-d86a-4f75-bce7-a82f63687eed req-4bf60e55-1657-4d4f-824a-fc12bfc07855 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.305 2 DEBUG nova.compute.manager [req-2e29746c-d86a-4f75-bce7-a82f63687eed req-4bf60e55-1657-4d4f-824a-fc12bfc07855 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-unplugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.306 2 WARNING nova.compute.manager [req-2e29746c-d86a-4f75-bce7-a82f63687eed req-4bf60e55-1657-4d4f-824a-fc12bfc07855 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-unplugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:37:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:40.310 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[97d0e3a4-0b1a-4180-a49f-49f71e50f162]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649537, 'reachable_time': 41104, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277645, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:40.314 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:37:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:40.314 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7f1778-0619-457f-a507-6e52ea29aeba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:40 np0005466031 systemd[1]: run-netns-ovnmeta\x2d494beff4\x2d7fba\x2d4749\x2d8998\x2d3432c91ac5d2.mount: Deactivated successfully.
Oct  2 08:37:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:40.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.837 2 DEBUG nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.842 2 DEBUG nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.843 2 INFO nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Creating image(s)#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.876 2 DEBUG nova.storage.rbd_utils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.882 2 DEBUG nova.objects.instance [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.930 2 DEBUG nova.storage.rbd_utils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.961 2 DEBUG nova.storage.rbd_utils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.964 2 DEBUG oslo_concurrency.lockutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "6f3561e8d0c08a22ac9856d4a6d03c973a83afa3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:40 np0005466031 nova_compute[235803]: 2025-10-02 12:37:40.965 2 DEBUG oslo_concurrency.lockutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "6f3561e8d0c08a22ac9856d4a6d03c973a83afa3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e255 e255: 3 total, 3 up, 3 in
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.326 2 DEBUG nova.virt.libvirt.imagebackend [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/d47d5adb-8963-4735-aac4-73e6ace2936e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/d47d5adb-8963-4735-aac4-73e6ace2936e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.457 2 DEBUG nova.virt.libvirt.imagebackend [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/d47d5adb-8963-4735-aac4-73e6ace2936e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.458 2 DEBUG nova.storage.rbd_utils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] cloning images/d47d5adb-8963-4735-aac4-73e6ace2936e@snap to None/dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.578 2 DEBUG oslo_concurrency.lockutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "6f3561e8d0c08a22ac9856d4a6d03c973a83afa3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.618 2 DEBUG nova.objects.instance [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'migration_context' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.632 2 DEBUG nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.634 2 DEBUG nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Start _get_guest_xml network_info=[{"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "vif_mac": "fa:16:3e:56:82:e0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'd47d5adb-8963-4735-aac4-73e6ace2936e', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.634 2 DEBUG nova.objects.instance [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'resources' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.649 2 WARNING nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:37:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:41.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.654 2 DEBUG nova.virt.libvirt.host [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.655 2 DEBUG nova.virt.libvirt.host [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.657 2 DEBUG nova.virt.libvirt.host [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.658 2 DEBUG nova.virt.libvirt.host [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.659 2 DEBUG nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.660 2 DEBUG nova.virt.hardware [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.661 2 DEBUG nova.virt.hardware [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.661 2 DEBUG nova.virt.hardware [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.662 2 DEBUG nova.virt.hardware [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.662 2 DEBUG nova.virt.hardware [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.662 2 DEBUG nova.virt.hardware [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.663 2 DEBUG nova.virt.hardware [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.663 2 DEBUG nova.virt.hardware [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.663 2 DEBUG nova.virt.hardware [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.664 2 DEBUG nova.virt.hardware [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.664 2 DEBUG nova.virt.hardware [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.665 2 DEBUG nova.objects.instance [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:41 np0005466031 nova_compute[235803]: 2025-10-02 12:37:41.680 2 DEBUG oslo_concurrency.processutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:42 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2454314841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:37:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:37:42 np0005466031 nova_compute[235803]: 2025-10-02 12:37:42.158 2 DEBUG oslo_concurrency.processutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:42 np0005466031 nova_compute[235803]: 2025-10-02 12:37:42.194 2 DEBUG oslo_concurrency.processutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:42.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:42 np0005466031 nova_compute[235803]: 2025-10-02 12:37:42.493 2 DEBUG nova.compute.manager [req-38a4375d-18f1-45d5-9915-949353f3f289 req-4bf769fb-912d-4483-a8b3-87e72186631f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:42 np0005466031 nova_compute[235803]: 2025-10-02 12:37:42.494 2 DEBUG oslo_concurrency.lockutils [req-38a4375d-18f1-45d5-9915-949353f3f289 req-4bf769fb-912d-4483-a8b3-87e72186631f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:42 np0005466031 nova_compute[235803]: 2025-10-02 12:37:42.494 2 DEBUG oslo_concurrency.lockutils [req-38a4375d-18f1-45d5-9915-949353f3f289 req-4bf769fb-912d-4483-a8b3-87e72186631f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:42 np0005466031 nova_compute[235803]: 2025-10-02 12:37:42.494 2 DEBUG oslo_concurrency.lockutils [req-38a4375d-18f1-45d5-9915-949353f3f289 req-4bf769fb-912d-4483-a8b3-87e72186631f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:42 np0005466031 nova_compute[235803]: 2025-10-02 12:37:42.495 2 DEBUG nova.compute.manager [req-38a4375d-18f1-45d5-9915-949353f3f289 req-4bf769fb-912d-4483-a8b3-87e72186631f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:42 np0005466031 nova_compute[235803]: 2025-10-02 12:37:42.495 2 WARNING nova.compute.manager [req-38a4375d-18f1-45d5-9915-949353f3f289 req-4bf769fb-912d-4483-a8b3-87e72186631f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:37:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:42 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/673595144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:42 np0005466031 nova_compute[235803]: 2025-10-02 12:37:42.601 2 DEBUG oslo_concurrency.processutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:42 np0005466031 nova_compute[235803]: 2025-10-02 12:37:42.602 2 DEBUG oslo_concurrency.processutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3144403304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.031 2 DEBUG oslo_concurrency.processutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.033 2 DEBUG nova.virt.libvirt.vif [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1588003337',display_name='tempest-ServerStableDeviceRescueTest-server-1588003337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1588003337',id=98,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-903zs7bi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mode
l='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:33Z,user_data=None,user_id='fdbe447f49374937a828d6281949a2a4',uuid=dc4a4f9d-2d68-4b95-a651-f1817489ccd6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "vif_mac": "fa:16:3e:56:82:e0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.033 2 DEBUG nova.network.os_vif_util [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "vif_mac": "fa:16:3e:56:82:e0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.034 2 DEBUG nova.network.os_vif_util [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:82:e0,bridge_name='br-int',has_traffic_filtering=True,id=79d9c544-9d33-410a-a1d5-393ff0908cb1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d9c544-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.035 2 DEBUG nova.objects.instance [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.055 2 DEBUG nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  <uuid>dc4a4f9d-2d68-4b95-a651-f1817489ccd6</uuid>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  <name>instance-00000062</name>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1588003337</nova:name>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:37:41</nova:creationTime>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <nova:user uuid="fdbe447f49374937a828d6281949a2a4">tempest-ServerStableDeviceRescueTest-2109974660-project-member</nova:user>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <nova:project uuid="a79bb765ab1e4aa18672c9641b6187b9">tempest-ServerStableDeviceRescueTest-2109974660</nova:project>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <nova:port uuid="79d9c544-9d33-410a-a1d5-393ff0908cb1">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <entry name="serial">dc4a4f9d-2d68-4b95-a651-f1817489ccd6</entry>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <entry name="uuid">dc4a4f9d-2d68-4b95-a651-f1817489ccd6</entry>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.config">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.rescue">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <target dev="sdb" bus="scsi"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <boot order="1"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:56:82:e0"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <target dev="tap79d9c544-9d"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/console.log" append="off"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:37:43 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:37:43 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:37:43 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:37:43 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.062 2 INFO nova.virt.libvirt.driver [-] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Instance destroyed successfully.#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.209 2 DEBUG nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.209 2 DEBUG nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.210 2 DEBUG nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.210 2 DEBUG nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No VIF found with MAC fa:16:3e:56:82:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.210 2 INFO nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Using config drive#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.238 2 DEBUG nova.storage.rbd_utils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.256 2 DEBUG nova.objects.instance [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.284 2 DEBUG nova.objects.instance [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'keypairs' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:43.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.673 2 INFO nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Creating config drive at /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/disk.config.rescue#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.677 2 DEBUG oslo_concurrency.processutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpte6tcwcv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.828 2 DEBUG oslo_concurrency.processutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpte6tcwcv" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.861 2 DEBUG nova.storage.rbd_utils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:43 np0005466031 nova_compute[235803]: 2025-10-02 12:37:43.866 2 DEBUG oslo_concurrency.processutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/disk.config.rescue dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:44 np0005466031 nova_compute[235803]: 2025-10-02 12:37:44.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:44.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:44 np0005466031 nova_compute[235803]: 2025-10-02 12:37:44.690 2 DEBUG oslo_concurrency.processutils [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/disk.config.rescue dc4a4f9d-2d68-4b95-a651-f1817489ccd6_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.824s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:44 np0005466031 nova_compute[235803]: 2025-10-02 12:37:44.691 2 INFO nova.virt.libvirt.driver [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Deleting local config drive /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6/disk.config.rescue because it was imported into RBD.#033[00m
Oct  2 08:37:44 np0005466031 kernel: tap79d9c544-9d: entered promiscuous mode
Oct  2 08:37:44 np0005466031 NetworkManager[44907]: <info>  [1759408664.7470] manager: (tap79d9c544-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Oct  2 08:37:44 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:44Z|00350|binding|INFO|Claiming lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 for this chassis.
Oct  2 08:37:44 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:44Z|00351|binding|INFO|79d9c544-9d33-410a-a1d5-393ff0908cb1: Claiming fa:16:3e:56:82:e0 10.100.0.6
Oct  2 08:37:44 np0005466031 nova_compute[235803]: 2025-10-02 12:37:44.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.753 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:82:e0 10.100.0.6'], port_security=['fa:16:3e:56:82:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc4a4f9d-2d68-4b95-a651-f1817489ccd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=79d9c544-9d33-410a-a1d5-393ff0908cb1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.755 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 79d9c544-9d33-410a-a1d5-393ff0908cb1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 bound to our chassis#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.757 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2#033[00m
Oct  2 08:37:44 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:44Z|00352|binding|INFO|Setting lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 ovn-installed in OVS
Oct  2 08:37:44 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:44Z|00353|binding|INFO|Setting lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 up in Southbound
Oct  2 08:37:44 np0005466031 nova_compute[235803]: 2025-10-02 12:37:44.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:44 np0005466031 nova_compute[235803]: 2025-10-02 12:37:44.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.771 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5a78c952-bf6f-4865-831a-d55149dd9055]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.772 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap494beff4-71 in ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.774 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap494beff4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.774 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[adebb7c6-9d1e-44a8-bd3e-bd062c74413f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.776 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[be966500-38bd-4d8a-8dd7-0d9da40f86cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:44 np0005466031 systemd-udevd[277945]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:37:44 np0005466031 systemd-machined[192227]: New machine qemu-40-instance-00000062.
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.786 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[4fec71e9-fac5-4ff4-a137-23409e879af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:44 np0005466031 NetworkManager[44907]: <info>  [1759408664.7906] device (tap79d9c544-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:37:44 np0005466031 NetworkManager[44907]: <info>  [1759408664.7913] device (tap79d9c544-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:37:44 np0005466031 systemd[1]: Started Virtual Machine qemu-40-instance-00000062.
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.799 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a870f4dd-2934-460a-9801-4d9d922ee674]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.827 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a7982b6d-45be-4b7f-9ac6-6d55999431ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:44 np0005466031 NetworkManager[44907]: <info>  [1759408664.8369] manager: (tap494beff4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/177)
Oct  2 08:37:44 np0005466031 systemd-udevd[277950]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.837 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1834901c-c44c-4521-b1c1-93dfe9343abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.870 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[2579075c-5b0f-4183-8886-09d4c3b466eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.873 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a1fa8644-b1cf-41b5-ab61-e9f9ccb7c22a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:44 np0005466031 NetworkManager[44907]: <info>  [1759408664.8963] device (tap494beff4-70): carrier: link connected
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.901 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[af2fa9e3-00a9-43f8-a097-39454aae3f67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.920 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0edd24c5-b2fa-4793-b263-107b10ee51df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652048, 'reachable_time': 41826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277978, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.934 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b2676b25-8f15-4285-8673-9aa869d45918]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:4a01'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652048, 'tstamp': 652048}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277979, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.950 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a09c0f-ad8e-4d00-8df3-ff107dd51bf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652048, 'reachable_time': 41826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277980, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:44.976 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c12fd727-1ba0-4080-927c-03a84befba77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:45 np0005466031 nova_compute[235803]: 2025-10-02 12:37:45.024 2 DEBUG nova.compute.manager [req-45060da9-04cf-4ade-90d7-58d325c0fcfc req-308f8216-b844-4ed2-86ce-4b7d7ff6141c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:45 np0005466031 nova_compute[235803]: 2025-10-02 12:37:45.025 2 DEBUG oslo_concurrency.lockutils [req-45060da9-04cf-4ade-90d7-58d325c0fcfc req-308f8216-b844-4ed2-86ce-4b7d7ff6141c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:45 np0005466031 nova_compute[235803]: 2025-10-02 12:37:45.025 2 DEBUG oslo_concurrency.lockutils [req-45060da9-04cf-4ade-90d7-58d325c0fcfc req-308f8216-b844-4ed2-86ce-4b7d7ff6141c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:45 np0005466031 nova_compute[235803]: 2025-10-02 12:37:45.026 2 DEBUG oslo_concurrency.lockutils [req-45060da9-04cf-4ade-90d7-58d325c0fcfc req-308f8216-b844-4ed2-86ce-4b7d7ff6141c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:45 np0005466031 nova_compute[235803]: 2025-10-02 12:37:45.026 2 DEBUG nova.compute.manager [req-45060da9-04cf-4ade-90d7-58d325c0fcfc req-308f8216-b844-4ed2-86ce-4b7d7ff6141c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:45 np0005466031 nova_compute[235803]: 2025-10-02 12:37:45.026 2 WARNING nova.compute.manager [req-45060da9-04cf-4ade-90d7-58d325c0fcfc req-308f8216-b844-4ed2-86ce-4b7d7ff6141c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:45.039 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a89526a9-4bca-4745-9e54-55e0bf76e6d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:45.040 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:45.040 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:45.041 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:45 np0005466031 nova_compute[235803]: 2025-10-02 12:37:45.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:45 np0005466031 kernel: tap494beff4-70: entered promiscuous mode
Oct  2 08:37:45 np0005466031 NetworkManager[44907]: <info>  [1759408665.0932] manager: (tap494beff4-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:45.096 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:45 np0005466031 nova_compute[235803]: 2025-10-02 12:37:45.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:45 np0005466031 nova_compute[235803]: 2025-10-02 12:37:45.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:45Z|00354|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:45.099 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:45.100 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c4fbf088-26f8-4184-a11f-7d73d07a52f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:45.101 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-494beff4-7fba-4749-8998-3432c91ac5d2
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 494beff4-7fba-4749-8998-3432c91ac5d2
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:37:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:45.102 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'env', 'PROCESS_TAG=haproxy-494beff4-7fba-4749-8998-3432c91ac5d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/494beff4-7fba-4749-8998-3432c91ac5d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:37:45 np0005466031 nova_compute[235803]: 2025-10-02 12:37:45.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:45 np0005466031 podman[278012]: 2025-10-02 12:37:45.46511397 +0000 UTC m=+0.023657082 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:37:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:45.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:45 np0005466031 podman[278012]: 2025-10-02 12:37:45.974129067 +0000 UTC m=+0.532672179 container create a2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:37:46 np0005466031 systemd[1]: Started libpod-conmon-a2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a.scope.
Oct  2 08:37:46 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:37:46 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fd06db5f8443b8742ac5bd8dec198cf44ae2617767dad4d6d806ebf736b4ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:37:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:46.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:46 np0005466031 podman[278012]: 2025-10-02 12:37:46.45487402 +0000 UTC m=+1.013417112 container init a2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:37:46 np0005466031 podman[278012]: 2025-10-02 12:37:46.461576003 +0000 UTC m=+1.020119075 container start a2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:37:46 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278083]: [NOTICE]   (278093) : New worker (278095) forked
Oct  2 08:37:46 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278083]: [NOTICE]   (278093) : Loading success.
Oct  2 08:37:46 np0005466031 nova_compute[235803]: 2025-10-02 12:37:46.911 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for dc4a4f9d-2d68-4b95-a651-f1817489ccd6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:37:46 np0005466031 nova_compute[235803]: 2025-10-02 12:37:46.913 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408666.9113452, dc4a4f9d-2d68-4b95-a651-f1817489ccd6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:46 np0005466031 nova_compute[235803]: 2025-10-02 12:37:46.913 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:37:46 np0005466031 nova_compute[235803]: 2025-10-02 12:37:46.918 2 DEBUG nova.compute.manager [None req-ddc6d48a-8535-48c0-a50a-424794a939cf fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:46 np0005466031 nova_compute[235803]: 2025-10-02 12:37:46.944 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:46 np0005466031 nova_compute[235803]: 2025-10-02 12:37:46.949 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:37:46 np0005466031 nova_compute[235803]: 2025-10-02 12:37:46.979 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Oct  2 08:37:46 np0005466031 nova_compute[235803]: 2025-10-02 12:37:46.980 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408666.9145253, dc4a4f9d-2d68-4b95-a651-f1817489ccd6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:46 np0005466031 nova_compute[235803]: 2025-10-02 12:37:46.980 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] VM Started (Lifecycle Event)#033[00m
Oct  2 08:37:47 np0005466031 nova_compute[235803]: 2025-10-02 12:37:47.006 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:47 np0005466031 nova_compute[235803]: 2025-10-02 12:37:47.010 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:37:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:47.067 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:47 np0005466031 nova_compute[235803]: 2025-10-02 12:37:47.131 2 DEBUG nova.compute.manager [req-e1616a5f-2a24-4235-a43f-166b1ebad014 req-3cc8a872-4916-4126-ab6c-c4315b59287a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:47 np0005466031 nova_compute[235803]: 2025-10-02 12:37:47.132 2 DEBUG oslo_concurrency.lockutils [req-e1616a5f-2a24-4235-a43f-166b1ebad014 req-3cc8a872-4916-4126-ab6c-c4315b59287a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:47 np0005466031 nova_compute[235803]: 2025-10-02 12:37:47.132 2 DEBUG oslo_concurrency.lockutils [req-e1616a5f-2a24-4235-a43f-166b1ebad014 req-3cc8a872-4916-4126-ab6c-c4315b59287a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:47 np0005466031 nova_compute[235803]: 2025-10-02 12:37:47.132 2 DEBUG oslo_concurrency.lockutils [req-e1616a5f-2a24-4235-a43f-166b1ebad014 req-3cc8a872-4916-4126-ab6c-c4315b59287a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:47 np0005466031 nova_compute[235803]: 2025-10-02 12:37:47.132 2 DEBUG nova.compute.manager [req-e1616a5f-2a24-4235-a43f-166b1ebad014 req-3cc8a872-4916-4126-ab6c-c4315b59287a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:47 np0005466031 nova_compute[235803]: 2025-10-02 12:37:47.133 2 WARNING nova.compute.manager [req-e1616a5f-2a24-4235-a43f-166b1ebad014 req-3cc8a872-4916-4126-ab6c-c4315b59287a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state rescued and task_state None.#033[00m
Oct  2 08:37:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:47.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:47 np0005466031 nova_compute[235803]: 2025-10-02 12:37:47.774 2 INFO nova.compute.manager [None req-5d6ab3dc-4572-44c5-a2c6-325eb2e73a55 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Unrescuing#033[00m
Oct  2 08:37:47 np0005466031 nova_compute[235803]: 2025-10-02 12:37:47.775 2 DEBUG oslo_concurrency.lockutils [None req-5d6ab3dc-4572-44c5-a2c6-325eb2e73a55 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:37:47 np0005466031 nova_compute[235803]: 2025-10-02 12:37:47.775 2 DEBUG oslo_concurrency.lockutils [None req-5d6ab3dc-4572-44c5-a2c6-325eb2e73a55 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquired lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:37:47 np0005466031 nova_compute[235803]: 2025-10-02 12:37:47.776 2 DEBUG nova.network.neutron [None req-5d6ab3dc-4572-44c5-a2c6-325eb2e73a55 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:37:48 np0005466031 nova_compute[235803]: 2025-10-02 12:37:48.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:48.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:48 np0005466031 nova_compute[235803]: 2025-10-02 12:37:48.875 2 DEBUG nova.network.neutron [None req-5d6ab3dc-4572-44c5-a2c6-325eb2e73a55 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updating instance_info_cache with network_info: [{"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:48 np0005466031 nova_compute[235803]: 2025-10-02 12:37:48.900 2 DEBUG oslo_concurrency.lockutils [None req-5d6ab3dc-4572-44c5-a2c6-325eb2e73a55 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Releasing lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:37:48 np0005466031 nova_compute[235803]: 2025-10-02 12:37:48.901 2 DEBUG nova.objects.instance [None req-5d6ab3dc-4572-44c5-a2c6-325eb2e73a55 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'flavor' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:48 np0005466031 kernel: tap79d9c544-9d (unregistering): left promiscuous mode
Oct  2 08:37:48 np0005466031 NetworkManager[44907]: <info>  [1759408668.9894] device (tap79d9c544-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:48.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00355|binding|INFO|Releasing lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 from this chassis (sb_readonly=0)
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00356|binding|INFO|Setting lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 down in Southbound
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00357|binding|INFO|Removing iface tap79d9c544-9d ovn-installed in OVS
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.007 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:82:e0 10.100.0.6'], port_security=['fa:16:3e:56:82:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc4a4f9d-2d68-4b95-a651-f1817489ccd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=79d9c544-9d33-410a-a1d5-393ff0908cb1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.008 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 79d9c544-9d33-410a-a1d5-393ff0908cb1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.009 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 494beff4-7fba-4749-8998-3432c91ac5d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.010 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3323924c-f60c-4404-bc16-92117a940cfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.011 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 namespace which is not needed anymore#033[00m
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct  2 08:37:49 np0005466031 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000062.scope: Consumed 3.677s CPU time.
Oct  2 08:37:49 np0005466031 systemd-machined[192227]: Machine qemu-40-instance-00000062 terminated.
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278083]: [NOTICE]   (278093) : haproxy version is 2.8.14-c23fe91
Oct  2 08:37:49 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278083]: [NOTICE]   (278093) : path to executable is /usr/sbin/haproxy
Oct  2 08:37:49 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278083]: [WARNING]  (278093) : Exiting Master process...
Oct  2 08:37:49 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278083]: [ALERT]    (278093) : Current worker (278095) exited with code 143 (Terminated)
Oct  2 08:37:49 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278083]: [WARNING]  (278093) : All workers exited. Exiting... (0)
Oct  2 08:37:49 np0005466031 systemd[1]: libpod-a2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a.scope: Deactivated successfully.
Oct  2 08:37:49 np0005466031 podman[278129]: 2025-10-02 12:37:49.140499455 +0000 UTC m=+0.041681740 container died a2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 08:37:49 np0005466031 kernel: tap79d9c544-9d: entered promiscuous mode
Oct  2 08:37:49 np0005466031 NetworkManager[44907]: <info>  [1759408669.1541] manager: (tap79d9c544-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00358|binding|INFO|Claiming lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 for this chassis.
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00359|binding|INFO|79d9c544-9d33-410a-a1d5-393ff0908cb1: Claiming fa:16:3e:56:82:e0 10.100.0.6
Oct  2 08:37:49 np0005466031 kernel: tap79d9c544-9d (unregistering): left promiscuous mode
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.167 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:82:e0 10.100.0.6'], port_security=['fa:16:3e:56:82:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc4a4f9d-2d68-4b95-a651-f1817489ccd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=79d9c544-9d33-410a-a1d5-393ff0908cb1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:49 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a-userdata-shm.mount: Deactivated successfully.
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.175 2 INFO nova.virt.libvirt.driver [-] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Instance destroyed successfully.#033[00m
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.178 2 DEBUG nova.objects.instance [None req-5d6ab3dc-4572-44c5-a2c6-325eb2e73a55 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'numa_topology' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:49 np0005466031 systemd[1]: var-lib-containers-storage-overlay-63fd06db5f8443b8742ac5bd8dec198cf44ae2617767dad4d6d806ebf736b4ea-merged.mount: Deactivated successfully.
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00360|binding|INFO|Setting lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 ovn-installed in OVS
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00361|binding|INFO|Setting lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 up in Southbound
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00362|binding|INFO|Releasing lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 from this chassis (sb_readonly=1)
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00363|binding|INFO|Removing iface tap79d9c544-9d ovn-installed in OVS
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00364|if_status|INFO|Dropped 37 log messages in last 604 seconds (most recently, 603 seconds ago) due to excessive rate
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00365|if_status|INFO|Not setting lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 down as sb is readonly
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 podman[278129]: 2025-10-02 12:37:49.191121812 +0000 UTC m=+0.092304097 container cleanup a2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00366|binding|INFO|Releasing lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 from this chassis (sb_readonly=0)
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00367|binding|INFO|Setting lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 down in Southbound
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 systemd[1]: libpod-conmon-a2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a.scope: Deactivated successfully.
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.203 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:82:e0 10.100.0.6'], port_security=['fa:16:3e:56:82:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc4a4f9d-2d68-4b95-a651-f1817489ccd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=79d9c544-9d33-410a-a1d5-393ff0908cb1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.237 2 DEBUG nova.compute.manager [req-4c8ede1a-77cf-4865-8d68-ee3e639aebb7 req-849bd507-f538-4bcd-ac77-08eb0a4599c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-unplugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.237 2 DEBUG oslo_concurrency.lockutils [req-4c8ede1a-77cf-4865-8d68-ee3e639aebb7 req-849bd507-f538-4bcd-ac77-08eb0a4599c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.237 2 DEBUG oslo_concurrency.lockutils [req-4c8ede1a-77cf-4865-8d68-ee3e639aebb7 req-849bd507-f538-4bcd-ac77-08eb0a4599c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.238 2 DEBUG oslo_concurrency.lockutils [req-4c8ede1a-77cf-4865-8d68-ee3e639aebb7 req-849bd507-f538-4bcd-ac77-08eb0a4599c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.238 2 DEBUG nova.compute.manager [req-4c8ede1a-77cf-4865-8d68-ee3e639aebb7 req-849bd507-f538-4bcd-ac77-08eb0a4599c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-unplugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.238 2 WARNING nova.compute.manager [req-4c8ede1a-77cf-4865-8d68-ee3e639aebb7 req-849bd507-f538-4bcd-ac77-08eb0a4599c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-unplugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:37:49 np0005466031 podman[278165]: 2025-10-02 12:37:49.252894319 +0000 UTC m=+0.040425514 container remove a2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.259 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7f03f3-9d08-48d9-95a4-a54058fe1dd4]: (4, ('Thu Oct  2 12:37:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 (a2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a)\na2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a\nThu Oct  2 12:37:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 (a2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a)\na2e82eebbf1180cd1446e30b9e69c89d204444a7025571acae66d1b91ef7e49a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.261 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[995c1e66-4caf-47fb-9f53-ada4087e148e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.262 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 kernel: tap494beff4-70: left promiscuous mode
Oct  2 08:37:49 np0005466031 NetworkManager[44907]: <info>  [1759408669.2722] manager: (tap79d9c544-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Oct  2 08:37:49 np0005466031 systemd-udevd[278109]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 kernel: tap79d9c544-9d: entered promiscuous mode
Oct  2 08:37:49 np0005466031 NetworkManager[44907]: <info>  [1759408669.2833] device (tap79d9c544-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:37:49 np0005466031 NetworkManager[44907]: <info>  [1759408669.2843] device (tap79d9c544-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00368|binding|INFO|Claiming lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 for this chassis.
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00369|binding|INFO|79d9c544-9d33-410a-a1d5-393ff0908cb1: Claiming fa:16:3e:56:82:e0 10.100.0.6
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.285 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[61376d28-5263-49a8-b611-8cdeebe943e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.290 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:82:e0 10.100.0.6'], port_security=['fa:16:3e:56:82:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc4a4f9d-2d68-4b95-a651-f1817489ccd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '7', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=79d9c544-9d33-410a-a1d5-393ff0908cb1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:49 np0005466031 systemd-machined[192227]: New machine qemu-41-instance-00000062.
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00370|binding|INFO|Setting lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 ovn-installed in OVS
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00371|binding|INFO|Setting lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 up in Southbound
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 systemd[1]: Started Virtual Machine qemu-41-instance-00000062.
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.317 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3c25bb-de5a-460c-b0bb-d0f957b7b874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.318 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b097199d-c551-4c29-ad6a-fe882dda1534]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.333 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5bcb2102-92f8-42ec-8c2b-c1fe63857c24]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652041, 'reachable_time': 31368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278195, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.335 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.335 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3c05a6-2e25-4586-af14-41d8ce741395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.336 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 79d9c544-9d33-410a-a1d5-393ff0908cb1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis#033[00m
Oct  2 08:37:49 np0005466031 systemd[1]: run-netns-ovnmeta\x2d494beff4\x2d7fba\x2d4749\x2d8998\x2d3432c91ac5d2.mount: Deactivated successfully.
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.337 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.345 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3d1c4c-ac87-4ebd-8ced-77a09e2012f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.346 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap494beff4-71 in ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.347 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap494beff4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.347 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c550bff9-27ac-49ec-824d-c50f4defd8b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.348 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8a548b-5991-483a-937b-6713431bf9f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.360 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[d90e1811-8a07-4cd6-af67-a7d478212695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.383 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[aef2c25e-7e40-49ed-8edc-650bad5042f3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.408 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[87aa3dc8-9eeb-4485-9a10-1cd82a0e8c74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.413 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[953756a0-c38e-4bb8-abc6-90d4c99aa189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 NetworkManager[44907]: <info>  [1759408669.4146] manager: (tap494beff4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/181)
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.442 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a1688b-04c4-47a3-81a8-a248ff968aba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.445 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[59df00cc-0c25-4006-88b1-e2b5caf7f812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 NetworkManager[44907]: <info>  [1759408669.4641] device (tap494beff4-70): carrier: link connected
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.468 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[df9e0c50-9bb1-4366-8044-e981911ed5c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.482 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[432fdaf2-3f3a-40ae-8d6f-ef9f1a607c0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652505, 'reachable_time': 41285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278226, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.494 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb2a926-a4f3-4b58-af45-97f4d368646c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:4a01'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652505, 'tstamp': 652505}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278227, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.505 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f6c693-d672-4bf9-9657-0235a6fe7929]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652505, 'reachable_time': 41285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278228, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.527 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[61a97b40-a554-4d50-9abe-eb9cb6d2d86a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.567 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bbbd7036-51f4-4bf0-9962-29816257a19f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.568 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.568 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.568 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 NetworkManager[44907]: <info>  [1759408669.5706] manager: (tap494beff4-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Oct  2 08:37:49 np0005466031 kernel: tap494beff4-70: entered promiscuous mode
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.573 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:37:49Z|00372|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct  2 08:37:49 np0005466031 nova_compute[235803]: 2025-10-02 12:37:49.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.588 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.589 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ef8322-d29d-40d4-a947-d7eba553e349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.590 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-494beff4-7fba-4749-8998-3432c91ac5d2
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/494beff4-7fba-4749-8998-3432c91ac5d2.pid.haproxy
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 494beff4-7fba-4749-8998-3432c91ac5d2
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:37:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:49.590 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'env', 'PROCESS_TAG=haproxy-494beff4-7fba-4749-8998-3432c91ac5d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/494beff4-7fba-4749-8998-3432c91ac5d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:37:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:49.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:49 np0005466031 podman[278286]: 2025-10-02 12:37:49.932556336 +0000 UTC m=+0.046633483 container create e0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:37:49 np0005466031 systemd[1]: Started libpod-conmon-e0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de.scope.
Oct  2 08:37:49 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:37:49 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac159d90a81fbe34ae90f210ea1190e4448846257e83124647d29e1c6c8c993c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:37:50 np0005466031 podman[278286]: 2025-10-02 12:37:49.907754742 +0000 UTC m=+0.021831919 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:37:50 np0005466031 podman[278286]: 2025-10-02 12:37:50.010640733 +0000 UTC m=+0.124717910 container init e0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:37:50 np0005466031 podman[278286]: 2025-10-02 12:37:50.015485512 +0000 UTC m=+0.129562659 container start e0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:37:50 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278319]: [NOTICE]   (278323) : New worker (278325) forked
Oct  2 08:37:50 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278319]: [NOTICE]   (278323) : Loading success.
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.069 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 79d9c544-9d33-410a-a1d5-393ff0908cb1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.070 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.083 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[28e252c3-cd27-4cf4-8c55-1d0ae980a0a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.108 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc7cff8-d101-4dab-a398-7a9d747ff8cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.111 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[bba8ed17-3437-4973-9a5f-c129c9c5ea02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.133 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[7e862348-6d7e-4a6d-b77c-82501a166ca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.149 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1af07406-ed95-4118-8af8-8e5a0bc247b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652505, 'reachable_time': 41285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278339, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.165 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1e50671d-f8d2-4807-be7a-2d3aa2f1d0c0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652513, 'tstamp': 652513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278340, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652515, 'tstamp': 652515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278340, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.167 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.172 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.172 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.172 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.173 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.173 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 79d9c544-9d33-410a-a1d5-393ff0908cb1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.175 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.189 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6ab081-aa70-443b-9c06-34edfabe1075]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.218 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b02ea769-b131-4b40-838f-5b21e6160a7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.221 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b90e5b59-64c7-4f2c-8560-676efb0c6eb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.247 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[af0c8943-4909-4100-a6e3-b6471b15ecf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.262 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1aeda0e6-8038-488b-9aa8-a7a10c29ad05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 306, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 306, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652505, 'reachable_time': 41285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278346, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.276 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[75596544-e2dd-4754-9285-cc63eee94740]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652513, 'tstamp': 652513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278347, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652515, 'tstamp': 652515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278347, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.278 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.282 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.282 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.283 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:37:50.283 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:37:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:50.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.367 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for dc4a4f9d-2d68-4b95-a651-f1817489ccd6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.367 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408670.366902, dc4a4f9d-2d68-4b95-a651-f1817489ccd6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.367 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.388 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.391 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:37:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.408 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.408 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408670.3690562, dc4a4f9d-2d68-4b95-a651-f1817489ccd6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.408 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] VM Started (Lifecycle Event)#033[00m
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.426 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.428 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.446 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:37:50 np0005466031 nova_compute[235803]: 2025-10-02 12:37:50.865 2 DEBUG nova.compute.manager [None req-5d6ab3dc-4572-44c5-a2c6-325eb2e73a55 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.311 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.312 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.312 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.312 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.313 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.313 2 WARNING nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.314 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.314 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.314 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.315 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.315 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.316 2 WARNING nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.316 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.317 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.317 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.318 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.319 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.319 2 WARNING nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.320 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-unplugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.320 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.321 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.321 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.322 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-unplugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.323 2 WARNING nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-unplugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.323 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.324 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.324 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.325 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.325 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.326 2 WARNING nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.326 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.327 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.327 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.328 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.328 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.329 2 WARNING nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.330 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.330 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.331 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.331 2 DEBUG oslo_concurrency.lockutils [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.333 2 DEBUG nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:51 np0005466031 nova_compute[235803]: 2025-10-02 12:37:51.334 2 WARNING nova.compute.manager [req-099b2298-5705-4950-b355-a1fb5c16e282 req-d4278c8a-92b8-4dd1-aaaf-a5ba23a57614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:37:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:37:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1024383535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:37:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:51.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:37:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:52.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:37:52 np0005466031 nova_compute[235803]: 2025-10-02 12:37:52.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:52 np0005466031 nova_compute[235803]: 2025-10-02 12:37:52.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:52 np0005466031 nova_compute[235803]: 2025-10-02 12:37:52.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:37:52 np0005466031 nova_compute[235803]: 2025-10-02 12:37:52.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:37:52 np0005466031 nova_compute[235803]: 2025-10-02 12:37:52.768 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:37:52 np0005466031 nova_compute[235803]: 2025-10-02 12:37:52.769 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:37:52 np0005466031 nova_compute[235803]: 2025-10-02 12:37:52.769 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:37:52 np0005466031 nova_compute[235803]: 2025-10-02 12:37:52.770 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:53 np0005466031 nova_compute[235803]: 2025-10-02 12:37:53.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:53 np0005466031 podman[278418]: 2025-10-02 12:37:53.623438006 +0000 UTC m=+0.052429019 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:37:53 np0005466031 podman[278419]: 2025-10-02 12:37:53.648905949 +0000 UTC m=+0.077657055 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:37:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:37:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:53.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.022 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updating instance_info_cache with network_info: [{"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.042 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.043 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.043 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.043 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.044 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.044 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.069 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.071 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.071 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.071 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.072 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:37:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:54.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:37:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:37:54 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1993557897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.558 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.627 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.628 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.773 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.775 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4371MB free_disk=20.87631607055664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.776 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.776 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.896 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance dc4a4f9d-2d68-4b95-a651-f1817489ccd6 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.896 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.897 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:37:54 np0005466031 nova_compute[235803]: 2025-10-02 12:37:54.939 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:37:55 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1142469038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:37:55 np0005466031 nova_compute[235803]: 2025-10-02 12:37:55.468 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:55 np0005466031 nova_compute[235803]: 2025-10-02 12:37:55.474 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:37:55 np0005466031 nova_compute[235803]: 2025-10-02 12:37:55.488 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:37:55 np0005466031 nova_compute[235803]: 2025-10-02 12:37:55.515 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:37:55 np0005466031 nova_compute[235803]: 2025-10-02 12:37:55.516 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:55.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:56 np0005466031 nova_compute[235803]: 2025-10-02 12:37:56.108 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:56.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:56 np0005466031 nova_compute[235803]: 2025-10-02 12:37:56.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:56 np0005466031 nova_compute[235803]: 2025-10-02 12:37:56.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:37:57 np0005466031 nova_compute[235803]: 2025-10-02 12:37:57.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:37:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:57.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:37:58 np0005466031 nova_compute[235803]: 2025-10-02 12:37:58.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:58.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:59 np0005466031 nova_compute[235803]: 2025-10-02 12:37:59.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:37:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:59.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:38:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:00.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:38:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:01.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:02.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:02 np0005466031 ovn_controller[132413]: 2025-10-02T12:38:02Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:82:e0 10.100.0.6
Oct  2 08:38:02 np0005466031 ovn_controller[132413]: 2025-10-02T12:38:02Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:82:e0 10.100.0.6
Oct  2 08:38:02 np0005466031 podman[278564]: 2025-10-02 12:38:02.635422935 +0000 UTC m=+0.058834863 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 08:38:02 np0005466031 podman[278565]: 2025-10-02 12:38:02.635491677 +0000 UTC m=+0.054600572 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2)
Oct  2 08:38:03 np0005466031 nova_compute[235803]: 2025-10-02 12:38:03.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:03.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:04 np0005466031 nova_compute[235803]: 2025-10-02 12:38:04.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:04.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:38:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2560162661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:38:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:38:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2560162661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:38:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:05.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:06.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:07.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:08 np0005466031 nova_compute[235803]: 2025-10-02 12:38:08.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:08.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:09 np0005466031 nova_compute[235803]: 2025-10-02 12:38:09.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:09.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e256 e256: 3 total, 3 up, 3 in
Oct  2 08:38:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:10.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:10 np0005466031 nova_compute[235803]: 2025-10-02 12:38:10.630 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:11.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e257 e257: 3 total, 3 up, 3 in
Oct  2 08:38:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:12.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e258 e258: 3 total, 3 up, 3 in
Oct  2 08:38:13 np0005466031 nova_compute[235803]: 2025-10-02 12:38:13.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:13.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:14 np0005466031 nova_compute[235803]: 2025-10-02 12:38:14.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:14.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:15.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:16.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:17.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:18 np0005466031 nova_compute[235803]: 2025-10-02 12:38:18.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:18.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:19 np0005466031 nova_compute[235803]: 2025-10-02 12:38:19.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:19.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:20.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e259 e259: 3 total, 3 up, 3 in
Oct  2 08:38:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:21.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:22.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:23 np0005466031 nova_compute[235803]: 2025-10-02 12:38:23.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:23.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:24 np0005466031 nova_compute[235803]: 2025-10-02 12:38:24.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:24.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:24 np0005466031 podman[278664]: 2025-10-02 12:38:24.648416657 +0000 UTC m=+0.078759667 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:38:24 np0005466031 podman[278665]: 2025-10-02 12:38:24.740447045 +0000 UTC m=+0.166668956 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:38:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:25.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:38:25.846 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:38:25.847 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:38:25.847 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:26.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:27.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:28 np0005466031 nova_compute[235803]: 2025-10-02 12:38:28.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:28.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:29 np0005466031 nova_compute[235803]: 2025-10-02 12:38:29.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:38:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:29.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:38:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:30.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:31.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:32.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:33 np0005466031 nova_compute[235803]: 2025-10-02 12:38:33.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:33 np0005466031 podman[278715]: 2025-10-02 12:38:33.643043186 +0000 UTC m=+0.064818996 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:38:33 np0005466031 podman[278714]: 2025-10-02 12:38:33.6431899 +0000 UTC m=+0.069475020 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible)
Oct  2 08:38:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:33.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:34 np0005466031 nova_compute[235803]: 2025-10-02 12:38:34.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:34.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:35.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:36.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:37.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:38 np0005466031 nova_compute[235803]: 2025-10-02 12:38:38.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:38.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:39 np0005466031 nova_compute[235803]: 2025-10-02 12:38:39.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:39.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:40.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:41.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:42.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:43 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:38:43.278 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:38:43 np0005466031 nova_compute[235803]: 2025-10-02 12:38:43.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:43 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:38:43.280 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:38:43 np0005466031 nova_compute[235803]: 2025-10-02 12:38:43.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:43.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:44 np0005466031 nova_compute[235803]: 2025-10-02 12:38:44.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:44.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:38:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:45.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:38:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:46.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:47.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:48 np0005466031 nova_compute[235803]: 2025-10-02 12:38:48.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:48.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:49 np0005466031 nova_compute[235803]: 2025-10-02 12:38:49.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:38:49.281 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:49.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:50.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:51.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:38:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:52.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:38:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:38:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:38:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:38:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:38:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:38:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:38:53 np0005466031 nova_compute[235803]: 2025-10-02 12:38:53.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:53 np0005466031 nova_compute[235803]: 2025-10-02 12:38:53.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:53 np0005466031 nova_compute[235803]: 2025-10-02 12:38:53.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:53 np0005466031 nova_compute[235803]: 2025-10-02 12:38:53.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:38:53 np0005466031 nova_compute[235803]: 2025-10-02 12:38:53.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:38:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:53.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:54 np0005466031 nova_compute[235803]: 2025-10-02 12:38:54.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:54 np0005466031 nova_compute[235803]: 2025-10-02 12:38:54.155 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:38:54 np0005466031 nova_compute[235803]: 2025-10-02 12:38:54.156 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:38:54 np0005466031 nova_compute[235803]: 2025-10-02 12:38:54.156 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:38:54 np0005466031 nova_compute[235803]: 2025-10-02 12:38:54.156 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:54.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:38:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:38:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:55 np0005466031 nova_compute[235803]: 2025-10-02 12:38:55.585 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:55 np0005466031 nova_compute[235803]: 2025-10-02 12:38:55.585 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:55 np0005466031 podman[279068]: 2025-10-02 12:38:55.618562347 +0000 UTC m=+0.053675486 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:38:55 np0005466031 nova_compute[235803]: 2025-10-02 12:38:55.627 2 DEBUG nova.compute.manager [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:38:55 np0005466031 podman[279069]: 2025-10-02 12:38:55.645506042 +0000 UTC m=+0.077367307 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:38:55 np0005466031 nova_compute[235803]: 2025-10-02 12:38:55.735 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:55 np0005466031 nova_compute[235803]: 2025-10-02 12:38:55.735 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:55 np0005466031 nova_compute[235803]: 2025-10-02 12:38:55.746 2 DEBUG nova.virt.hardware [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:38:55 np0005466031 nova_compute[235803]: 2025-10-02 12:38:55.747 2 INFO nova.compute.claims [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:38:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:55.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:55 np0005466031 nova_compute[235803]: 2025-10-02 12:38:55.980 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:38:56 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1343751637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:38:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:38:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:56.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.425 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.433 2 DEBUG nova.compute.provider_tree [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.455 2 DEBUG nova.scheduler.client.report [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.484 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.484 2 DEBUG nova.compute.manager [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.551 2 DEBUG nova.compute.manager [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.552 2 DEBUG nova.network.neutron [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.610 2 INFO nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.649 2 DEBUG nova.compute.manager [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.724 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updating instance_info_cache with network_info: [{"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.755 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.756 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.757 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.757 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.757 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.757 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.758 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.763 2 DEBUG nova.compute.manager [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.765 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.765 2 INFO nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Creating image(s)#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.791 2 DEBUG nova.storage.rbd_utils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 2a9f318e-50b4-47f5-b281-128055b9d810_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.819 2 DEBUG nova.storage.rbd_utils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 2a9f318e-50b4-47f5-b281-128055b9d810_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.845 2 DEBUG nova.storage.rbd_utils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 2a9f318e-50b4-47f5-b281-128055b9d810_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.849 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.907 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.908 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.908 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.908 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.908 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.937 2 DEBUG nova.policy [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fdbe447f49374937a828d6281949a2a4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.941 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.942 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.943 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.943 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.973 2 DEBUG nova.storage.rbd_utils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 2a9f318e-50b4-47f5-b281-128055b9d810_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:56 np0005466031 nova_compute[235803]: 2025-10-02 12:38:56.976 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2a9f318e-50b4-47f5-b281-128055b9d810_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:38:57 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/668701634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:38:57 np0005466031 nova_compute[235803]: 2025-10-02 12:38:57.374 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:57 np0005466031 nova_compute[235803]: 2025-10-02 12:38:57.451 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:38:57 np0005466031 nova_compute[235803]: 2025-10-02 12:38:57.451 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:38:57 np0005466031 nova_compute[235803]: 2025-10-02 12:38:57.576 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:38:57 np0005466031 nova_compute[235803]: 2025-10-02 12:38:57.577 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4348MB free_disk=20.880149841308594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:38:57 np0005466031 nova_compute[235803]: 2025-10-02 12:38:57.577 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:57 np0005466031 nova_compute[235803]: 2025-10-02 12:38:57.577 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:57 np0005466031 nova_compute[235803]: 2025-10-02 12:38:57.703 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance dc4a4f9d-2d68-4b95-a651-f1817489ccd6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:38:57 np0005466031 nova_compute[235803]: 2025-10-02 12:38:57.703 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 2a9f318e-50b4-47f5-b281-128055b9d810 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:38:57 np0005466031 nova_compute[235803]: 2025-10-02 12:38:57.704 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:38:57 np0005466031 nova_compute[235803]: 2025-10-02 12:38:57.704 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:38:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:57 np0005466031 nova_compute[235803]: 2025-10-02 12:38:57.773 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:57.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:38:58 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/714823305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:38:58 np0005466031 nova_compute[235803]: 2025-10-02 12:38:58.221 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:58 np0005466031 nova_compute[235803]: 2025-10-02 12:38:58.229 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:38:58 np0005466031 nova_compute[235803]: 2025-10-02 12:38:58.269 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:38:58 np0005466031 nova_compute[235803]: 2025-10-02 12:38:58.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:58.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:58 np0005466031 nova_compute[235803]: 2025-10-02 12:38:58.437 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:38:58 np0005466031 nova_compute[235803]: 2025-10-02 12:38:58.437 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:59 np0005466031 nova_compute[235803]: 2025-10-02 12:38:59.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:59 np0005466031 nova_compute[235803]: 2025-10-02 12:38:59.315 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:59 np0005466031 nova_compute[235803]: 2025-10-02 12:38:59.316 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:59 np0005466031 nova_compute[235803]: 2025-10-02 12:38:59.316 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:38:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:38:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:38:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:59.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:38:59 np0005466031 nova_compute[235803]: 2025-10-02 12:38:59.907 2 DEBUG nova.network.neutron [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Successfully created port: 4a059cfc-9263-4a5c-b335-f23e936035a1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:39:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:00.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:01.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:02.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:02 np0005466031 nova_compute[235803]: 2025-10-02 12:39:02.776 2 DEBUG nova.network.neutron [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Successfully updated port: 4a059cfc-9263-4a5c-b335-f23e936035a1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:39:02 np0005466031 nova_compute[235803]: 2025-10-02 12:39:02.798 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:39:02 np0005466031 nova_compute[235803]: 2025-10-02 12:39:02.798 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquired lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:39:02 np0005466031 nova_compute[235803]: 2025-10-02 12:39:02.798 2 DEBUG nova.network.neutron [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:39:03 np0005466031 nova_compute[235803]: 2025-10-02 12:39:03.093 2 DEBUG nova.compute.manager [req-5d048896-b832-4e2b-ac5c-0d73c52e6f61 req-b158576d-402f-4b07-a958-b5e7d4e4659c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-changed-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:03 np0005466031 nova_compute[235803]: 2025-10-02 12:39:03.093 2 DEBUG nova.compute.manager [req-5d048896-b832-4e2b-ac5c-0d73c52e6f61 req-b158576d-402f-4b07-a958-b5e7d4e4659c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Refreshing instance network info cache due to event network-changed-4a059cfc-9263-4a5c-b335-f23e936035a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:39:03 np0005466031 nova_compute[235803]: 2025-10-02 12:39:03.094 2 DEBUG oslo_concurrency.lockutils [req-5d048896-b832-4e2b-ac5c-0d73c52e6f61 req-b158576d-402f-4b07-a958-b5e7d4e4659c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:39:03 np0005466031 nova_compute[235803]: 2025-10-02 12:39:03.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:03 np0005466031 nova_compute[235803]: 2025-10-02 12:39:03.691 2 DEBUG nova.network.neutron [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:39:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:03.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:03 np0005466031 podman[279349]: 2025-10-02 12:39:03.804222 +0000 UTC m=+0.055842888 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:39:03 np0005466031 podman[279350]: 2025-10-02 12:39:03.807383591 +0000 UTC m=+0.055788186 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:39:04 np0005466031 nova_compute[235803]: 2025-10-02 12:39:04.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:04.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:04 np0005466031 nova_compute[235803]: 2025-10-02 12:39:04.624 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2a9f318e-50b4-47f5-b281-128055b9d810_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:04 np0005466031 nova_compute[235803]: 2025-10-02 12:39:04.686 2 DEBUG nova.storage.rbd_utils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] resizing rbd image 2a9f318e-50b4-47f5-b281-128055b9d810_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:39:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:39:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:39:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:39:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3364850339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:39:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:39:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3364850339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:39:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:39:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:05.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:39:06 np0005466031 nova_compute[235803]: 2025-10-02 12:39:06.359 2 DEBUG nova.network.neutron [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Updating instance_info_cache with network_info: [{"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:39:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:06.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:06 np0005466031 nova_compute[235803]: 2025-10-02 12:39:06.464 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Releasing lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:39:06 np0005466031 nova_compute[235803]: 2025-10-02 12:39:06.464 2 DEBUG nova.compute.manager [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Instance network_info: |[{"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:39:06 np0005466031 nova_compute[235803]: 2025-10-02 12:39:06.466 2 DEBUG oslo_concurrency.lockutils [req-5d048896-b832-4e2b-ac5c-0d73c52e6f61 req-b158576d-402f-4b07-a958-b5e7d4e4659c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:39:06 np0005466031 nova_compute[235803]: 2025-10-02 12:39:06.466 2 DEBUG nova.network.neutron [req-5d048896-b832-4e2b-ac5c-0d73c52e6f61 req-b158576d-402f-4b07-a958-b5e7d4e4659c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Refreshing network info cache for port 4a059cfc-9263-4a5c-b335-f23e936035a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:39:06 np0005466031 nova_compute[235803]: 2025-10-02 12:39:06.987 2 DEBUG nova.objects.instance [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'migration_context' on Instance uuid 2a9f318e-50b4-47f5-b281-128055b9d810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.142 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.142 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Ensure instance console log exists: /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.144 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.145 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.145 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.150 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Start _get_guest_xml network_info=[{"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.161 2 WARNING nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.166 2 DEBUG nova.virt.libvirt.host [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.167 2 DEBUG nova.virt.libvirt.host [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.171 2 DEBUG nova.virt.libvirt.host [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.171 2 DEBUG nova.virt.libvirt.host [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.173 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.174 2 DEBUG nova.virt.hardware [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.175 2 DEBUG nova.virt.hardware [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.175 2 DEBUG nova.virt.hardware [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.176 2 DEBUG nova.virt.hardware [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.176 2 DEBUG nova.virt.hardware [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.177 2 DEBUG nova.virt.hardware [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.177 2 DEBUG nova.virt.hardware [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.178 2 DEBUG nova.virt.hardware [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.178 2 DEBUG nova.virt.hardware [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.178 2 DEBUG nova.virt.hardware [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.179 2 DEBUG nova.virt.hardware [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.184 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:39:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2854324697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.633 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.661 2 DEBUG nova.storage.rbd_utils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 2a9f318e-50b4-47f5-b281-128055b9d810_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:07 np0005466031 nova_compute[235803]: 2025-10-02 12:39:07.665 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:07.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:39:08 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4233012682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.077 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.080 2 DEBUG nova.virt.libvirt.vif [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:38:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-183740436',display_name='tempest-ServerStableDeviceRescueTest-server-183740436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-183740436',id=104,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-tjzcygr6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempes
t-ServerStableDeviceRescueTest-2109974660-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:38:56Z,user_data=None,user_id='fdbe447f49374937a828d6281949a2a4',uuid=2a9f318e-50b4-47f5-b281-128055b9d810,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.080 2 DEBUG nova.network.os_vif_util [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.081 2 DEBUG nova.network.os_vif_util [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a059cfc-9263-4a5c-b335-f23e936035a1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a059cfc-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.082 2 DEBUG nova.objects.instance [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2a9f318e-50b4-47f5-b281-128055b9d810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.122 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  <uuid>2a9f318e-50b4-47f5-b281-128055b9d810</uuid>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  <name>instance-00000068</name>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-183740436</nova:name>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:39:07</nova:creationTime>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <nova:user uuid="fdbe447f49374937a828d6281949a2a4">tempest-ServerStableDeviceRescueTest-2109974660-project-member</nova:user>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <nova:project uuid="a79bb765ab1e4aa18672c9641b6187b9">tempest-ServerStableDeviceRescueTest-2109974660</nova:project>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <nova:port uuid="4a059cfc-9263-4a5c-b335-f23e936035a1">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <entry name="serial">2a9f318e-50b4-47f5-b281-128055b9d810</entry>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <entry name="uuid">2a9f318e-50b4-47f5-b281-128055b9d810</entry>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2a9f318e-50b4-47f5-b281-128055b9d810_disk">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2a9f318e-50b4-47f5-b281-128055b9d810_disk.config">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:e4:2a:b6"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <target dev="tap4a059cfc-92"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/console.log" append="off"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:39:08 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:39:08 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:39:08 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:39:08 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.124 2 DEBUG nova.compute.manager [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Preparing to wait for external event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.124 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.125 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.125 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.126 2 DEBUG nova.virt.libvirt.vif [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:38:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-183740436',display_name='tempest-ServerStableDeviceRescueTest-server-183740436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-183740436',id=104,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-tjzcygr6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:38:56Z,user_data=None,user_id='fdbe447f49374937a828d6281949a2a4',uuid=2a9f318e-50b4-47f5-b281-128055b9d810,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.126 2 DEBUG nova.network.os_vif_util [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.126 2 DEBUG nova.network.os_vif_util [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a059cfc-9263-4a5c-b335-f23e936035a1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a059cfc-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.127 2 DEBUG os_vif [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a059cfc-9263-4a5c-b335-f23e936035a1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a059cfc-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a059cfc-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a059cfc-92, col_values=(('external_ids', {'iface-id': '4a059cfc-9263-4a5c-b335-f23e936035a1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:2a:b6', 'vm-uuid': '2a9f318e-50b4-47f5-b281-128055b9d810'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:08 np0005466031 NetworkManager[44907]: <info>  [1759408748.1357] manager: (tap4a059cfc-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.142 2 INFO os_vif [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a059cfc-9263-4a5c-b335-f23e936035a1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a059cfc-92')#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.197 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.198 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.198 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No VIF found with MAC fa:16:3e:e4:2a:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.199 2 INFO nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Using config drive#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.241 2 DEBUG nova.storage.rbd_utils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 2a9f318e-50b4-47f5-b281-128055b9d810_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:08.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.733 2 DEBUG nova.network.neutron [req-5d048896-b832-4e2b-ac5c-0d73c52e6f61 req-b158576d-402f-4b07-a958-b5e7d4e4659c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Updated VIF entry in instance network info cache for port 4a059cfc-9263-4a5c-b335-f23e936035a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.733 2 DEBUG nova.network.neutron [req-5d048896-b832-4e2b-ac5c-0d73c52e6f61 req-b158576d-402f-4b07-a958-b5e7d4e4659c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Updating instance_info_cache with network_info: [{"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:39:08 np0005466031 nova_compute[235803]: 2025-10-02 12:39:08.771 2 DEBUG oslo_concurrency.lockutils [req-5d048896-b832-4e2b-ac5c-0d73c52e6f61 req-b158576d-402f-4b07-a958-b5e7d4e4659c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:39:09 np0005466031 nova_compute[235803]: 2025-10-02 12:39:09.089 2 INFO nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Creating config drive at /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/disk.config#033[00m
Oct  2 08:39:09 np0005466031 nova_compute[235803]: 2025-10-02 12:39:09.095 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0j7me2i2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:09 np0005466031 nova_compute[235803]: 2025-10-02 12:39:09.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:09 np0005466031 nova_compute[235803]: 2025-10-02 12:39:09.229 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0j7me2i2" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:09 np0005466031 nova_compute[235803]: 2025-10-02 12:39:09.264 2 DEBUG nova.storage.rbd_utils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 2a9f318e-50b4-47f5-b281-128055b9d810_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:09 np0005466031 nova_compute[235803]: 2025-10-02 12:39:09.268 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/disk.config 2a9f318e-50b4-47f5-b281-128055b9d810_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:39:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:09.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:39:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:10.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:11 np0005466031 nova_compute[235803]: 2025-10-02 12:39:11.131 2 DEBUG oslo_concurrency.processutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/disk.config 2a9f318e-50b4-47f5-b281-128055b9d810_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.862s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:11 np0005466031 nova_compute[235803]: 2025-10-02 12:39:11.132 2 INFO nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Deleting local config drive /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/disk.config because it was imported into RBD.#033[00m
Oct  2 08:39:11 np0005466031 kernel: tap4a059cfc-92: entered promiscuous mode
Oct  2 08:39:11 np0005466031 NetworkManager[44907]: <info>  [1759408751.1936] manager: (tap4a059cfc-92): new Tun device (/org/freedesktop/NetworkManager/Devices/184)
Oct  2 08:39:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:11Z|00373|binding|INFO|Claiming lport 4a059cfc-9263-4a5c-b335-f23e936035a1 for this chassis.
Oct  2 08:39:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:11Z|00374|binding|INFO|4a059cfc-9263-4a5c-b335-f23e936035a1: Claiming fa:16:3e:e4:2a:b6 10.100.0.5
Oct  2 08:39:11 np0005466031 nova_compute[235803]: 2025-10-02 12:39:11.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:11Z|00375|binding|INFO|Setting lport 4a059cfc-9263-4a5c-b335-f23e936035a1 ovn-installed in OVS
Oct  2 08:39:11 np0005466031 nova_compute[235803]: 2025-10-02 12:39:11.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:11 np0005466031 nova_compute[235803]: 2025-10-02 12:39:11.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:11 np0005466031 systemd-udevd[279626]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:39:11 np0005466031 systemd-machined[192227]: New machine qemu-42-instance-00000068.
Oct  2 08:39:11 np0005466031 systemd[1]: Started Virtual Machine qemu-42-instance-00000068.
Oct  2 08:39:11 np0005466031 NetworkManager[44907]: <info>  [1759408751.2457] device (tap4a059cfc-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:39:11 np0005466031 NetworkManager[44907]: <info>  [1759408751.2463] device (tap4a059cfc-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:39:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:11Z|00376|binding|INFO|Setting lport 4a059cfc-9263-4a5c-b335-f23e936035a1 up in Southbound
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.328 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:2a:b6 10.100.0.5'], port_security=['fa:16:3e:e4:2a:b6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2a9f318e-50b4-47f5-b281-128055b9d810', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=4a059cfc-9263-4a5c-b335-f23e936035a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.329 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 4a059cfc-9263-4a5c-b335-f23e936035a1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 bound to our chassis#033[00m
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.330 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2#033[00m
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.346 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[543ff706-61cc-42e6-a6ca-e999bfb1cc1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.380 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[79e217b7-678a-4b3a-801d-b4c4d536c2ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.383 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[687fbb73-dfd5-40da-a37c-4644a82710e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.408 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d6546d84-6887-4124-acb8-cee19fc0e53f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.424 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[53d934e3-c70d-4690-8289-acf8ab62afbf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652505, 'reachable_time': 38483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279640, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.440 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[236b7377-2c77-445e-a035-3ee449289c4c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652513, 'tstamp': 652513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279641, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652515, 'tstamp': 652515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279641, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.442 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:11 np0005466031 nova_compute[235803]: 2025-10-02 12:39:11.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.445 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.445 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.446 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:11.446 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:39:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:11.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.355 2 DEBUG nova.compute.manager [req-1bf25a57-4879-42b7-9595-06b2a41a52ff req-e125bcf3-7840-4501-9e46-89763b776d67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.355 2 DEBUG oslo_concurrency.lockutils [req-1bf25a57-4879-42b7-9595-06b2a41a52ff req-e125bcf3-7840-4501-9e46-89763b776d67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.356 2 DEBUG oslo_concurrency.lockutils [req-1bf25a57-4879-42b7-9595-06b2a41a52ff req-e125bcf3-7840-4501-9e46-89763b776d67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.356 2 DEBUG oslo_concurrency.lockutils [req-1bf25a57-4879-42b7-9595-06b2a41a52ff req-e125bcf3-7840-4501-9e46-89763b776d67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.356 2 DEBUG nova.compute.manager [req-1bf25a57-4879-42b7-9595-06b2a41a52ff req-e125bcf3-7840-4501-9e46-89763b776d67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Processing event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.433 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408752.4332185, 2a9f318e-50b4-47f5-b281-128055b9d810 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.434 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] VM Started (Lifecycle Event)#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.437 2 DEBUG nova.compute.manager [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:39:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.442 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:39:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:12.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.447 2 INFO nova.virt.libvirt.driver [-] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Instance spawned successfully.#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.447 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.458 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.461 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.470 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.470 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.471 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.471 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.471 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.472 2 DEBUG nova.virt.libvirt.driver [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.480 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.480 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408752.4344726, 2a9f318e-50b4-47f5-b281-128055b9d810 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.480 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.519 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.525 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408752.441028, 2a9f318e-50b4-47f5-b281-128055b9d810 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.526 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.587 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.590 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.623 2 INFO nova.compute.manager [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Took 15.86 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.624 2 DEBUG nova.compute.manager [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.626 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.721 2 INFO nova.compute.manager [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Took 17.01 seconds to build instance.#033[00m
Oct  2 08:39:12 np0005466031 nova_compute[235803]: 2025-10-02 12:39:12.753 2 DEBUG oslo_concurrency.lockutils [None req-f3c853c7-6b21-4fec-9284-ca3ca0693bb1 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:13 np0005466031 nova_compute[235803]: 2025-10-02 12:39:13.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:13.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:14 np0005466031 nova_compute[235803]: 2025-10-02 12:39:14.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:39:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:14.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:39:14 np0005466031 nova_compute[235803]: 2025-10-02 12:39:14.817 2 DEBUG nova.compute.manager [req-58b0ac2b-e81b-42e5-b07a-40ac403421c1 req-393ec89d-291b-48ab-b331-efef5f128b26 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:14 np0005466031 nova_compute[235803]: 2025-10-02 12:39:14.817 2 DEBUG oslo_concurrency.lockutils [req-58b0ac2b-e81b-42e5-b07a-40ac403421c1 req-393ec89d-291b-48ab-b331-efef5f128b26 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:14 np0005466031 nova_compute[235803]: 2025-10-02 12:39:14.818 2 DEBUG oslo_concurrency.lockutils [req-58b0ac2b-e81b-42e5-b07a-40ac403421c1 req-393ec89d-291b-48ab-b331-efef5f128b26 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:14 np0005466031 nova_compute[235803]: 2025-10-02 12:39:14.818 2 DEBUG oslo_concurrency.lockutils [req-58b0ac2b-e81b-42e5-b07a-40ac403421c1 req-393ec89d-291b-48ab-b331-efef5f128b26 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:14 np0005466031 nova_compute[235803]: 2025-10-02 12:39:14.818 2 DEBUG nova.compute.manager [req-58b0ac2b-e81b-42e5-b07a-40ac403421c1 req-393ec89d-291b-48ab-b331-efef5f128b26 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] No waiting events found dispatching network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:39:14 np0005466031 nova_compute[235803]: 2025-10-02 12:39:14.818 2 WARNING nova.compute.manager [req-58b0ac2b-e81b-42e5-b07a-40ac403421c1 req-393ec89d-291b-48ab-b331-efef5f128b26 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received unexpected event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:39:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:15.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:16.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:39:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:17.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:39:18 np0005466031 nova_compute[235803]: 2025-10-02 12:39:18.021 2 DEBUG nova.compute.manager [None req-30ac2484-2f32-4e60-a25e-ef9b7d803969 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:18 np0005466031 nova_compute[235803]: 2025-10-02 12:39:18.110 2 INFO nova.compute.manager [None req-30ac2484-2f32-4e60-a25e-ef9b7d803969 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] instance snapshotting#033[00m
Oct  2 08:39:18 np0005466031 nova_compute[235803]: 2025-10-02 12:39:18.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:18 np0005466031 nova_compute[235803]: 2025-10-02 12:39:18.363 2 INFO nova.virt.libvirt.driver [None req-30ac2484-2f32-4e60-a25e-ef9b7d803969 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Beginning live snapshot process#033[00m
Oct  2 08:39:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:18.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:18 np0005466031 nova_compute[235803]: 2025-10-02 12:39:18.620 2 DEBUG nova.virt.libvirt.imagebackend [None req-30ac2484-2f32-4e60-a25e-ef9b7d803969 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:39:19 np0005466031 nova_compute[235803]: 2025-10-02 12:39:19.051 2 DEBUG nova.storage.rbd_utils [None req-30ac2484-2f32-4e60-a25e-ef9b7d803969 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] creating snapshot(6715cb6069274ee9868168982ececad2) on rbd image(2a9f318e-50b4-47f5-b281-128055b9d810_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:39:19 np0005466031 nova_compute[235803]: 2025-10-02 12:39:19.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:19.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e260 e260: 3 total, 3 up, 3 in
Oct  2 08:39:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:39:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:20.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:39:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:20 np0005466031 nova_compute[235803]: 2025-10-02 12:39:20.464 2 DEBUG nova.storage.rbd_utils [None req-30ac2484-2f32-4e60-a25e-ef9b7d803969 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] cloning vms/2a9f318e-50b4-47f5-b281-128055b9d810_disk@6715cb6069274ee9868168982ececad2 to images/845ab049-b1a7-4b63-95d7-72464046ac90 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:39:21 np0005466031 nova_compute[235803]: 2025-10-02 12:39:21.111 2 DEBUG nova.storage.rbd_utils [None req-30ac2484-2f32-4e60-a25e-ef9b7d803969 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] flattening images/845ab049-b1a7-4b63-95d7-72464046ac90 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:39:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:21.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:21 np0005466031 nova_compute[235803]: 2025-10-02 12:39:21.892 2 DEBUG nova.storage.rbd_utils [None req-30ac2484-2f32-4e60-a25e-ef9b7d803969 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] removing snapshot(6715cb6069274ee9868168982ececad2) on rbd image(2a9f318e-50b4-47f5-b281-128055b9d810_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:39:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:22.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:23 np0005466031 nova_compute[235803]: 2025-10-02 12:39:23.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e261 e261: 3 total, 3 up, 3 in
Oct  2 08:39:23 np0005466031 nova_compute[235803]: 2025-10-02 12:39:23.652 2 DEBUG nova.storage.rbd_utils [None req-30ac2484-2f32-4e60-a25e-ef9b7d803969 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] creating snapshot(snap) on rbd image(845ab049-b1a7-4b63-95d7-72464046ac90) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:39:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:23.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:24 np0005466031 nova_compute[235803]: 2025-10-02 12:39:24.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:24.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e262 e262: 3 total, 3 up, 3 in
Oct  2 08:39:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:25.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:25.848 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:25.848 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:25.848 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:26.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:26 np0005466031 podman[279883]: 2025-10-02 12:39:26.637528319 +0000 UTC m=+0.063420646 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 08:39:26 np0005466031 podman[279884]: 2025-10-02 12:39:26.666012999 +0000 UTC m=+0.091945507 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:39:27 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:27Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:2a:b6 10.100.0.5
Oct  2 08:39:27 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:27Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:2a:b6 10.100.0.5
Oct  2 08:39:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:27.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:28 np0005466031 nova_compute[235803]: 2025-10-02 12:39:28.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:28.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:28 np0005466031 nova_compute[235803]: 2025-10-02 12:39:28.644 2 INFO nova.virt.libvirt.driver [None req-30ac2484-2f32-4e60-a25e-ef9b7d803969 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Snapshot image upload complete#033[00m
Oct  2 08:39:28 np0005466031 nova_compute[235803]: 2025-10-02 12:39:28.645 2 INFO nova.compute.manager [None req-30ac2484-2f32-4e60-a25e-ef9b7d803969 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Took 10.53 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:39:29 np0005466031 nova_compute[235803]: 2025-10-02 12:39:29.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:29.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:30.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 e263: 3 total, 3 up, 3 in
Oct  2 08:39:31 np0005466031 nova_compute[235803]: 2025-10-02 12:39:31.694 2 INFO nova.compute.manager [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Rescuing#033[00m
Oct  2 08:39:31 np0005466031 nova_compute[235803]: 2025-10-02 12:39:31.695 2 DEBUG oslo_concurrency.lockutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:39:31 np0005466031 nova_compute[235803]: 2025-10-02 12:39:31.695 2 DEBUG oslo_concurrency.lockutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquired lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:39:31 np0005466031 nova_compute[235803]: 2025-10-02 12:39:31.695 2 DEBUG nova.network.neutron [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:39:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:31.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:32.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:33 np0005466031 nova_compute[235803]: 2025-10-02 12:39:33.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:33.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:34 np0005466031 nova_compute[235803]: 2025-10-02 12:39:34.099 2 DEBUG nova.network.neutron [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Updating instance_info_cache with network_info: [{"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:39:34 np0005466031 nova_compute[235803]: 2025-10-02 12:39:34.130 2 DEBUG oslo_concurrency.lockutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Releasing lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:39:34 np0005466031 nova_compute[235803]: 2025-10-02 12:39:34.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:34.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:34 np0005466031 podman[279927]: 2025-10-02 12:39:34.645981192 +0000 UTC m=+0.074742222 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct  2 08:39:34 np0005466031 podman[279928]: 2025-10-02 12:39:34.687345012 +0000 UTC m=+0.098618049 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:39:34 np0005466031 nova_compute[235803]: 2025-10-02 12:39:34.745 2 DEBUG nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:39:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:35.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:39:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:36.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:39:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:37.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:38 np0005466031 nova_compute[235803]: 2025-10-02 12:39:38.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:38.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:38 np0005466031 kernel: tap4a059cfc-92 (unregistering): left promiscuous mode
Oct  2 08:39:38 np0005466031 NetworkManager[44907]: <info>  [1759408778.6237] device (tap4a059cfc-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:39:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:38Z|00377|binding|INFO|Releasing lport 4a059cfc-9263-4a5c-b335-f23e936035a1 from this chassis (sb_readonly=0)
Oct  2 08:39:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:38Z|00378|binding|INFO|Setting lport 4a059cfc-9263-4a5c-b335-f23e936035a1 down in Southbound
Oct  2 08:39:38 np0005466031 nova_compute[235803]: 2025-10-02 12:39:38.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:38Z|00379|binding|INFO|Removing iface tap4a059cfc-92 ovn-installed in OVS
Oct  2 08:39:38 np0005466031 nova_compute[235803]: 2025-10-02 12:39:38.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.643 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:2a:b6 10.100.0.5'], port_security=['fa:16:3e:e4:2a:b6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2a9f318e-50b4-47f5-b281-128055b9d810', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=4a059cfc-9263-4a5c-b335-f23e936035a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.644 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 4a059cfc-9263-4a5c-b335-f23e936035a1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis#033[00m
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.646 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2#033[00m
Oct  2 08:39:38 np0005466031 nova_compute[235803]: 2025-10-02 12:39:38.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.665 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a4292e-6680-4a56-8e6b-d62bff843df9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.693 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[1b39a960-8170-4530-9992-0500553a3d76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.696 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[98a60fa7-c9ba-4958-96ea-7d88664d52f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:38 np0005466031 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct  2 08:39:38 np0005466031 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000068.scope: Consumed 14.166s CPU time.
Oct  2 08:39:38 np0005466031 systemd-machined[192227]: Machine qemu-42-instance-00000068 terminated.
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.723 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[9f324aee-aa5e-422b-9c47-230b27833bf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.746 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a71f3d7b-9d54-4824-b273-2d25762357f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652505, 'reachable_time': 38483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279979, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.766 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4cc31c-e8e8-4964-a74b-c80b2da08d59]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652513, 'tstamp': 652513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279980, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652515, 'tstamp': 652515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279980, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.769 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:38 np0005466031 nova_compute[235803]: 2025-10-02 12:39:38.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:38 np0005466031 nova_compute[235803]: 2025-10-02 12:39:38.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.775 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.775 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.776 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:38.776 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:39:38 np0005466031 nova_compute[235803]: 2025-10-02 12:39:38.884 2 INFO nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Instance shutdown successfully after 4 seconds.#033[00m
Oct  2 08:39:38 np0005466031 nova_compute[235803]: 2025-10-02 12:39:38.889 2 INFO nova.virt.libvirt.driver [-] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Instance destroyed successfully.#033[00m
Oct  2 08:39:38 np0005466031 nova_compute[235803]: 2025-10-02 12:39:38.889 2 DEBUG nova.objects.instance [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2a9f318e-50b4-47f5-b281-128055b9d810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:38 np0005466031 nova_compute[235803]: 2025-10-02 12:39:38.993 2 INFO nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Attempting a stable device rescue#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.257 2 DEBUG nova.compute.manager [req-bf5739a4-6ea3-41cc-aa18-e27c76a08f5b req-92ede989-34f5-40b9-a73d-fd4c117a5c6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-unplugged-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.258 2 DEBUG oslo_concurrency.lockutils [req-bf5739a4-6ea3-41cc-aa18-e27c76a08f5b req-92ede989-34f5-40b9-a73d-fd4c117a5c6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.258 2 DEBUG oslo_concurrency.lockutils [req-bf5739a4-6ea3-41cc-aa18-e27c76a08f5b req-92ede989-34f5-40b9-a73d-fd4c117a5c6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.258 2 DEBUG oslo_concurrency.lockutils [req-bf5739a4-6ea3-41cc-aa18-e27c76a08f5b req-92ede989-34f5-40b9-a73d-fd4c117a5c6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.258 2 DEBUG nova.compute.manager [req-bf5739a4-6ea3-41cc-aa18-e27c76a08f5b req-92ede989-34f5-40b9-a73d-fd4c117a5c6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] No waiting events found dispatching network-vif-unplugged-4a059cfc-9263-4a5c-b335-f23e936035a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.258 2 WARNING nova.compute.manager [req-bf5739a4-6ea3-41cc-aa18-e27c76a08f5b req-92ede989-34f5-40b9-a73d-fd4c117a5c6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received unexpected event network-vif-unplugged-4a059cfc-9263-4a5c-b335-f23e936035a1 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.296 2 DEBUG nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.301 2 DEBUG nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.301 2 INFO nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Creating image(s)#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.327 2 DEBUG nova.storage.rbd_utils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 2a9f318e-50b4-47f5-b281-128055b9d810_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.331 2 DEBUG nova.objects.instance [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2a9f318e-50b4-47f5-b281-128055b9d810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.473 2 DEBUG nova.storage.rbd_utils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 2a9f318e-50b4-47f5-b281-128055b9d810_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.499 2 DEBUG nova.storage.rbd_utils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 2a9f318e-50b4-47f5-b281-128055b9d810_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.502 2 DEBUG oslo_concurrency.lockutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "1853e4f5622283eb004e54f21170d433144d8833" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.503 2 DEBUG oslo_concurrency.lockutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "1853e4f5622283eb004e54f21170d433144d8833" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:39.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:39 np0005466031 nova_compute[235803]: 2025-10-02 12:39:39.962 2 DEBUG nova.virt.libvirt.imagebackend [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/845ab049-b1a7-4b63-95d7-72464046ac90/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/845ab049-b1a7-4b63-95d7-72464046ac90/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.019 2 DEBUG nova.virt.libvirt.imagebackend [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/845ab049-b1a7-4b63-95d7-72464046ac90/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.020 2 DEBUG nova.storage.rbd_utils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] cloning images/845ab049-b1a7-4b63-95d7-72464046ac90@snap to None/2a9f318e-50b4-47f5-b281-128055b9d810_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:39:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:40.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.659 2 DEBUG oslo_concurrency.lockutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "1853e4f5622283eb004e54f21170d433144d8833" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.821 2 DEBUG nova.objects.instance [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'migration_context' on Instance uuid 2a9f318e-50b4-47f5-b281-128055b9d810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.836 2 DEBUG nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.839 2 DEBUG nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Start _get_guest_xml network_info=[{"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "vif_mac": "fa:16:3e:e4:2a:b6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '845ab049-b1a7-4b63-95d7-72464046ac90', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.840 2 DEBUG nova.objects.instance [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'resources' on Instance uuid 2a9f318e-50b4-47f5-b281-128055b9d810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.875 2 WARNING nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.881 2 DEBUG nova.virt.libvirt.host [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.881 2 DEBUG nova.virt.libvirt.host [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.883 2 DEBUG nova.virt.libvirt.host [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.884 2 DEBUG nova.virt.libvirt.host [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.885 2 DEBUG nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.885 2 DEBUG nova.virt.hardware [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.886 2 DEBUG nova.virt.hardware [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.886 2 DEBUG nova.virt.hardware [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.886 2 DEBUG nova.virt.hardware [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.887 2 DEBUG nova.virt.hardware [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.887 2 DEBUG nova.virt.hardware [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.887 2 DEBUG nova.virt.hardware [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.887 2 DEBUG nova.virt.hardware [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.888 2 DEBUG nova.virt.hardware [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.888 2 DEBUG nova.virt.hardware [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.888 2 DEBUG nova.virt.hardware [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.888 2 DEBUG nova.objects.instance [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2a9f318e-50b4-47f5-b281-128055b9d810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:40 np0005466031 nova_compute[235803]: 2025-10-02 12:39:40.908 2 DEBUG oslo_concurrency.processutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:39:41 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2123990606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:39:41 np0005466031 nova_compute[235803]: 2025-10-02 12:39:41.366 2 DEBUG oslo_concurrency.processutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:41 np0005466031 nova_compute[235803]: 2025-10-02 12:39:41.399 2 DEBUG oslo_concurrency.processutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:41 np0005466031 nova_compute[235803]: 2025-10-02 12:39:41.507 2 DEBUG nova.compute.manager [req-91ef32a8-913a-4800-9b23-d2b55f296e4b req-683f653d-0ab5-401a-a260-75cca26f1261 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:41 np0005466031 nova_compute[235803]: 2025-10-02 12:39:41.508 2 DEBUG oslo_concurrency.lockutils [req-91ef32a8-913a-4800-9b23-d2b55f296e4b req-683f653d-0ab5-401a-a260-75cca26f1261 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:41 np0005466031 nova_compute[235803]: 2025-10-02 12:39:41.508 2 DEBUG oslo_concurrency.lockutils [req-91ef32a8-913a-4800-9b23-d2b55f296e4b req-683f653d-0ab5-401a-a260-75cca26f1261 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:41 np0005466031 nova_compute[235803]: 2025-10-02 12:39:41.508 2 DEBUG oslo_concurrency.lockutils [req-91ef32a8-913a-4800-9b23-d2b55f296e4b req-683f653d-0ab5-401a-a260-75cca26f1261 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:41 np0005466031 nova_compute[235803]: 2025-10-02 12:39:41.509 2 DEBUG nova.compute.manager [req-91ef32a8-913a-4800-9b23-d2b55f296e4b req-683f653d-0ab5-401a-a260-75cca26f1261 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] No waiting events found dispatching network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:39:41 np0005466031 nova_compute[235803]: 2025-10-02 12:39:41.509 2 WARNING nova.compute.manager [req-91ef32a8-913a-4800-9b23-d2b55f296e4b req-683f653d-0ab5-401a-a260-75cca26f1261 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received unexpected event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:39:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:39:41 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/753406698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:39:41 np0005466031 nova_compute[235803]: 2025-10-02 12:39:41.827 2 DEBUG oslo_concurrency.processutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:41 np0005466031 nova_compute[235803]: 2025-10-02 12:39:41.828 2 DEBUG oslo_concurrency.processutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:41.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:39:42 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2694472616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:39:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:42.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.521 2 DEBUG oslo_concurrency.processutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.523 2 DEBUG nova.virt.libvirt.vif [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:38:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-183740436',display_name='tempest-ServerStableDeviceRescueTest-server-183740436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-183740436',id=104,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:39:12Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-tjzcygr6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:28Z,user_data=None,user_id='fdbe447f49374937a828d6281949a2a4',uuid=2a9f318e-50b4-47f5-b281-128055b9d810,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "vif_mac": "fa:16:3e:e4:2a:b6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.524 2 DEBUG nova.network.os_vif_util [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "vif_mac": "fa:16:3e:e4:2a:b6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.525 2 DEBUG nova.network.os_vif_util [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a059cfc-9263-4a5c-b335-f23e936035a1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a059cfc-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.526 2 DEBUG nova.objects.instance [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2a9f318e-50b4-47f5-b281-128055b9d810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.547 2 DEBUG nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  <uuid>2a9f318e-50b4-47f5-b281-128055b9d810</uuid>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  <name>instance-00000068</name>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-183740436</nova:name>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:39:40</nova:creationTime>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <nova:user uuid="fdbe447f49374937a828d6281949a2a4">tempest-ServerStableDeviceRescueTest-2109974660-project-member</nova:user>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <nova:project uuid="a79bb765ab1e4aa18672c9641b6187b9">tempest-ServerStableDeviceRescueTest-2109974660</nova:project>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <nova:port uuid="4a059cfc-9263-4a5c-b335-f23e936035a1">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <entry name="serial">2a9f318e-50b4-47f5-b281-128055b9d810</entry>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <entry name="uuid">2a9f318e-50b4-47f5-b281-128055b9d810</entry>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2a9f318e-50b4-47f5-b281-128055b9d810_disk">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2a9f318e-50b4-47f5-b281-128055b9d810_disk.config">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2a9f318e-50b4-47f5-b281-128055b9d810_disk.rescue">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <boot order="1"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:e4:2a:b6"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <target dev="tap4a059cfc-92"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/console.log" append="off"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:39:42 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:39:42 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:39:42 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:39:42 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.556 2 INFO nova.virt.libvirt.driver [-] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Instance destroyed successfully.#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.623 2 DEBUG nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.624 2 DEBUG nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.624 2 DEBUG nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.624 2 DEBUG nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] No VIF found with MAC fa:16:3e:e4:2a:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.625 2 INFO nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Using config drive#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.648 2 DEBUG nova.storage.rbd_utils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 2a9f318e-50b4-47f5-b281-128055b9d810_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.670 2 DEBUG nova.objects.instance [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2a9f318e-50b4-47f5-b281-128055b9d810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:42 np0005466031 nova_compute[235803]: 2025-10-02 12:39:42.697 2 DEBUG nova.objects.instance [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'keypairs' on Instance uuid 2a9f318e-50b4-47f5-b281-128055b9d810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:43 np0005466031 nova_compute[235803]: 2025-10-02 12:39:43.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:43.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:44 np0005466031 nova_compute[235803]: 2025-10-02 12:39:44.113 2 INFO nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Creating config drive at /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/disk.config.rescue#033[00m
Oct  2 08:39:44 np0005466031 nova_compute[235803]: 2025-10-02 12:39:44.118 2 DEBUG oslo_concurrency.processutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm0qagxyi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:44 np0005466031 nova_compute[235803]: 2025-10-02 12:39:44.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:44 np0005466031 nova_compute[235803]: 2025-10-02 12:39:44.250 2 DEBUG oslo_concurrency.processutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm0qagxyi" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:44 np0005466031 nova_compute[235803]: 2025-10-02 12:39:44.276 2 DEBUG nova.storage.rbd_utils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] rbd image 2a9f318e-50b4-47f5-b281-128055b9d810_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:44 np0005466031 nova_compute[235803]: 2025-10-02 12:39:44.280 2 DEBUG oslo_concurrency.processutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/disk.config.rescue 2a9f318e-50b4-47f5-b281-128055b9d810_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:44.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:44 np0005466031 nova_compute[235803]: 2025-10-02 12:39:44.775 2 DEBUG oslo_concurrency.processutils [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/disk.config.rescue 2a9f318e-50b4-47f5-b281-128055b9d810_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:44 np0005466031 nova_compute[235803]: 2025-10-02 12:39:44.776 2 INFO nova.virt.libvirt.driver [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Deleting local config drive /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810/disk.config.rescue because it was imported into RBD.#033[00m
Oct  2 08:39:44 np0005466031 kernel: tap4a059cfc-92: entered promiscuous mode
Oct  2 08:39:44 np0005466031 NetworkManager[44907]: <info>  [1759408784.8297] manager: (tap4a059cfc-92): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Oct  2 08:39:44 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:44Z|00380|binding|INFO|Claiming lport 4a059cfc-9263-4a5c-b335-f23e936035a1 for this chassis.
Oct  2 08:39:44 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:44Z|00381|binding|INFO|4a059cfc-9263-4a5c-b335-f23e936035a1: Claiming fa:16:3e:e4:2a:b6 10.100.0.5
Oct  2 08:39:44 np0005466031 nova_compute[235803]: 2025-10-02 12:39:44.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.845 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:2a:b6 10.100.0.5'], port_security=['fa:16:3e:e4:2a:b6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2a9f318e-50b4-47f5-b281-128055b9d810', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=4a059cfc-9263-4a5c-b335-f23e936035a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.846 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 4a059cfc-9263-4a5c-b335-f23e936035a1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 bound to our chassis#033[00m
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.847 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2#033[00m
Oct  2 08:39:44 np0005466031 systemd-udevd[280341]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:39:44 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:44Z|00382|binding|INFO|Setting lport 4a059cfc-9263-4a5c-b335-f23e936035a1 ovn-installed in OVS
Oct  2 08:39:44 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:44Z|00383|binding|INFO|Setting lport 4a059cfc-9263-4a5c-b335-f23e936035a1 up in Southbound
Oct  2 08:39:44 np0005466031 nova_compute[235803]: 2025-10-02 12:39:44.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:44 np0005466031 NetworkManager[44907]: <info>  [1759408784.8650] device (tap4a059cfc-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:39:44 np0005466031 nova_compute[235803]: 2025-10-02 12:39:44.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:44 np0005466031 NetworkManager[44907]: <info>  [1759408784.8659] device (tap4a059cfc-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.866 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a89a3f3c-82e0-42c4-a237-23d359ff796e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:44 np0005466031 systemd-machined[192227]: New machine qemu-43-instance-00000068.
Oct  2 08:39:44 np0005466031 systemd[1]: Started Virtual Machine qemu-43-instance-00000068.
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.895 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ddb49c-9bba-49e4-82e4-7a2b06ab93dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.897 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[13f5f9fa-2817-4fe2-9b71-81f7ab0c2266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.921 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c564ea-73c0-4646-934b-df105a74b0b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.936 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[29fd8d50-d129-4d23-8ce1-30fc6eb9b3a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652505, 'reachable_time': 38483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280353, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.954 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[57238deb-1a4d-4b52-91b7-872778a94144]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652513, 'tstamp': 652513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280357, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652515, 'tstamp': 652515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280357, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.956 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:44 np0005466031 nova_compute[235803]: 2025-10-02 12:39:44.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:44 np0005466031 nova_compute[235803]: 2025-10-02 12:39:44.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.958 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.959 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.959 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:44.959 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:39:45 np0005466031 nova_compute[235803]: 2025-10-02 12:39:45.265 2 DEBUG nova.compute.manager [req-edca4b41-ade8-4804-a2a3-0866a46d4d23 req-de246e37-b5db-4fee-8f86-92cee292827a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:45 np0005466031 nova_compute[235803]: 2025-10-02 12:39:45.265 2 DEBUG oslo_concurrency.lockutils [req-edca4b41-ade8-4804-a2a3-0866a46d4d23 req-de246e37-b5db-4fee-8f86-92cee292827a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:45 np0005466031 nova_compute[235803]: 2025-10-02 12:39:45.266 2 DEBUG oslo_concurrency.lockutils [req-edca4b41-ade8-4804-a2a3-0866a46d4d23 req-de246e37-b5db-4fee-8f86-92cee292827a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:45 np0005466031 nova_compute[235803]: 2025-10-02 12:39:45.266 2 DEBUG oslo_concurrency.lockutils [req-edca4b41-ade8-4804-a2a3-0866a46d4d23 req-de246e37-b5db-4fee-8f86-92cee292827a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:45 np0005466031 nova_compute[235803]: 2025-10-02 12:39:45.266 2 DEBUG nova.compute.manager [req-edca4b41-ade8-4804-a2a3-0866a46d4d23 req-de246e37-b5db-4fee-8f86-92cee292827a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] No waiting events found dispatching network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:39:45 np0005466031 nova_compute[235803]: 2025-10-02 12:39:45.266 2 WARNING nova.compute.manager [req-edca4b41-ade8-4804-a2a3-0866a46d4d23 req-de246e37-b5db-4fee-8f86-92cee292827a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received unexpected event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:39:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:45.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:45 np0005466031 nova_compute[235803]: 2025-10-02 12:39:45.946 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for 2a9f318e-50b4-47f5-b281-128055b9d810 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:39:45 np0005466031 nova_compute[235803]: 2025-10-02 12:39:45.947 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408785.9465194, 2a9f318e-50b4-47f5-b281-128055b9d810 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:39:45 np0005466031 nova_compute[235803]: 2025-10-02 12:39:45.947 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:39:45 np0005466031 nova_compute[235803]: 2025-10-02 12:39:45.951 2 DEBUG nova.compute.manager [None req-ee93ecc0-a740-4748-87f7-8944dea7dda7 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:46 np0005466031 nova_compute[235803]: 2025-10-02 12:39:46.047 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:46 np0005466031 nova_compute[235803]: 2025-10-02 12:39:46.050 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:39:46 np0005466031 nova_compute[235803]: 2025-10-02 12:39:46.163 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Oct  2 08:39:46 np0005466031 nova_compute[235803]: 2025-10-02 12:39:46.164 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408785.9473152, 2a9f318e-50b4-47f5-b281-128055b9d810 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:39:46 np0005466031 nova_compute[235803]: 2025-10-02 12:39:46.164 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] VM Started (Lifecycle Event)#033[00m
Oct  2 08:39:46 np0005466031 nova_compute[235803]: 2025-10-02 12:39:46.232 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:46 np0005466031 nova_compute[235803]: 2025-10-02 12:39:46.235 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:39:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:39:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:46.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:39:47 np0005466031 nova_compute[235803]: 2025-10-02 12:39:47.381 2 DEBUG nova.compute.manager [req-af80b533-f9d5-4d1c-97ab-f54cb0d8a662 req-bc66b6a5-f244-4674-9dd3-d69a562206ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:47 np0005466031 nova_compute[235803]: 2025-10-02 12:39:47.382 2 DEBUG oslo_concurrency.lockutils [req-af80b533-f9d5-4d1c-97ab-f54cb0d8a662 req-bc66b6a5-f244-4674-9dd3-d69a562206ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:47 np0005466031 nova_compute[235803]: 2025-10-02 12:39:47.382 2 DEBUG oslo_concurrency.lockutils [req-af80b533-f9d5-4d1c-97ab-f54cb0d8a662 req-bc66b6a5-f244-4674-9dd3-d69a562206ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:47 np0005466031 nova_compute[235803]: 2025-10-02 12:39:47.382 2 DEBUG oslo_concurrency.lockutils [req-af80b533-f9d5-4d1c-97ab-f54cb0d8a662 req-bc66b6a5-f244-4674-9dd3-d69a562206ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:47 np0005466031 nova_compute[235803]: 2025-10-02 12:39:47.383 2 DEBUG nova.compute.manager [req-af80b533-f9d5-4d1c-97ab-f54cb0d8a662 req-bc66b6a5-f244-4674-9dd3-d69a562206ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] No waiting events found dispatching network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:39:47 np0005466031 nova_compute[235803]: 2025-10-02 12:39:47.383 2 WARNING nova.compute.manager [req-af80b533-f9d5-4d1c-97ab-f54cb0d8a662 req-bc66b6a5-f244-4674-9dd3-d69a562206ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received unexpected event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 for instance with vm_state rescued and task_state None.#033[00m
Oct  2 08:39:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:47.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:48 np0005466031 nova_compute[235803]: 2025-10-02 12:39:48.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:48.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:48 np0005466031 nova_compute[235803]: 2025-10-02 12:39:48.661 2 INFO nova.compute.manager [None req-983a7aae-aa4a-4353-be0c-442cd4f2095d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Unrescuing#033[00m
Oct  2 08:39:48 np0005466031 nova_compute[235803]: 2025-10-02 12:39:48.662 2 DEBUG oslo_concurrency.lockutils [None req-983a7aae-aa4a-4353-be0c-442cd4f2095d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:39:48 np0005466031 nova_compute[235803]: 2025-10-02 12:39:48.662 2 DEBUG oslo_concurrency.lockutils [None req-983a7aae-aa4a-4353-be0c-442cd4f2095d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquired lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:39:48 np0005466031 nova_compute[235803]: 2025-10-02 12:39:48.662 2 DEBUG nova.network.neutron [None req-983a7aae-aa4a-4353-be0c-442cd4f2095d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:39:49 np0005466031 nova_compute[235803]: 2025-10-02 12:39:49.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:49.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:50.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:51 np0005466031 nova_compute[235803]: 2025-10-02 12:39:51.379 2 DEBUG nova.network.neutron [None req-983a7aae-aa4a-4353-be0c-442cd4f2095d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Updating instance_info_cache with network_info: [{"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:39:51 np0005466031 nova_compute[235803]: 2025-10-02 12:39:51.673 2 DEBUG oslo_concurrency.lockutils [None req-983a7aae-aa4a-4353-be0c-442cd4f2095d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Releasing lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:39:51 np0005466031 nova_compute[235803]: 2025-10-02 12:39:51.674 2 DEBUG nova.objects.instance [None req-983a7aae-aa4a-4353-be0c-442cd4f2095d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'flavor' on Instance uuid 2a9f318e-50b4-47f5-b281-128055b9d810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:51.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:51 np0005466031 kernel: tap4a059cfc-92 (unregistering): left promiscuous mode
Oct  2 08:39:51 np0005466031 NetworkManager[44907]: <info>  [1759408791.9725] device (tap4a059cfc-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:39:51 np0005466031 nova_compute[235803]: 2025-10-02 12:39:51.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:51Z|00384|binding|INFO|Releasing lport 4a059cfc-9263-4a5c-b335-f23e936035a1 from this chassis (sb_readonly=0)
Oct  2 08:39:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:51Z|00385|binding|INFO|Setting lport 4a059cfc-9263-4a5c-b335-f23e936035a1 down in Southbound
Oct  2 08:39:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:51Z|00386|binding|INFO|Removing iface tap4a059cfc-92 ovn-installed in OVS
Oct  2 08:39:51 np0005466031 nova_compute[235803]: 2025-10-02 12:39:51.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:51 np0005466031 nova_compute[235803]: 2025-10-02 12:39:51.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.010 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:2a:b6 10.100.0.5'], port_security=['fa:16:3e:e4:2a:b6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2a9f318e-50b4-47f5-b281-128055b9d810', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=4a059cfc-9263-4a5c-b335-f23e936035a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.011 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 4a059cfc-9263-4a5c-b335-f23e936035a1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.013 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2#033[00m
Oct  2 08:39:52 np0005466031 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct  2 08:39:52 np0005466031 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000068.scope: Consumed 7.261s CPU time.
Oct  2 08:39:52 np0005466031 systemd-machined[192227]: Machine qemu-43-instance-00000068 terminated.
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.027 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cad562cb-7cf3-4aac-b673-e12b0266ed6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.055 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b929b8-7501-4aab-a23e-316594c523c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.058 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfb6ab6-8f89-40d4-ba1f-c30c3fb1bdb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.084 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[2db74d50-5156-47b0-a671-a4861ac16995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.101 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c16267a2-ac46-405e-b12d-146d616b475a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652505, 'reachable_time': 38483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280434, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.118 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cec42c13-d870-4686-9afa-d59dca76aef7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652513, 'tstamp': 652513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280435, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652515, 'tstamp': 652515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280435, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.120 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.128 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.128 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.129 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.129 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.156 2 INFO nova.virt.libvirt.driver [-] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Instance destroyed successfully.#033[00m
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.157 2 DEBUG nova.objects.instance [None req-983a7aae-aa4a-4353-be0c-442cd4f2095d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2a9f318e-50b4-47f5-b281-128055b9d810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:52 np0005466031 kernel: tap4a059cfc-92: entered promiscuous mode
Oct  2 08:39:52 np0005466031 systemd-udevd[280425]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:39:52 np0005466031 NetworkManager[44907]: <info>  [1759408792.2811] manager: (tap4a059cfc-92): new Tun device (/org/freedesktop/NetworkManager/Devices/186)
Oct  2 08:39:52 np0005466031 NetworkManager[44907]: <info>  [1759408792.2903] device (tap4a059cfc-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:39:52 np0005466031 NetworkManager[44907]: <info>  [1759408792.2929] device (tap4a059cfc-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:39:52 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:52Z|00387|binding|INFO|Claiming lport 4a059cfc-9263-4a5c-b335-f23e936035a1 for this chassis.
Oct  2 08:39:52 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:52Z|00388|binding|INFO|4a059cfc-9263-4a5c-b335-f23e936035a1: Claiming fa:16:3e:e4:2a:b6 10.100.0.5
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.332 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:2a:b6 10.100.0.5'], port_security=['fa:16:3e:e4:2a:b6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2a9f318e-50b4-47f5-b281-128055b9d810', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=4a059cfc-9263-4a5c-b335-f23e936035a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.333 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 4a059cfc-9263-4a5c-b335-f23e936035a1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 bound to our chassis#033[00m
Oct  2 08:39:52 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:52Z|00389|binding|INFO|Setting lport 4a059cfc-9263-4a5c-b335-f23e936035a1 ovn-installed in OVS
Oct  2 08:39:52 np0005466031 ovn_controller[132413]: 2025-10-02T12:39:52Z|00390|binding|INFO|Setting lport 4a059cfc-9263-4a5c-b335-f23e936035a1 up in Southbound
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.335 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2#033[00m
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:52 np0005466031 systemd-machined[192227]: New machine qemu-44-instance-00000068.
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.352 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf6f488-ba0f-4c9e-bcc1-0c38143ad67c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:52 np0005466031 systemd[1]: Started Virtual Machine qemu-44-instance-00000068.
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.386 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd7e23c-fe70-47f5-ae62-0b9a5fb1c33d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.389 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[6880d9b7-7dca-49e2-9bc8-13593d18ac25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.414 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[232b7fb7-7769-49f9-9d12-c2fe75afbdda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.430 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d07297e6-14c1-4247-b150-e9469865e33c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 1000, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 1000, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652505, 'reachable_time': 38483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280473, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.446 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[73f3c6bf-014a-49f3-abad-8b1e8b30c06f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652513, 'tstamp': 652513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280474, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652515, 'tstamp': 652515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280474, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.447 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.449 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.450 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.450 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:39:52.450 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:39:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:52.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.486 2 DEBUG nova.compute.manager [req-48010c63-0d02-4177-b1e0-4a119e0bd2eb req-c05aff50-be55-4785-ac06-5256d143d30c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-unplugged-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.486 2 DEBUG oslo_concurrency.lockutils [req-48010c63-0d02-4177-b1e0-4a119e0bd2eb req-c05aff50-be55-4785-ac06-5256d143d30c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.487 2 DEBUG oslo_concurrency.lockutils [req-48010c63-0d02-4177-b1e0-4a119e0bd2eb req-c05aff50-be55-4785-ac06-5256d143d30c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.487 2 DEBUG oslo_concurrency.lockutils [req-48010c63-0d02-4177-b1e0-4a119e0bd2eb req-c05aff50-be55-4785-ac06-5256d143d30c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.487 2 DEBUG nova.compute.manager [req-48010c63-0d02-4177-b1e0-4a119e0bd2eb req-c05aff50-be55-4785-ac06-5256d143d30c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] No waiting events found dispatching network-vif-unplugged-4a059cfc-9263-4a5c-b335-f23e936035a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:39:52 np0005466031 nova_compute[235803]: 2025-10-02 12:39:52.487 2 WARNING nova.compute.manager [req-48010c63-0d02-4177-b1e0-4a119e0bd2eb req-c05aff50-be55-4785-ac06-5256d143d30c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received unexpected event network-vif-unplugged-4a059cfc-9263-4a5c-b335-f23e936035a1 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.139 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for 2a9f318e-50b4-47f5-b281-128055b9d810 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.139 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408793.1390796, 2a9f318e-50b4-47f5-b281-128055b9d810 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.140 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.188 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.192 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.220 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.221 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408793.1396198, 2a9f318e-50b4-47f5-b281-128055b9d810 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.221 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] VM Started (Lifecycle Event)#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.259 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.262 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.286 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:53 np0005466031 nova_compute[235803]: 2025-10-02 12:39:53.875 2 DEBUG nova.compute.manager [None req-983a7aae-aa4a-4353-be0c-442cd4f2095d fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:53.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:54 np0005466031 nova_compute[235803]: 2025-10-02 12:39:54.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:54.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:54 np0005466031 nova_compute[235803]: 2025-10-02 12:39:54.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:54 np0005466031 nova_compute[235803]: 2025-10-02 12:39:54.731 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:54 np0005466031 nova_compute[235803]: 2025-10-02 12:39:54.732 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:54 np0005466031 nova_compute[235803]: 2025-10-02 12:39:54.732 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:54 np0005466031 nova_compute[235803]: 2025-10-02 12:39:54.733 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:39:54 np0005466031 nova_compute[235803]: 2025-10-02 12:39:54.733 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:39:55 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2458645433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.165 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.420 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.420 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.423 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.423 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:39:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.583 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.586 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4161MB free_disk=20.78521728515625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.586 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.587 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.820 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance dc4a4f9d-2d68-4b95-a651-f1817489ccd6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.821 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 2a9f318e-50b4-47f5-b281-128055b9d810 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.821 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.821 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.847 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.872 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.873 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:39:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:55.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.896 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.927 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.967 2 DEBUG nova.compute.manager [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.968 2 DEBUG oslo_concurrency.lockutils [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.968 2 DEBUG oslo_concurrency.lockutils [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.968 2 DEBUG oslo_concurrency.lockutils [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.968 2 DEBUG nova.compute.manager [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] No waiting events found dispatching network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.968 2 WARNING nova.compute.manager [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received unexpected event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.969 2 DEBUG nova.compute.manager [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.969 2 DEBUG oslo_concurrency.lockutils [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.969 2 DEBUG oslo_concurrency.lockutils [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.969 2 DEBUG oslo_concurrency.lockutils [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.969 2 DEBUG nova.compute.manager [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] No waiting events found dispatching network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.970 2 WARNING nova.compute.manager [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received unexpected event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.970 2 DEBUG nova.compute.manager [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.970 2 DEBUG oslo_concurrency.lockutils [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.970 2 DEBUG oslo_concurrency.lockutils [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.970 2 DEBUG oslo_concurrency.lockutils [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.971 2 DEBUG nova.compute.manager [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] No waiting events found dispatching network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:39:55 np0005466031 nova_compute[235803]: 2025-10-02 12:39:55.972 2 WARNING nova.compute.manager [req-f29ec1ad-d018-4d33-905e-78390b182f9d req-06df0a12-434a-43d8-9db5-811070660658 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received unexpected event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:39:56 np0005466031 nova_compute[235803]: 2025-10-02 12:39:56.022 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:39:56 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3992094948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:39:56 np0005466031 nova_compute[235803]: 2025-10-02 12:39:56.485 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:56.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:56 np0005466031 nova_compute[235803]: 2025-10-02 12:39:56.490 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:39:56 np0005466031 nova_compute[235803]: 2025-10-02 12:39:56.602 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:39:56 np0005466031 nova_compute[235803]: 2025-10-02 12:39:56.788 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:39:56 np0005466031 nova_compute[235803]: 2025-10-02 12:39:56.788 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:57 np0005466031 podman[280583]: 2025-10-02 12:39:57.632342084 +0000 UTC m=+0.059071491 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:39:57 np0005466031 podman[280584]: 2025-10-02 12:39:57.664485239 +0000 UTC m=+0.091692969 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:39:57 np0005466031 nova_compute[235803]: 2025-10-02 12:39:57.789 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:57 np0005466031 nova_compute[235803]: 2025-10-02 12:39:57.789 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:39:57 np0005466031 nova_compute[235803]: 2025-10-02 12:39:57.789 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:39:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:57.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:58 np0005466031 nova_compute[235803]: 2025-10-02 12:39:58.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:58 np0005466031 nova_compute[235803]: 2025-10-02 12:39:58.389 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:39:58 np0005466031 nova_compute[235803]: 2025-10-02 12:39:58.389 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:39:58 np0005466031 nova_compute[235803]: 2025-10-02 12:39:58.389 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:39:58 np0005466031 nova_compute[235803]: 2025-10-02 12:39:58.390 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:58.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:39:59 np0005466031 nova_compute[235803]: 2025-10-02 12:39:59.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:59 np0005466031 nova_compute[235803]: 2025-10-02 12:39:59.585 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:59 np0005466031 nova_compute[235803]: 2025-10-02 12:39:59.586 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:59 np0005466031 nova_compute[235803]: 2025-10-02 12:39:59.874 2 DEBUG nova.compute.manager [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:39:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:39:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:39:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:59.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.249 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.250 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.255 2 DEBUG nova.virt.hardware [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.255 2 INFO nova.compute.claims [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Claim successful on node compute-2.ctlplane.example.com
Oct  2 08:40:00 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 08:40:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:00.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.523 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updating instance_info_cache with network_info: [{"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.550 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:40:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.643 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.643 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.644 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.644 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.644 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.645 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.645 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.645 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.645 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:40:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:40:00 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3170026607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.975 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:40:00 np0005466031 nova_compute[235803]: 2025-10-02 12:40:00.981 2 DEBUG nova.compute.provider_tree [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:40:01 np0005466031 nova_compute[235803]: 2025-10-02 12:40:01.048 2 DEBUG nova.scheduler.client.report [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:40:01 np0005466031 nova_compute[235803]: 2025-10-02 12:40:01.197 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:40:01 np0005466031 nova_compute[235803]: 2025-10-02 12:40:01.198 2 DEBUG nova.compute.manager [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:40:01 np0005466031 nova_compute[235803]: 2025-10-02 12:40:01.354 2 DEBUG nova.compute.manager [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:40:01 np0005466031 nova_compute[235803]: 2025-10-02 12:40:01.354 2 DEBUG nova.network.neutron [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:40:01 np0005466031 nova_compute[235803]: 2025-10-02 12:40:01.520 2 INFO nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:40:01 np0005466031 nova_compute[235803]: 2025-10-02 12:40:01.584 2 DEBUG nova.compute.manager [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:40:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:01.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:01 np0005466031 nova_compute[235803]: 2025-10-02 12:40:01.971 2 DEBUG nova.compute.manager [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:40:01 np0005466031 nova_compute[235803]: 2025-10-02 12:40:01.972 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:40:01 np0005466031 nova_compute[235803]: 2025-10-02 12:40:01.973 2 INFO nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Creating image(s)
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.004 2 DEBUG nova.storage.rbd_utils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.028 2 DEBUG nova.storage.rbd_utils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.051 2 DEBUG nova.storage.rbd_utils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.055 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.127 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.128 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.128 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.129 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.151 2 DEBUG nova.storage.rbd_utils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.156 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e884c412-2e45-4b28-b840-00335c863f28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:02.342 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:40:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:02.343 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.357 2 DEBUG nova.policy [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '17a0940c9daf48ac8cfa6c3e56d0e39c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '88141e38aa2347299e7ab249431ef68c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:40:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:02.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.556 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e884c412-2e45-4b28-b840-00335c863f28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.623 2 DEBUG nova.storage.rbd_utils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] resizing rbd image e884c412-2e45-4b28-b840-00335c863f28_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.732 2 DEBUG nova.objects.instance [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'migration_context' on Instance uuid e884c412-2e45-4b28-b840-00335c863f28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.796 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.797 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Ensure instance console log exists: /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.797 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.798 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:40:02 np0005466031 nova_compute[235803]: 2025-10-02 12:40:02.798 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:40:03 np0005466031 nova_compute[235803]: 2025-10-02 12:40:03.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:03.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:04 np0005466031 nova_compute[235803]: 2025-10-02 12:40:04.140 2 DEBUG nova.network.neutron [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Successfully created port: c88f61eb-a07d-435d-a75c-39224295dd64 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:40:04 np0005466031 nova_compute[235803]: 2025-10-02 12:40:04.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:04.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:40:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3067416809' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:40:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:40:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3067416809' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:40:05 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:05Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:2a:b6 10.100.0.5
Oct  2 08:40:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:05 np0005466031 podman[280997]: 2025-10-02 12:40:05.628215306 +0000 UTC m=+0.050535276 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:40:05 np0005466031 podman[280998]: 2025-10-02 12:40:05.62907885 +0000 UTC m=+0.050395351 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible)
Oct  2 08:40:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:05.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:40:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:40:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:40:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:06.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:06 np0005466031 nova_compute[235803]: 2025-10-02 12:40:06.836 2 DEBUG nova.network.neutron [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Successfully updated port: c88f61eb-a07d-435d-a75c-39224295dd64 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:40:07 np0005466031 nova_compute[235803]: 2025-10-02 12:40:07.069 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "refresh_cache-e884c412-2e45-4b28-b840-00335c863f28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:40:07 np0005466031 nova_compute[235803]: 2025-10-02 12:40:07.070 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquired lock "refresh_cache-e884c412-2e45-4b28-b840-00335c863f28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:40:07 np0005466031 nova_compute[235803]: 2025-10-02 12:40:07.070 2 DEBUG nova.network.neutron [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:40:07 np0005466031 nova_compute[235803]: 2025-10-02 12:40:07.383 2 DEBUG nova.compute.manager [req-037367d8-1f26-4c87-8033-eadd4808baae req-aba4c55f-ff2f-481c-ad6c-100f2ee6cb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received event network-changed-c88f61eb-a07d-435d-a75c-39224295dd64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:40:07 np0005466031 nova_compute[235803]: 2025-10-02 12:40:07.384 2 DEBUG nova.compute.manager [req-037367d8-1f26-4c87-8033-eadd4808baae req-aba4c55f-ff2f-481c-ad6c-100f2ee6cb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Refreshing instance network info cache due to event network-changed-c88f61eb-a07d-435d-a75c-39224295dd64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:40:07 np0005466031 nova_compute[235803]: 2025-10-02 12:40:07.384 2 DEBUG oslo_concurrency.lockutils [req-037367d8-1f26-4c87-8033-eadd4808baae req-aba4c55f-ff2f-481c-ad6c-100f2ee6cb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e884c412-2e45-4b28-b840-00335c863f28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:40:07 np0005466031 nova_compute[235803]: 2025-10-02 12:40:07.610 2 DEBUG nova.network.neutron [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:40:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:40:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:07.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:40:08 np0005466031 nova_compute[235803]: 2025-10-02 12:40:08.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:08.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:09 np0005466031 nova_compute[235803]: 2025-10-02 12:40:09.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:09.345 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:09.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:10.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.523 2 DEBUG nova.network.neutron [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Updating instance_info_cache with network_info: [{"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:40:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.603 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Releasing lock "refresh_cache-e884c412-2e45-4b28-b840-00335c863f28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.604 2 DEBUG nova.compute.manager [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance network_info: |[{"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.604 2 DEBUG oslo_concurrency.lockutils [req-037367d8-1f26-4c87-8033-eadd4808baae req-aba4c55f-ff2f-481c-ad6c-100f2ee6cb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e884c412-2e45-4b28-b840-00335c863f28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.604 2 DEBUG nova.network.neutron [req-037367d8-1f26-4c87-8033-eadd4808baae req-aba4c55f-ff2f-481c-ad6c-100f2ee6cb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Refreshing network info cache for port c88f61eb-a07d-435d-a75c-39224295dd64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.606 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Start _get_guest_xml network_info=[{"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.611 2 WARNING nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.615 2 DEBUG nova.virt.libvirt.host [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.615 2 DEBUG nova.virt.libvirt.host [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.617 2 DEBUG nova.virt.libvirt.host [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.618 2 DEBUG nova.virt.libvirt.host [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.618 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.619 2 DEBUG nova.virt.hardware [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.619 2 DEBUG nova.virt.hardware [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.619 2 DEBUG nova.virt.hardware [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.619 2 DEBUG nova.virt.hardware [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.619 2 DEBUG nova.virt.hardware [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.620 2 DEBUG nova.virt.hardware [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.620 2 DEBUG nova.virt.hardware [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.620 2 DEBUG nova.virt.hardware [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.620 2 DEBUG nova.virt.hardware [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.620 2 DEBUG nova.virt.hardware [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.620 2 DEBUG nova.virt.hardware [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:40:10 np0005466031 nova_compute[235803]: 2025-10-02 12:40:10.623 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:40:11 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2693718750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.134 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.161 2 DEBUG nova.storage.rbd_utils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.165 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:40:11 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/306191312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.585 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.588 2 DEBUG nova.virt.libvirt.vif [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1561014529',display_name='tempest-tempest.common.compute-instance-1561014529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1561014529',id=108,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-z4rax6ew',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerAction
sTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:40:01Z,user_data=None,user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=e884c412-2e45-4b28-b840-00335c863f28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.588 2 DEBUG nova.network.os_vif_util [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.590 2 DEBUG nova.network.os_vif_util [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.591 2 DEBUG nova.objects.instance [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_devices' on Instance uuid e884c412-2e45-4b28-b840-00335c863f28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.658 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  <uuid>e884c412-2e45-4b28-b840-00335c863f28</uuid>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  <name>instance-0000006c</name>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <nova:name>tempest-tempest.common.compute-instance-1561014529</nova:name>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:40:10</nova:creationTime>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <nova:user uuid="17a0940c9daf48ac8cfa6c3e56d0e39c">tempest-ServerActionsTestOtherA-1849713132-project-member</nova:user>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <nova:project uuid="88141e38aa2347299e7ab249431ef68c">tempest-ServerActionsTestOtherA-1849713132</nova:project>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <nova:port uuid="c88f61eb-a07d-435d-a75c-39224295dd64">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <entry name="serial">e884c412-2e45-4b28-b840-00335c863f28</entry>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <entry name="uuid">e884c412-2e45-4b28-b840-00335c863f28</entry>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/e884c412-2e45-4b28-b840-00335c863f28_disk">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/e884c412-2e45-4b28-b840-00335c863f28_disk.config">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:41:5d:9d"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <target dev="tapc88f61eb-a0"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/console.log" append="off"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:40:11 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:40:11 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:40:11 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:40:11 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.659 2 DEBUG nova.compute.manager [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Preparing to wait for external event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.659 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.660 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.660 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.661 2 DEBUG nova.virt.libvirt.vif [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1561014529',display_name='tempest-tempest.common.compute-instance-1561014529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1561014529',id=108,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-z4rax6ew',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:40:01Z,user_data=None,user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=e884c412-2e45-4b28-b840-00335c863f28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.662 2 DEBUG nova.network.os_vif_util [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.663 2 DEBUG nova.network.os_vif_util [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.664 2 DEBUG os_vif [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.666 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.671 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc88f61eb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.672 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc88f61eb-a0, col_values=(('external_ids', {'iface-id': 'c88f61eb-a07d-435d-a75c-39224295dd64', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:5d:9d', 'vm-uuid': 'e884c412-2e45-4b28-b840-00335c863f28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:11 np0005466031 NetworkManager[44907]: <info>  [1759408811.6752] manager: (tapc88f61eb-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.683 2 INFO os_vif [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0')#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.843 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.844 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.845 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No VIF found with MAC fa:16:3e:41:5d:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.846 2 INFO nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Using config drive#033[00m
Oct  2 08:40:11 np0005466031 nova_compute[235803]: 2025-10-02 12:40:11.878 2 DEBUG nova.storage.rbd_utils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:11.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:12.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:12 np0005466031 nova_compute[235803]: 2025-10-02 12:40:12.596 2 INFO nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Creating config drive at /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/disk.config#033[00m
Oct  2 08:40:12 np0005466031 nova_compute[235803]: 2025-10-02 12:40:12.600 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkzjr6unf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:12 np0005466031 nova_compute[235803]: 2025-10-02 12:40:12.734 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkzjr6unf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:12 np0005466031 nova_compute[235803]: 2025-10-02 12:40:12.772 2 DEBUG nova.storage.rbd_utils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:12 np0005466031 nova_compute[235803]: 2025-10-02 12:40:12.776 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/disk.config e884c412-2e45-4b28-b840-00335c863f28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.143 2 DEBUG nova.network.neutron [req-037367d8-1f26-4c87-8033-eadd4808baae req-aba4c55f-ff2f-481c-ad6c-100f2ee6cb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Updated VIF entry in instance network info cache for port c88f61eb-a07d-435d-a75c-39224295dd64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.144 2 DEBUG nova.network.neutron [req-037367d8-1f26-4c87-8033-eadd4808baae req-aba4c55f-ff2f-481c-ad6c-100f2ee6cb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Updating instance_info_cache with network_info: [{"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.298 2 DEBUG oslo_concurrency.lockutils [req-037367d8-1f26-4c87-8033-eadd4808baae req-aba4c55f-ff2f-481c-ad6c-100f2ee6cb1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e884c412-2e45-4b28-b840-00335c863f28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.300 2 DEBUG oslo_concurrency.processutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/disk.config e884c412-2e45-4b28-b840-00335c863f28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.301 2 INFO nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Deleting local config drive /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/disk.config because it was imported into RBD.#033[00m
Oct  2 08:40:13 np0005466031 kernel: tapc88f61eb-a0: entered promiscuous mode
Oct  2 08:40:13 np0005466031 NetworkManager[44907]: <info>  [1759408813.3541] manager: (tapc88f61eb-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:13Z|00391|binding|INFO|Claiming lport c88f61eb-a07d-435d-a75c-39224295dd64 for this chassis.
Oct  2 08:40:13 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:13Z|00392|binding|INFO|c88f61eb-a07d-435d-a75c-39224295dd64: Claiming fa:16:3e:41:5d:9d 10.100.0.7
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466031 NetworkManager[44907]: <info>  [1759408813.4136] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466031 NetworkManager[44907]: <info>  [1759408813.4150] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Oct  2 08:40:13 np0005466031 systemd-udevd[281177]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:40:13 np0005466031 systemd-machined[192227]: New machine qemu-45-instance-0000006c.
Oct  2 08:40:13 np0005466031 NetworkManager[44907]: <info>  [1759408813.4399] device (tapc88f61eb-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:40:13 np0005466031 systemd[1]: Started Virtual Machine qemu-45-instance-0000006c.
Oct  2 08:40:13 np0005466031 NetworkManager[44907]: <info>  [1759408813.4436] device (tapc88f61eb-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.511 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:5d:9d 10.100.0.7'], port_security=['fa:16:3e:41:5d:9d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e884c412-2e45-4b28-b840-00335c863f28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '01dffe06-e9c5-44f7-8e0c-9bbbdc67ec7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=c88f61eb-a07d-435d-a75c-39224295dd64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.513 141898 INFO neutron.agent.ovn.metadata.agent [-] Port c88f61eb-a07d-435d-a75c-39224295dd64 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b bound to our chassis#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.514 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.529 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9e3139-8eaa-4c77-b00b-35e29a0c3f9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.530 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3643647-71 in ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.532 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3643647-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.532 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6f56fe37-3ba5-43eb-a010-25393d6a7f72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.533 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[90fbb093-7ccc-4032-9c15-ce309a5fe76e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.559 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[83bb591a-4ce5-4965-b4a2-4d287bae88e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.589 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[80dece66-cc39-4b1c-9127-03b9af760ddc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.623 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ba0c0b-256d-4b16-966e-071eeac62ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 NetworkManager[44907]: <info>  [1759408813.6305] manager: (tapf3643647-70): new Veth device (/org/freedesktop/NetworkManager/Devices/191)
Oct  2 08:40:13 np0005466031 systemd-udevd[281180]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.632 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[49e62727-baa2-4d59-bfed-38928e0a195f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.675 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b15389c1-1721-4a4a-9865-f763be15496b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:13Z|00393|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.678 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[9846f8a2-761b-4fb3-a96e-7cdefc6b9489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:13Z|00394|binding|INFO|Setting lport c88f61eb-a07d-435d-a75c-39224295dd64 ovn-installed in OVS
Oct  2 08:40:13 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:13Z|00395|binding|INFO|Setting lport c88f61eb-a07d-435d-a75c-39224295dd64 up in Southbound
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466031 NetworkManager[44907]: <info>  [1759408813.7034] device (tapf3643647-70): carrier: link connected
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.709 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[4541e355-7e26-4515-92bc-e668a0675dc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.727 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a1540413-ae5e-47bc-a973-a618a4e2adb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666929, 'reachable_time': 16866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281229, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.751 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[587ad52c-92bd-4ace-9588-f6a0ebcca1cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:edfc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666929, 'tstamp': 666929}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281245, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.776 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[645aac9d-c10c-41d0-b296-7bddd7b933d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666929, 'reachable_time': 16866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281247, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.819 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[999d9152-6fdb-41df-b64c-563e3c25be87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.892 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3d159812-7cdb-42f5-b1ee-5e8087d1a5d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.894 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.895 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.896 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466031 NetworkManager[44907]: <info>  [1759408813.8993] manager: (tapf3643647-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Oct  2 08:40:13 np0005466031 kernel: tapf3643647-70: entered promiscuous mode
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.905 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:13 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:13Z|00396|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:13.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:13 np0005466031 nova_compute[235803]: 2025-10-02 12:40:13.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.928 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.930 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[da599fbb-641c-41fc-8cde-33d78a6a5ed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.931 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:40:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:13.933 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'env', 'PROCESS_TAG=haproxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3643647-7cd9-4c43-8aaa-9b0f3160274b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:40:14 np0005466031 nova_compute[235803]: 2025-10-02 12:40:14.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:14 np0005466031 podman[281289]: 2025-10-02 12:40:14.294029232 +0000 UTC m=+0.052887993 container create f060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:40:14 np0005466031 systemd[1]: Started libpod-conmon-f060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b.scope.
Oct  2 08:40:14 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:40:14 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61aef39c519992cf94c8fcae72a5a375faa0055a5f7ab7d98659f748d52f5735/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:40:14 np0005466031 nova_compute[235803]: 2025-10-02 12:40:14.358 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408814.3582728, e884c412-2e45-4b28-b840-00335c863f28 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:40:14 np0005466031 nova_compute[235803]: 2025-10-02 12:40:14.359 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] VM Started (Lifecycle Event)#033[00m
Oct  2 08:40:14 np0005466031 podman[281289]: 2025-10-02 12:40:14.268634241 +0000 UTC m=+0.027493022 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:40:14 np0005466031 podman[281289]: 2025-10-02 12:40:14.366888378 +0000 UTC m=+0.125747139 container init f060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:40:14 np0005466031 podman[281289]: 2025-10-02 12:40:14.379838961 +0000 UTC m=+0.138697722 container start f060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:40:14 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[281304]: [NOTICE]   (281308) : New worker (281310) forked
Oct  2 08:40:14 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[281304]: [NOTICE]   (281308) : Loading success.
Oct  2 08:40:14 np0005466031 nova_compute[235803]: 2025-10-02 12:40:14.487 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:14.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:14 np0005466031 nova_compute[235803]: 2025-10-02 12:40:14.536 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:40:14 np0005466031 nova_compute[235803]: 2025-10-02 12:40:14.540 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408814.3584356, e884c412-2e45-4b28-b840-00335c863f28 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:40:14 np0005466031 nova_compute[235803]: 2025-10-02 12:40:14.540 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:40:14 np0005466031 nova_compute[235803]: 2025-10-02 12:40:14.896 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:40:14 np0005466031 nova_compute[235803]: 2025-10-02 12:40:14.899 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:40:15 np0005466031 nova_compute[235803]: 2025-10-02 12:40:15.081 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:40:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:15.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:16.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:16 np0005466031 nova_compute[235803]: 2025-10-02 12:40:16.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:40:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:40:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:17.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:18.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:18 np0005466031 nova_compute[235803]: 2025-10-02 12:40:18.990 2 DEBUG nova.compute.manager [req-0caecdc1-4789-45e6-8896-e9e44aae0e7d req-1310e3b5-632f-4103-ac79-477b13acaee4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:40:18 np0005466031 nova_compute[235803]: 2025-10-02 12:40:18.990 2 DEBUG oslo_concurrency.lockutils [req-0caecdc1-4789-45e6-8896-e9e44aae0e7d req-1310e3b5-632f-4103-ac79-477b13acaee4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:40:18 np0005466031 nova_compute[235803]: 2025-10-02 12:40:18.991 2 DEBUG oslo_concurrency.lockutils [req-0caecdc1-4789-45e6-8896-e9e44aae0e7d req-1310e3b5-632f-4103-ac79-477b13acaee4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:40:18 np0005466031 nova_compute[235803]: 2025-10-02 12:40:18.991 2 DEBUG oslo_concurrency.lockutils [req-0caecdc1-4789-45e6-8896-e9e44aae0e7d req-1310e3b5-632f-4103-ac79-477b13acaee4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:40:18 np0005466031 nova_compute[235803]: 2025-10-02 12:40:18.991 2 DEBUG nova.compute.manager [req-0caecdc1-4789-45e6-8896-e9e44aae0e7d req-1310e3b5-632f-4103-ac79-477b13acaee4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Processing event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 08:40:18 np0005466031 nova_compute[235803]: 2025-10-02 12:40:18.992 2 DEBUG nova.compute.manager [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:40:18 np0005466031 nova_compute[235803]: 2025-10-02 12:40:18.998 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408818.9974365, e884c412-2e45-4b28-b840-00335c863f28 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:40:18 np0005466031 nova_compute[235803]: 2025-10-02 12:40:18.998 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] VM Resumed (Lifecycle Event)
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.001 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.004 2 INFO nova.virt.libvirt.driver [-] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance spawned successfully.
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.004 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.091 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.094 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.095 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.095 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.096 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.096 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.096 2 DEBUG nova.virt.libvirt.driver [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.100 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.212 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.414 2 INFO nova.compute.manager [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Took 17.44 seconds to spawn the instance on the hypervisor.
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.414 2 DEBUG nova.compute.manager [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.713 2 INFO nova.compute.manager [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Took 19.56 seconds to build instance.
Oct  2 08:40:19 np0005466031 nova_compute[235803]: 2025-10-02 12:40:19.918 2 DEBUG oslo_concurrency.lockutils [None req-8b930c3a-26ce-4094-8757-3cdbb23bbaea 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:40:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:19.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:20.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:20 np0005466031 nova_compute[235803]: 2025-10-02 12:40:20.995 2 DEBUG oslo_concurrency.lockutils [None req-18fb1674-488a-409c-9a2e-b39ff75dd2d5 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:40:20 np0005466031 nova_compute[235803]: 2025-10-02 12:40:20.995 2 DEBUG oslo_concurrency.lockutils [None req-18fb1674-488a-409c-9a2e-b39ff75dd2d5 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:40:20 np0005466031 nova_compute[235803]: 2025-10-02 12:40:20.995 2 DEBUG nova.compute.manager [None req-18fb1674-488a-409c-9a2e-b39ff75dd2d5 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:40:20 np0005466031 nova_compute[235803]: 2025-10-02 12:40:20.999 2 DEBUG nova.compute.manager [None req-18fb1674-488a-409c-9a2e-b39ff75dd2d5 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct  2 08:40:21 np0005466031 nova_compute[235803]: 2025-10-02 12:40:20.999 2 DEBUG nova.objects.instance [None req-18fb1674-488a-409c-9a2e-b39ff75dd2d5 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'flavor' on Instance uuid e884c412-2e45-4b28-b840-00335c863f28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:40:21 np0005466031 nova_compute[235803]: 2025-10-02 12:40:21.105 2 DEBUG nova.virt.libvirt.driver [None req-18fb1674-488a-409c-9a2e-b39ff75dd2d5 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct  2 08:40:21 np0005466031 nova_compute[235803]: 2025-10-02 12:40:21.399 2 DEBUG nova.compute.manager [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:40:21 np0005466031 nova_compute[235803]: 2025-10-02 12:40:21.399 2 DEBUG oslo_concurrency.lockutils [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:40:21 np0005466031 nova_compute[235803]: 2025-10-02 12:40:21.399 2 DEBUG oslo_concurrency.lockutils [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:40:21 np0005466031 nova_compute[235803]: 2025-10-02 12:40:21.400 2 DEBUG oslo_concurrency.lockutils [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:40:21 np0005466031 nova_compute[235803]: 2025-10-02 12:40:21.400 2 DEBUG nova.compute.manager [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] No waiting events found dispatching network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:40:21 np0005466031 nova_compute[235803]: 2025-10-02 12:40:21.400 2 WARNING nova.compute.manager [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received unexpected event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 for instance with vm_state active and task_state powering-off.
Oct  2 08:40:21 np0005466031 nova_compute[235803]: 2025-10-02 12:40:21.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:21.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:22.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:23.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:24 np0005466031 nova_compute[235803]: 2025-10-02 12:40:24.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:40:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:24.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:40:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:25.849 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:40:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:25.850 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:40:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:25.851 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:40:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:25.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:26.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:26 np0005466031 nova_compute[235803]: 2025-10-02 12:40:26.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:27.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:28.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:28 np0005466031 podman[281428]: 2025-10-02 12:40:28.647598278 +0000 UTC m=+0.071788027 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:40:28 np0005466031 podman[281429]: 2025-10-02 12:40:28.693303983 +0000 UTC m=+0.115058492 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:40:29 np0005466031 nova_compute[235803]: 2025-10-02 12:40:29.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:29.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:30.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:31 np0005466031 nova_compute[235803]: 2025-10-02 12:40:31.159 2 DEBUG nova.virt.libvirt.driver [None req-18fb1674-488a-409c-9a2e-b39ff75dd2d5 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct  2 08:40:31 np0005466031 nova_compute[235803]: 2025-10-02 12:40:31.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:31.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:32.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:33.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:34 np0005466031 nova_compute[235803]: 2025-10-02 12:40:34.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:35.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:36.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:36 np0005466031 podman[281476]: 2025-10-02 12:40:36.683853512 +0000 UTC m=+0.098621489 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:40:36 np0005466031 podman[281477]: 2025-10-02 12:40:36.707342138 +0000 UTC m=+0.117265235 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:40:36 np0005466031 nova_compute[235803]: 2025-10-02 12:40:36.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:37.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:38.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:38Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:5d:9d 10.100.0.7
Oct  2 08:40:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:38Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:5d:9d 10.100.0.7
Oct  2 08:40:39 np0005466031 nova_compute[235803]: 2025-10-02 12:40:39.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:40:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:39.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:40:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:40.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:41 np0005466031 nova_compute[235803]: 2025-10-02 12:40:41.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:41.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:42 np0005466031 nova_compute[235803]: 2025-10-02 12:40:42.208 2 DEBUG nova.virt.libvirt.driver [None req-18fb1674-488a-409c-9a2e-b39ff75dd2d5 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:40:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:42.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:43.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:44 np0005466031 nova_compute[235803]: 2025-10-02 12:40:44.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:44.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:44 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:44Z|00397|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct  2 08:40:44 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:44Z|00398|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct  2 08:40:44 np0005466031 nova_compute[235803]: 2025-10-02 12:40:44.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:45 np0005466031 kernel: tapc88f61eb-a0 (unregistering): left promiscuous mode
Oct  2 08:40:45 np0005466031 NetworkManager[44907]: <info>  [1759408845.6614] device (tapc88f61eb-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:40:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:45Z|00399|binding|INFO|Releasing lport c88f61eb-a07d-435d-a75c-39224295dd64 from this chassis (sb_readonly=0)
Oct  2 08:40:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:45Z|00400|binding|INFO|Setting lport c88f61eb-a07d-435d-a75c-39224295dd64 down in Southbound
Oct  2 08:40:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:45Z|00401|binding|INFO|Removing iface tapc88f61eb-a0 ovn-installed in OVS
Oct  2 08:40:45 np0005466031 nova_compute[235803]: 2025-10-02 12:40:45.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:45 np0005466031 nova_compute[235803]: 2025-10-02 12:40:45.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:45 np0005466031 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct  2 08:40:45 np0005466031 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000006c.scope: Consumed 14.783s CPU time.
Oct  2 08:40:45 np0005466031 systemd-machined[192227]: Machine qemu-45-instance-0000006c terminated.
Oct  2 08:40:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:45.863 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:5d:9d 10.100.0.7'], port_security=['fa:16:3e:41:5d:9d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e884c412-2e45-4b28-b840-00335c863f28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '01dffe06-e9c5-44f7-8e0c-9bbbdc67ec7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=c88f61eb-a07d-435d-a75c-39224295dd64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:40:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:45.865 141898 INFO neutron.agent.ovn.metadata.agent [-] Port c88f61eb-a07d-435d-a75c-39224295dd64 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b unbound from our chassis#033[00m
Oct  2 08:40:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:45.868 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3643647-7cd9-4c43-8aaa-9b0f3160274b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:40:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:45.869 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4386d2df-7cb6-49c7-b3f0-d7170a652678]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:45.870 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace which is not needed anymore#033[00m
Oct  2 08:40:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:45.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:46 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[281304]: [NOTICE]   (281308) : haproxy version is 2.8.14-c23fe91
Oct  2 08:40:46 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[281304]: [NOTICE]   (281308) : path to executable is /usr/sbin/haproxy
Oct  2 08:40:46 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[281304]: [WARNING]  (281308) : Exiting Master process...
Oct  2 08:40:46 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[281304]: [ALERT]    (281308) : Current worker (281310) exited with code 143 (Terminated)
Oct  2 08:40:46 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[281304]: [WARNING]  (281308) : All workers exited. Exiting... (0)
Oct  2 08:40:46 np0005466031 systemd[1]: libpod-f060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b.scope: Deactivated successfully.
Oct  2 08:40:46 np0005466031 podman[281604]: 2025-10-02 12:40:46.153849102 +0000 UTC m=+0.173122903 container died f060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:40:46 np0005466031 nova_compute[235803]: 2025-10-02 12:40:46.229 2 INFO nova.virt.libvirt.driver [None req-18fb1674-488a-409c-9a2e-b39ff75dd2d5 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance shutdown successfully after 25 seconds.#033[00m
Oct  2 08:40:46 np0005466031 nova_compute[235803]: 2025-10-02 12:40:46.238 2 INFO nova.virt.libvirt.driver [-] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance destroyed successfully.#033[00m
Oct  2 08:40:46 np0005466031 nova_compute[235803]: 2025-10-02 12:40:46.238 2 DEBUG nova.objects.instance [None req-18fb1674-488a-409c-9a2e-b39ff75dd2d5 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'numa_topology' on Instance uuid e884c412-2e45-4b28-b840-00335c863f28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:46 np0005466031 nova_compute[235803]: 2025-10-02 12:40:46.356 2 DEBUG nova.compute.manager [None req-18fb1674-488a-409c-9a2e-b39ff75dd2d5 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:40:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:46.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:46 np0005466031 nova_compute[235803]: 2025-10-02 12:40:46.549 2 DEBUG oslo_concurrency.lockutils [None req-18fb1674-488a-409c-9a2e-b39ff75dd2d5 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 25.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:46 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b-userdata-shm.mount: Deactivated successfully.
Oct  2 08:40:46 np0005466031 systemd[1]: var-lib-containers-storage-overlay-61aef39c519992cf94c8fcae72a5a375faa0055a5f7ab7d98659f748d52f5735-merged.mount: Deactivated successfully.
Oct  2 08:40:46 np0005466031 nova_compute[235803]: 2025-10-02 12:40:46.816 2 DEBUG nova.compute.manager [req-3cbfa110-b7d7-45f6-8f67-cd620243f4e0 req-a061a5dd-5e2c-4188-b894-354e4d0a1084 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received event network-vif-unplugged-c88f61eb-a07d-435d-a75c-39224295dd64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:40:46 np0005466031 nova_compute[235803]: 2025-10-02 12:40:46.817 2 DEBUG oslo_concurrency.lockutils [req-3cbfa110-b7d7-45f6-8f67-cd620243f4e0 req-a061a5dd-5e2c-4188-b894-354e4d0a1084 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:46 np0005466031 nova_compute[235803]: 2025-10-02 12:40:46.817 2 DEBUG oslo_concurrency.lockutils [req-3cbfa110-b7d7-45f6-8f67-cd620243f4e0 req-a061a5dd-5e2c-4188-b894-354e4d0a1084 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:46 np0005466031 nova_compute[235803]: 2025-10-02 12:40:46.818 2 DEBUG oslo_concurrency.lockutils [req-3cbfa110-b7d7-45f6-8f67-cd620243f4e0 req-a061a5dd-5e2c-4188-b894-354e4d0a1084 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:46 np0005466031 nova_compute[235803]: 2025-10-02 12:40:46.818 2 DEBUG nova.compute.manager [req-3cbfa110-b7d7-45f6-8f67-cd620243f4e0 req-a061a5dd-5e2c-4188-b894-354e4d0a1084 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] No waiting events found dispatching network-vif-unplugged-c88f61eb-a07d-435d-a75c-39224295dd64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:40:46 np0005466031 nova_compute[235803]: 2025-10-02 12:40:46.819 2 WARNING nova.compute.manager [req-3cbfa110-b7d7-45f6-8f67-cd620243f4e0 req-a061a5dd-5e2c-4188-b894-354e4d0a1084 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received unexpected event network-vif-unplugged-c88f61eb-a07d-435d-a75c-39224295dd64 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:40:46 np0005466031 nova_compute[235803]: 2025-10-02 12:40:46.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:47 np0005466031 podman[281604]: 2025-10-02 12:40:47.135757294 +0000 UTC m=+1.155031065 container cleanup f060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:40:47 np0005466031 systemd[1]: libpod-conmon-f060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b.scope: Deactivated successfully.
Oct  2 08:40:47 np0005466031 podman[281637]: 2025-10-02 12:40:47.564860612 +0000 UTC m=+0.402824043 container remove f060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:40:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:47.574 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[776c8f2a-465d-4408-974d-28a7fafb00b5]: (4, ('Thu Oct  2 12:40:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (f060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b)\nf060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b\nThu Oct  2 12:40:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (f060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b)\nf060e7d3bc8652f25687b08b35cf5b1f1333bd4ff1d80f8c5935ad6fc811c26b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:47.576 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[418d8d97-3895-45e2-b35d-13c3d34fe60e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:47.577 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:47 np0005466031 nova_compute[235803]: 2025-10-02 12:40:47.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:47 np0005466031 kernel: tapf3643647-70: left promiscuous mode
Oct  2 08:40:47 np0005466031 nova_compute[235803]: 2025-10-02 12:40:47.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:47.620 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2589ba7f-a06d-4483-a67e-6a120a81adef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:47.660 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa98ccd-29dc-4861-97af-576d4d8eb273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:47.661 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0f28ee92-1fd6-46b4-96ed-ff30195b33ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:47.692 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[185ed570-1bfe-43bc-9f57-8fe471e9f882]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666920, 'reachable_time': 20294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281656, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:47.696 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:40:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:47.696 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[88e5ac25-ab97-4350-affb-3f74bdbcb1c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:47 np0005466031 systemd[1]: run-netns-ovnmeta\x2df3643647\x2d7cd9\x2d4c43\x2d8aaa\x2d9b0f3160274b.mount: Deactivated successfully.
Oct  2 08:40:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:47.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:48.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:49 np0005466031 nova_compute[235803]: 2025-10-02 12:40:49.069 2 DEBUG nova.compute.manager [req-e572afdd-c493-44ca-9ef3-224ac30bf287 req-73df175f-d0b9-4cd9-b6f2-7ccdef2a2dcc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:40:49 np0005466031 nova_compute[235803]: 2025-10-02 12:40:49.069 2 DEBUG oslo_concurrency.lockutils [req-e572afdd-c493-44ca-9ef3-224ac30bf287 req-73df175f-d0b9-4cd9-b6f2-7ccdef2a2dcc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:49 np0005466031 nova_compute[235803]: 2025-10-02 12:40:49.069 2 DEBUG oslo_concurrency.lockutils [req-e572afdd-c493-44ca-9ef3-224ac30bf287 req-73df175f-d0b9-4cd9-b6f2-7ccdef2a2dcc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:49 np0005466031 nova_compute[235803]: 2025-10-02 12:40:49.070 2 DEBUG oslo_concurrency.lockutils [req-e572afdd-c493-44ca-9ef3-224ac30bf287 req-73df175f-d0b9-4cd9-b6f2-7ccdef2a2dcc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:49 np0005466031 nova_compute[235803]: 2025-10-02 12:40:49.070 2 DEBUG nova.compute.manager [req-e572afdd-c493-44ca-9ef3-224ac30bf287 req-73df175f-d0b9-4cd9-b6f2-7ccdef2a2dcc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] No waiting events found dispatching network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:40:49 np0005466031 nova_compute[235803]: 2025-10-02 12:40:49.070 2 WARNING nova.compute.manager [req-e572afdd-c493-44ca-9ef3-224ac30bf287 req-73df175f-d0b9-4cd9-b6f2-7ccdef2a2dcc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received unexpected event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 for instance with vm_state stopped and task_state rebuilding.#033[00m
Oct  2 08:40:49 np0005466031 nova_compute[235803]: 2025-10-02 12:40:49.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:49.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:50 np0005466031 nova_compute[235803]: 2025-10-02 12:40:50.399 2 INFO nova.compute.manager [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Rebuilding instance#033[00m
Oct  2 08:40:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:40:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:50.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:40:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:50 np0005466031 nova_compute[235803]: 2025-10-02 12:40:50.649 2 DEBUG nova.objects.instance [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'trusted_certs' on Instance uuid e884c412-2e45-4b28-b840-00335c863f28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:50 np0005466031 nova_compute[235803]: 2025-10-02 12:40:50.917 2 DEBUG nova.compute.manager [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.196 2 DEBUG nova.objects.instance [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_requests' on Instance uuid e884c412-2e45-4b28-b840-00335c863f28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.348 2 DEBUG nova.objects.instance [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_devices' on Instance uuid e884c412-2e45-4b28-b840-00335c863f28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.605 2 DEBUG nova.objects.instance [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'resources' on Instance uuid e884c412-2e45-4b28-b840-00335c863f28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.742 2 DEBUG nova.objects.instance [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'migration_context' on Instance uuid e884c412-2e45-4b28-b840-00335c863f28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.853 2 DEBUG nova.objects.instance [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.856 2 INFO nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance already shutdown.#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.862 2 INFO nova.virt.libvirt.driver [-] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance destroyed successfully.#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.866 2 INFO nova.virt.libvirt.driver [-] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance destroyed successfully.#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.867 2 DEBUG nova.virt.libvirt.vif [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:39:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1561014529',display_name='tempest-tempest.common.compute-instance-1561014529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1561014529',id=108,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:40:19Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-z4rax6ew',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:40:48Z,user_data=None,user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=e884c412-2e45-4b28-b840-00335c863f28,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.868 2 DEBUG nova.network.os_vif_util [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.869 2 DEBUG nova.network.os_vif_util [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.869 2 DEBUG os_vif [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.872 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc88f61eb-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:51 np0005466031 nova_compute[235803]: 2025-10-02 12:40:51.878 2 INFO os_vif [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0')#033[00m
Oct  2 08:40:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:51.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:52.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:53 np0005466031 nova_compute[235803]: 2025-10-02 12:40:53.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:53 np0005466031 nova_compute[235803]: 2025-10-02 12:40:53.819 2 INFO nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Deleting instance files /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28_del#033[00m
Oct  2 08:40:53 np0005466031 nova_compute[235803]: 2025-10-02 12:40:53.821 2 INFO nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Deletion of /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28_del complete#033[00m
Oct  2 08:40:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:53.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.147 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.148 2 INFO nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Creating image(s)#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.169 2 DEBUG nova.storage.rbd_utils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.194 2 DEBUG nova.storage.rbd_utils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.221 2 DEBUG nova.storage.rbd_utils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.225 2 DEBUG oslo_concurrency.processutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.291 2 DEBUG oslo_concurrency.processutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.292 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.293 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.293 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.317 2 DEBUG nova.storage.rbd_utils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.320 2 DEBUG oslo_concurrency.processutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 e884c412-2e45-4b28-b840-00335c863f28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:54.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.661 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.856 2 DEBUG oslo_concurrency.processutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 e884c412-2e45-4b28-b840-00335c863f28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:54 np0005466031 nova_compute[235803]: 2025-10-02 12:40:54.935 2 DEBUG nova.storage.rbd_utils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] resizing rbd image e884c412-2e45-4b28-b840-00335c863f28_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.050 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.050 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Ensure instance console log exists: /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.051 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.051 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.051 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.053 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Start _get_guest_xml network_info=[{"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.056 2 WARNING nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.062 2 DEBUG nova.virt.libvirt.host [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.062 2 DEBUG nova.virt.libvirt.host [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.064 2 DEBUG nova.virt.libvirt.host [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.065 2 DEBUG nova.virt.libvirt.host [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.066 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.066 2 DEBUG nova.virt.hardware [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.066 2 DEBUG nova.virt.hardware [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.067 2 DEBUG nova.virt.hardware [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.067 2 DEBUG nova.virt.hardware [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.067 2 DEBUG nova.virt.hardware [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.067 2 DEBUG nova.virt.hardware [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.067 2 DEBUG nova.virt.hardware [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.068 2 DEBUG nova.virt.hardware [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.068 2 DEBUG nova.virt.hardware [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.068 2 DEBUG nova.virt.hardware [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.068 2 DEBUG nova.virt.hardware [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.069 2 DEBUG nova.objects.instance [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'vcpu_model' on Instance uuid e884c412-2e45-4b28-b840-00335c863f28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.115 2 DEBUG oslo_concurrency.processutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:40:55 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4238417375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.556 2 DEBUG oslo_concurrency.processutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.579 2 DEBUG nova.storage.rbd_utils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.583 2 DEBUG oslo_concurrency.processutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:55 np0005466031 nova_compute[235803]: 2025-10-02 12:40:55.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:55.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:40:56 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/813156201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.034 2 DEBUG oslo_concurrency.processutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.036 2 DEBUG nova.virt.libvirt.vif [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:39:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1561014529',display_name='tempest-tempest.common.compute-instance-1561014529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1561014529',id=108,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:40:19Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-z4rax6ew',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:40:54Z,user_data=None,user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=e884c412-2e45-4b28-b840-00335c863f28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.036 2 DEBUG nova.network.os_vif_util [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.037 2 DEBUG nova.network.os_vif_util [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.040 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  <uuid>e884c412-2e45-4b28-b840-00335c863f28</uuid>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  <name>instance-0000006c</name>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <nova:name>tempest-tempest.common.compute-instance-1561014529</nova:name>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:40:55</nova:creationTime>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <nova:user uuid="17a0940c9daf48ac8cfa6c3e56d0e39c">tempest-ServerActionsTestOtherA-1849713132-project-member</nova:user>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <nova:project uuid="88141e38aa2347299e7ab249431ef68c">tempest-ServerActionsTestOtherA-1849713132</nova:project>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <nova:port uuid="c88f61eb-a07d-435d-a75c-39224295dd64">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <entry name="serial">e884c412-2e45-4b28-b840-00335c863f28</entry>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <entry name="uuid">e884c412-2e45-4b28-b840-00335c863f28</entry>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/e884c412-2e45-4b28-b840-00335c863f28_disk">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/e884c412-2e45-4b28-b840-00335c863f28_disk.config">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:41:5d:9d"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <target dev="tapc88f61eb-a0"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/console.log" append="off"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:40:56 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:40:56 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:40:56 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:40:56 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.041 2 DEBUG nova.compute.manager [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Preparing to wait for external event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.041 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.042 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.042 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.043 2 DEBUG nova.virt.libvirt.vif [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:39:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1561014529',display_name='tempest-tempest.common.compute-instance-1561014529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1561014529',id=108,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:40:19Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-z4rax6ew',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:40:54Z,user_data=None,user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=e884c412-2e45-4b28-b840-00335c863f28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.043 2 DEBUG nova.network.os_vif_util [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.043 2 DEBUG nova.network.os_vif_util [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.044 2 DEBUG os_vif [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.045 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.045 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc88f61eb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc88f61eb-a0, col_values=(('external_ids', {'iface-id': 'c88f61eb-a07d-435d-a75c-39224295dd64', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:5d:9d', 'vm-uuid': 'e884c412-2e45-4b28-b840-00335c863f28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:56 np0005466031 NetworkManager[44907]: <info>  [1759408856.0508] manager: (tapc88f61eb-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.056 2 INFO os_vif [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0')#033[00m
Oct  2 08:40:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:56.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.550 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.550 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.550 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No VIF found with MAC fa:16:3e:41:5d:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.551 2 INFO nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Using config drive#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.574 2 DEBUG nova.storage.rbd_utils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.629 2 DEBUG nova.objects.instance [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'ec2_ids' on Instance uuid e884c412-2e45-4b28-b840-00335c863f28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.828 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.829 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.829 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.829 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.829 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:56 np0005466031 nova_compute[235803]: 2025-10-02 12:40:56.854 2 DEBUG nova.objects.instance [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'keypairs' on Instance uuid e884c412-2e45-4b28-b840-00335c863f28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/598766264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:40:57 np0005466031 nova_compute[235803]: 2025-10-02 12:40:57.240 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:57 np0005466031 nova_compute[235803]: 2025-10-02 12:40:57.506 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:40:57 np0005466031 nova_compute[235803]: 2025-10-02 12:40:57.506 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:40:57 np0005466031 nova_compute[235803]: 2025-10-02 12:40:57.509 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:40:57 np0005466031 nova_compute[235803]: 2025-10-02 12:40:57.509 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:40:57 np0005466031 nova_compute[235803]: 2025-10-02 12:40:57.513 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:40:57 np0005466031 nova_compute[235803]: 2025-10-02 12:40:57.513 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:40:57 np0005466031 nova_compute[235803]: 2025-10-02 12:40:57.666 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:40:57 np0005466031 nova_compute[235803]: 2025-10-02 12:40:57.667 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4057MB free_disk=20.742088317871094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:40:57 np0005466031 nova_compute[235803]: 2025-10-02 12:40:57.668 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:57 np0005466031 nova_compute[235803]: 2025-10-02 12:40:57.668 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:57.941112) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408857941140, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2431, "num_deletes": 254, "total_data_size": 5724170, "memory_usage": 5799680, "flush_reason": "Manual Compaction"}
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408857956992, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 3742892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44590, "largest_seqno": 47016, "table_properties": {"data_size": 3733017, "index_size": 6241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21101, "raw_average_key_size": 20, "raw_value_size": 3713052, "raw_average_value_size": 3658, "num_data_blocks": 271, "num_entries": 1015, "num_filter_entries": 1015, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408657, "oldest_key_time": 1759408657, "file_creation_time": 1759408857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 15910 microseconds, and 6772 cpu microseconds.
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:57.957022) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 3742892 bytes OK
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:57.957038) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:57.958590) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:57.958601) EVENT_LOG_v1 {"time_micros": 1759408857958597, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:57.958615) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 5713391, prev total WAL file size 5713391, number of live WAL files 2.
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:57.959909) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(3655KB)], [87(9402KB)]
Oct  2 08:40:57 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408857959934, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 13371010, "oldest_snapshot_seqno": -1}
Oct  2 08:40:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:40:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:57.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 7133 keys, 11442959 bytes, temperature: kUnknown
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408858041683, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 11442959, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11394433, "index_size": 29598, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 183702, "raw_average_key_size": 25, "raw_value_size": 11266144, "raw_average_value_size": 1579, "num_data_blocks": 1173, "num_entries": 7133, "num_filter_entries": 7133, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759408857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:58.041927) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11442959 bytes
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:58.044345) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.4 rd, 139.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.2 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(6.6) write-amplify(3.1) OK, records in: 7661, records dropped: 528 output_compression: NoCompression
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:58.044421) EVENT_LOG_v1 {"time_micros": 1759408858044392, "job": 54, "event": "compaction_finished", "compaction_time_micros": 81826, "compaction_time_cpu_micros": 34205, "output_level": 6, "num_output_files": 1, "total_output_size": 11442959, "num_input_records": 7661, "num_output_records": 7133, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408858045985, "job": 54, "event": "table_file_deletion", "file_number": 89}
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408858049771, "job": 54, "event": "table_file_deletion", "file_number": 87}
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:57.959859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:58.049888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:58.049895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:58.049898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:58.049900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:40:58.049902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.080 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance dc4a4f9d-2d68-4b95-a651-f1817489ccd6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.080 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 2a9f318e-50b4-47f5-b281-128055b9d810 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.081 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance e884c412-2e45-4b28-b840-00335c863f28 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.081 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.081 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.375 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:58.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.747 2 INFO nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Creating config drive at /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/disk.config#033[00m
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.754 2 DEBUG oslo_concurrency.processutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdw6t345y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:40:58 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/395680106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.827 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.833 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.885 2 DEBUG oslo_concurrency.processutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdw6t345y" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.917 2 DEBUG nova.storage.rbd_utils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image e884c412-2e45-4b28-b840-00335c863f28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.920 2 DEBUG oslo_concurrency.processutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/disk.config e884c412-2e45-4b28-b840-00335c863f28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:58 np0005466031 nova_compute[235803]: 2025-10-02 12:40:58.991 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:40:59 np0005466031 nova_compute[235803]: 2025-10-02 12:40:59.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:59 np0005466031 nova_compute[235803]: 2025-10-02 12:40:59.192 2 DEBUG oslo_concurrency.processutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/disk.config e884c412-2e45-4b28-b840-00335c863f28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:59 np0005466031 nova_compute[235803]: 2025-10-02 12:40:59.192 2 INFO nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Deleting local config drive /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28/disk.config because it was imported into RBD.#033[00m
Oct  2 08:40:59 np0005466031 kernel: tapc88f61eb-a0: entered promiscuous mode
Oct  2 08:40:59 np0005466031 NetworkManager[44907]: <info>  [1759408859.2403] manager: (tapc88f61eb-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Oct  2 08:40:59 np0005466031 nova_compute[235803]: 2025-10-02 12:40:59.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:59 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:59Z|00402|binding|INFO|Claiming lport c88f61eb-a07d-435d-a75c-39224295dd64 for this chassis.
Oct  2 08:40:59 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:59Z|00403|binding|INFO|c88f61eb-a07d-435d-a75c-39224295dd64: Claiming fa:16:3e:41:5d:9d 10.100.0.7
Oct  2 08:40:59 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:59Z|00404|binding|INFO|Setting lport c88f61eb-a07d-435d-a75c-39224295dd64 ovn-installed in OVS
Oct  2 08:40:59 np0005466031 nova_compute[235803]: 2025-10-02 12:40:59.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:59 np0005466031 nova_compute[235803]: 2025-10-02 12:40:59.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:59 np0005466031 systemd-machined[192227]: New machine qemu-46-instance-0000006c.
Oct  2 08:40:59 np0005466031 systemd[1]: Started Virtual Machine qemu-46-instance-0000006c.
Oct  2 08:40:59 np0005466031 systemd-udevd[282047]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:40:59 np0005466031 NetworkManager[44907]: <info>  [1759408859.3127] device (tapc88f61eb-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:40:59 np0005466031 NetworkManager[44907]: <info>  [1759408859.3140] device (tapc88f61eb-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:40:59 np0005466031 podman[282025]: 2025-10-02 12:40:59.387778782 +0000 UTC m=+0.129464176 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.397 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:5d:9d 10.100.0.7'], port_security=['fa:16:3e:41:5d:9d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e884c412-2e45-4b28-b840-00335c863f28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '01dffe06-e9c5-44f7-8e0c-9bbbdc67ec7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=c88f61eb-a07d-435d-a75c-39224295dd64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:40:59 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:59Z|00405|binding|INFO|Setting lport c88f61eb-a07d-435d-a75c-39224295dd64 up in Southbound
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.398 141898 INFO neutron.agent.ovn.metadata.agent [-] Port c88f61eb-a07d-435d-a75c-39224295dd64 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b bound to our chassis#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.400 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b#033[00m
Oct  2 08:40:59 np0005466031 podman[282026]: 2025-10-02 12:40:59.404320778 +0000 UTC m=+0.138283870 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.412 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2a5f9c-527d-4e5e-8050-65374ccb7eaf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.413 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3643647-71 in ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:40:59 np0005466031 nova_compute[235803]: 2025-10-02 12:40:59.412 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:40:59 np0005466031 nova_compute[235803]: 2025-10-02 12:40:59.413 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.414 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3643647-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.414 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[411d14f0-9627-42f9-acb7-e0e681386682]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.415 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[db1aadae-58a0-4e3e-851c-b673ea1acfdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.434 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[1e92e713-47ea-4729-8bfb-17ce83089e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.459 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d519568d-6501-4426-b775-6fd67e6e80c9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.487 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc59c4f-cbba-45d3-b26f-baa3f2c0fefb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.492 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0d08f05c-f23b-45d0-87ab-b2e0ed9b9d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 NetworkManager[44907]: <info>  [1759408859.4932] manager: (tapf3643647-70): new Veth device (/org/freedesktop/NetworkManager/Devices/195)
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.529 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[090161c9-83df-4ab5-8da7-a2ef3ea8544d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.531 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[8f56321f-c300-43e6-9f3c-7599d520e1f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 NetworkManager[44907]: <info>  [1759408859.5550] device (tapf3643647-70): carrier: link connected
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.564 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[72f5b374-11ba-4288-baa9-8770fd0afb2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.582 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[30b162f1-8d63-4f5d-8563-3cd60d8d760a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671514, 'reachable_time': 33203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282108, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.597 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f425ccc0-dabc-4160-a2b9-5b46ad2706dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:edfc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671514, 'tstamp': 671514}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282109, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.612 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[205d4bbb-6d69-4cbc-b2b6-9bd93f521b18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671514, 'reachable_time': 33203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282110, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.637 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[438a8360-aed5-403f-9274-af3f04b8fdd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.687 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd767be-f288-45a4-ae71-4389922aeb74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.689 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.689 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.689 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:59 np0005466031 NetworkManager[44907]: <info>  [1759408859.6917] manager: (tapf3643647-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Oct  2 08:40:59 np0005466031 nova_compute[235803]: 2025-10-02 12:40:59.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:59 np0005466031 kernel: tapf3643647-70: entered promiscuous mode
Oct  2 08:40:59 np0005466031 nova_compute[235803]: 2025-10-02 12:40:59.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.694 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:59 np0005466031 nova_compute[235803]: 2025-10-02 12:40:59.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:59 np0005466031 ovn_controller[132413]: 2025-10-02T12:40:59Z|00406|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct  2 08:40:59 np0005466031 nova_compute[235803]: 2025-10-02 12:40:59.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.710 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.711 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[90cab772-42ec-45d4-830a-76fd0ff8c44e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.712 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:40:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:40:59.712 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'env', 'PROCESS_TAG=haproxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3643647-7cd9-4c43-8aaa-9b0f3160274b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:40:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:40:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:59.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:41:00Z|00407|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct  2 08:41:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:41:00Z|00408|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct  2 08:41:00 np0005466031 podman[282185]: 2025-10-02 12:41:00.03794847 +0000 UTC m=+0.045672565 container create bcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:41:00 np0005466031 systemd[1]: Started libpod-conmon-bcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716.scope.
Oct  2 08:41:00 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:41:00 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66bcedbc0846fdfdcb49fee1ed7888e26b2e19faeb6270450749612bd5b0783/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:00 np0005466031 podman[282185]: 2025-10-02 12:41:00.102206599 +0000 UTC m=+0.109930724 container init bcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:41:00 np0005466031 podman[282185]: 2025-10-02 12:41:00.107308276 +0000 UTC m=+0.115032381 container start bcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 08:41:00 np0005466031 podman[282185]: 2025-10-02 12:41:00.010895222 +0000 UTC m=+0.018619347 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:41:00 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[282201]: [NOTICE]   (282205) : New worker (282217) forked
Oct  2 08:41:00 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[282201]: [NOTICE]   (282205) : Loading success.
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.334 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for e884c412-2e45-4b28-b840-00335c863f28 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.335 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408860.3341498, e884c412-2e45-4b28-b840-00335c863f28 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.335 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] VM Started (Lifecycle Event)#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.406 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.411 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408860.334725, e884c412-2e45-4b28-b840-00335c863f28 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.411 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.412 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.412 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:41:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:00.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.685 2 DEBUG nova.compute.manager [req-cbb48860-55f4-4b4a-acdc-f9c25630cf78 req-ef0d013c-4a8e-4754-baef-57dd599758c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.685 2 DEBUG oslo_concurrency.lockutils [req-cbb48860-55f4-4b4a-acdc-f9c25630cf78 req-ef0d013c-4a8e-4754-baef-57dd599758c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.686 2 DEBUG oslo_concurrency.lockutils [req-cbb48860-55f4-4b4a-acdc-f9c25630cf78 req-ef0d013c-4a8e-4754-baef-57dd599758c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.686 2 DEBUG oslo_concurrency.lockutils [req-cbb48860-55f4-4b4a-acdc-f9c25630cf78 req-ef0d013c-4a8e-4754-baef-57dd599758c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.686 2 DEBUG nova.compute.manager [req-cbb48860-55f4-4b4a-acdc-f9c25630cf78 req-ef0d013c-4a8e-4754-baef-57dd599758c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Processing event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.687 2 DEBUG nova.compute.manager [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.691 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.696 2 INFO nova.virt.libvirt.driver [-] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance spawned successfully.#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.697 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.714 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.722 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408860.6910927, e884c412-2e45-4b28-b840-00335c863f28 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.722 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.906 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.910 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.910 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.911 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.911 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.912 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.912 2 DEBUG nova.virt.libvirt.driver [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.915 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.981 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.981 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:41:00 np0005466031 nova_compute[235803]: 2025-10-02 12:41:00.981 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.110 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.193 2 DEBUG nova.compute.manager [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.298 2 INFO nova.compute.manager [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] bringing vm to original state: 'stopped'#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.375 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.377 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.477 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.478 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.478 2 DEBUG nova.compute.manager [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.481 2 DEBUG nova.compute.manager [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 08:41:01 np0005466031 kernel: tapc88f61eb-a0 (unregistering): left promiscuous mode
Oct  2 08:41:01 np0005466031 NetworkManager[44907]: <info>  [1759408861.5915] device (tapc88f61eb-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:41:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:41:01Z|00409|binding|INFO|Releasing lport c88f61eb-a07d-435d-a75c-39224295dd64 from this chassis (sb_readonly=0)
Oct  2 08:41:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:41:01Z|00410|binding|INFO|Setting lport c88f61eb-a07d-435d-a75c-39224295dd64 down in Southbound
Oct  2 08:41:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:41:01Z|00411|binding|INFO|Removing iface tapc88f61eb-a0 ovn-installed in OVS
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.621 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:5d:9d 10.100.0.7'], port_security=['fa:16:3e:41:5d:9d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e884c412-2e45-4b28-b840-00335c863f28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '01dffe06-e9c5-44f7-8e0c-9bbbdc67ec7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=c88f61eb-a07d-435d-a75c-39224295dd64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.622 141898 INFO neutron.agent.ovn.metadata.agent [-] Port c88f61eb-a07d-435d-a75c-39224295dd64 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b unbound from our chassis#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.624 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3643647-7cd9-4c43-8aaa-9b0f3160274b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.625 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c393ca10-7f57-434a-80fc-aca89bd20647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.625 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace which is not needed anymore#033[00m
Oct  2 08:41:01 np0005466031 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct  2 08:41:01 np0005466031 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006c.scope: Consumed 1.701s CPU time.
Oct  2 08:41:01 np0005466031 systemd-machined[192227]: Machine qemu-46-instance-0000006c terminated.
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.712 2 INFO nova.virt.libvirt.driver [-] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance destroyed successfully.#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.713 2 DEBUG nova.compute.manager [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:01 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[282201]: [NOTICE]   (282205) : haproxy version is 2.8.14-c23fe91
Oct  2 08:41:01 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[282201]: [NOTICE]   (282205) : path to executable is /usr/sbin/haproxy
Oct  2 08:41:01 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[282201]: [WARNING]  (282205) : Exiting Master process...
Oct  2 08:41:01 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[282201]: [ALERT]    (282205) : Current worker (282217) exited with code 143 (Terminated)
Oct  2 08:41:01 np0005466031 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[282201]: [WARNING]  (282205) : All workers exited. Exiting... (0)
Oct  2 08:41:01 np0005466031 systemd[1]: libpod-bcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716.scope: Deactivated successfully.
Oct  2 08:41:01 np0005466031 podman[282288]: 2025-10-02 12:41:01.741798107 +0000 UTC m=+0.043759250 container died bcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 08:41:01 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716-userdata-shm.mount: Deactivated successfully.
Oct  2 08:41:01 np0005466031 systemd[1]: var-lib-containers-storage-overlay-f66bcedbc0846fdfdcb49fee1ed7888e26b2e19faeb6270450749612bd5b0783-merged.mount: Deactivated successfully.
Oct  2 08:41:01 np0005466031 podman[282288]: 2025-10-02 12:41:01.779880393 +0000 UTC m=+0.081841526 container cleanup bcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:41:01 np0005466031 systemd[1]: libpod-conmon-bcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716.scope: Deactivated successfully.
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.826 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:01 np0005466031 podman[282329]: 2025-10-02 12:41:01.833236858 +0000 UTC m=+0.035120732 container remove bcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.838 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bf2b710e-6526-4b07-9528-7a5b66335994]: (4, ('Thu Oct  2 12:41:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (bcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716)\nbcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716\nThu Oct  2 12:41:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (bcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716)\nbcb84b9081ec0dff4fa6aea3ed66b8a1523a19b8f0545fc27684fd4f9a648716\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.840 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9a9fa848-c13a-4ba6-a2f4-e4b6b5fffbdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.841 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:01 np0005466031 kernel: tapf3643647-70: left promiscuous mode
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.862 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d71cdafd-edb8-4f05-a487-edae09c61752]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.884 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.884 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:01 np0005466031 nova_compute[235803]: 2025-10-02 12:41:01.885 2 DEBUG nova.objects.instance [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.890 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed8e5f6-3c44-4c06-8148-f85a145bc99f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.891 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[275b7a4f-1625-4daa-8136-f38fc5b1ec75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.904 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[82e5f99c-08b6-4877-bb4d-150b9ef83512]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671507, 'reachable_time': 37461, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282348, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.905 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:41:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:01.906 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[ded30f7a-6ade-40da-bb6a-8c9eabb6cb16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:01 np0005466031 systemd[1]: run-netns-ovnmeta\x2df3643647\x2d7cd9\x2d4c43\x2d8aaa\x2d9b0f3160274b.mount: Deactivated successfully.
Oct  2 08:41:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:01.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:02 np0005466031 nova_compute[235803]: 2025-10-02 12:41:02.005 2 DEBUG oslo_concurrency.lockutils [None req-4aeb47af-dfb2-4ece-8fe9-1d3f5e3a09b3 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:02.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.023 2 DEBUG nova.compute.manager [req-71c9b332-979b-4e05-bcdc-5549964f52ff req-7c18f9ef-4ba0-4677-8022-ba7f73e7167a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.024 2 DEBUG oslo_concurrency.lockutils [req-71c9b332-979b-4e05-bcdc-5549964f52ff req-7c18f9ef-4ba0-4677-8022-ba7f73e7167a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.025 2 DEBUG oslo_concurrency.lockutils [req-71c9b332-979b-4e05-bcdc-5549964f52ff req-7c18f9ef-4ba0-4677-8022-ba7f73e7167a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.025 2 DEBUG oslo_concurrency.lockutils [req-71c9b332-979b-4e05-bcdc-5549964f52ff req-7c18f9ef-4ba0-4677-8022-ba7f73e7167a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.025 2 DEBUG nova.compute.manager [req-71c9b332-979b-4e05-bcdc-5549964f52ff req-7c18f9ef-4ba0-4677-8022-ba7f73e7167a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] No waiting events found dispatching network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.025 2 WARNING nova.compute.manager [req-71c9b332-979b-4e05-bcdc-5549964f52ff req-7c18f9ef-4ba0-4677-8022-ba7f73e7167a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received unexpected event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.025 2 DEBUG nova.compute.manager [req-71c9b332-979b-4e05-bcdc-5549964f52ff req-7c18f9ef-4ba0-4677-8022-ba7f73e7167a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received event network-vif-unplugged-c88f61eb-a07d-435d-a75c-39224295dd64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.026 2 DEBUG oslo_concurrency.lockutils [req-71c9b332-979b-4e05-bcdc-5549964f52ff req-7c18f9ef-4ba0-4677-8022-ba7f73e7167a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.026 2 DEBUG oslo_concurrency.lockutils [req-71c9b332-979b-4e05-bcdc-5549964f52ff req-7c18f9ef-4ba0-4677-8022-ba7f73e7167a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.026 2 DEBUG oslo_concurrency.lockutils [req-71c9b332-979b-4e05-bcdc-5549964f52ff req-7c18f9ef-4ba0-4677-8022-ba7f73e7167a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.026 2 DEBUG nova.compute.manager [req-71c9b332-979b-4e05-bcdc-5549964f52ff req-7c18f9ef-4ba0-4677-8022-ba7f73e7167a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] No waiting events found dispatching network-vif-unplugged-c88f61eb-a07d-435d-a75c-39224295dd64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.026 2 WARNING nova.compute.manager [req-71c9b332-979b-4e05-bcdc-5549964f52ff req-7c18f9ef-4ba0-4677-8022-ba7f73e7167a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received unexpected event network-vif-unplugged-c88f61eb-a07d-435d-a75c-39224295dd64 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.518 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Updating instance_info_cache with network_info: [{"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.539 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-2a9f318e-50b4-47f5-b281-128055b9d810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.539 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.540 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.540 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.540 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.540 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:41:03 np0005466031 nova_compute[235803]: 2025-10-02 12:41:03.662 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:41:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:03.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:04 np0005466031 nova_compute[235803]: 2025-10-02 12:41:04.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:04.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:05 np0005466031 nova_compute[235803]: 2025-10-02 12:41:05.176 2 DEBUG nova.compute.manager [req-328f730f-00e1-4220-a915-b0c03d2209d3 req-fff1da33-1cd8-470b-a6f2-c15b0e33b1db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:05 np0005466031 nova_compute[235803]: 2025-10-02 12:41:05.177 2 DEBUG oslo_concurrency.lockutils [req-328f730f-00e1-4220-a915-b0c03d2209d3 req-fff1da33-1cd8-470b-a6f2-c15b0e33b1db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:05 np0005466031 nova_compute[235803]: 2025-10-02 12:41:05.177 2 DEBUG oslo_concurrency.lockutils [req-328f730f-00e1-4220-a915-b0c03d2209d3 req-fff1da33-1cd8-470b-a6f2-c15b0e33b1db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:05 np0005466031 nova_compute[235803]: 2025-10-02 12:41:05.177 2 DEBUG oslo_concurrency.lockutils [req-328f730f-00e1-4220-a915-b0c03d2209d3 req-fff1da33-1cd8-470b-a6f2-c15b0e33b1db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:05 np0005466031 nova_compute[235803]: 2025-10-02 12:41:05.177 2 DEBUG nova.compute.manager [req-328f730f-00e1-4220-a915-b0c03d2209d3 req-fff1da33-1cd8-470b-a6f2-c15b0e33b1db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] No waiting events found dispatching network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:41:05 np0005466031 nova_compute[235803]: 2025-10-02 12:41:05.177 2 WARNING nova.compute.manager [req-328f730f-00e1-4220-a915-b0c03d2209d3 req-fff1da33-1cd8-470b-a6f2-c15b0e33b1db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received unexpected event network-vif-plugged-c88f61eb-a07d-435d-a75c-39224295dd64 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:41:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:05.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:06 np0005466031 nova_compute[235803]: 2025-10-02 12:41:06.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:06.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.046 2 DEBUG oslo_concurrency.lockutils [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.046 2 DEBUG oslo_concurrency.lockutils [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.046 2 DEBUG oslo_concurrency.lockutils [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "e884c412-2e45-4b28-b840-00335c863f28-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.047 2 DEBUG oslo_concurrency.lockutils [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.047 2 DEBUG oslo_concurrency.lockutils [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.048 2 INFO nova.compute.manager [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Terminating instance#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.049 2 DEBUG nova.compute.manager [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.054 2 INFO nova.virt.libvirt.driver [-] [instance: e884c412-2e45-4b28-b840-00335c863f28] Instance destroyed successfully.#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.054 2 DEBUG nova.objects.instance [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'resources' on Instance uuid e884c412-2e45-4b28-b840-00335c863f28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.210 2 DEBUG nova.virt.libvirt.vif [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:39:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1561014529',display_name='tempest-tempest.common.compute-instance-1561014529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1561014529',id=108,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:01Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-z4rax6ew',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:41:01Z,user_data=None,user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=e884c412-2e45-4b28-b840-00335c863f28,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.210 2 DEBUG nova.network.os_vif_util [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "c88f61eb-a07d-435d-a75c-39224295dd64", "address": "fa:16:3e:41:5d:9d", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc88f61eb-a0", "ovs_interfaceid": "c88f61eb-a07d-435d-a75c-39224295dd64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.211 2 DEBUG nova.network.os_vif_util [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.211 2 DEBUG os_vif [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.213 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc88f61eb-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:41:07 np0005466031 nova_compute[235803]: 2025-10-02 12:41:07.264 2 INFO os_vif [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:5d:9d,bridge_name='br-int',has_traffic_filtering=True,id=c88f61eb-a07d-435d-a75c-39224295dd64,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc88f61eb-a0')#033[00m
Oct  2 08:41:07 np0005466031 podman[282370]: 2025-10-02 12:41:07.634305548 +0000 UTC m=+0.060175113 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:41:07 np0005466031 podman[282371]: 2025-10-02 12:41:07.660589844 +0000 UTC m=+0.083532225 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:41:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:08.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:08 np0005466031 nova_compute[235803]: 2025-10-02 12:41:08.555 2 INFO nova.virt.libvirt.driver [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Deleting instance files /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28_del#033[00m
Oct  2 08:41:08 np0005466031 nova_compute[235803]: 2025-10-02 12:41:08.556 2 INFO nova.virt.libvirt.driver [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Deletion of /var/lib/nova/instances/e884c412-2e45-4b28-b840-00335c863f28_del complete#033[00m
Oct  2 08:41:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:08.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:08 np0005466031 nova_compute[235803]: 2025-10-02 12:41:08.620 2 INFO nova.compute.manager [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Took 1.57 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:41:08 np0005466031 nova_compute[235803]: 2025-10-02 12:41:08.620 2 DEBUG oslo.service.loopingcall [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:41:08 np0005466031 nova_compute[235803]: 2025-10-02 12:41:08.620 2 DEBUG nova.compute.manager [-] [instance: e884c412-2e45-4b28-b840-00335c863f28] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:41:08 np0005466031 nova_compute[235803]: 2025-10-02 12:41:08.620 2 DEBUG nova.network.neutron [-] [instance: e884c412-2e45-4b28-b840-00335c863f28] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:41:09 np0005466031 nova_compute[235803]: 2025-10-02 12:41:09.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:09.378 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:10.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e264 e264: 3 total, 3 up, 3 in
Oct  2 08:41:10 np0005466031 nova_compute[235803]: 2025-10-02 12:41:10.543 2 DEBUG nova.network.neutron [-] [instance: e884c412-2e45-4b28-b840-00335c863f28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:41:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:10.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:10 np0005466031 nova_compute[235803]: 2025-10-02 12:41:10.796 2 DEBUG nova.compute.manager [req-2060572e-9c5c-40bb-9c27-d3dacdc162e7 req-b06d6fa8-4252-4d21-b8a7-87e452840952 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Received event network-vif-deleted-c88f61eb-a07d-435d-a75c-39224295dd64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:10 np0005466031 nova_compute[235803]: 2025-10-02 12:41:10.796 2 INFO nova.compute.manager [req-2060572e-9c5c-40bb-9c27-d3dacdc162e7 req-b06d6fa8-4252-4d21-b8a7-87e452840952 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Neutron deleted interface c88f61eb-a07d-435d-a75c-39224295dd64; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:41:10 np0005466031 nova_compute[235803]: 2025-10-02 12:41:10.796 2 DEBUG nova.network.neutron [req-2060572e-9c5c-40bb-9c27-d3dacdc162e7 req-b06d6fa8-4252-4d21-b8a7-87e452840952 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:41:10 np0005466031 nova_compute[235803]: 2025-10-02 12:41:10.799 2 INFO nova.compute.manager [-] [instance: e884c412-2e45-4b28-b840-00335c863f28] Took 2.18 seconds to deallocate network for instance.#033[00m
Oct  2 08:41:10 np0005466031 nova_compute[235803]: 2025-10-02 12:41:10.836 2 DEBUG nova.compute.manager [req-2060572e-9c5c-40bb-9c27-d3dacdc162e7 req-b06d6fa8-4252-4d21-b8a7-87e452840952 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e884c412-2e45-4b28-b840-00335c863f28] Detach interface failed, port_id=c88f61eb-a07d-435d-a75c-39224295dd64, reason: Instance e884c412-2e45-4b28-b840-00335c863f28 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:41:11 np0005466031 nova_compute[235803]: 2025-10-02 12:41:11.115 2 DEBUG oslo_concurrency.lockutils [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:11 np0005466031 nova_compute[235803]: 2025-10-02 12:41:11.116 2 DEBUG oslo_concurrency.lockutils [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:11 np0005466031 nova_compute[235803]: 2025-10-02 12:41:11.204 2 DEBUG oslo_concurrency.processutils [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:41:11 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/791234106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:41:11 np0005466031 nova_compute[235803]: 2025-10-02 12:41:11.648 2 DEBUG oslo_concurrency.processutils [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:11 np0005466031 nova_compute[235803]: 2025-10-02 12:41:11.653 2 DEBUG nova.compute.provider_tree [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:41:11 np0005466031 nova_compute[235803]: 2025-10-02 12:41:11.687 2 DEBUG nova.scheduler.client.report [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:41:11 np0005466031 nova_compute[235803]: 2025-10-02 12:41:11.893 2 DEBUG oslo_concurrency.lockutils [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:12.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:12 np0005466031 nova_compute[235803]: 2025-10-02 12:41:12.127 2 INFO nova.scheduler.client.report [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Deleted allocations for instance e884c412-2e45-4b28-b840-00335c863f28#033[00m
Oct  2 08:41:12 np0005466031 nova_compute[235803]: 2025-10-02 12:41:12.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e265 e265: 3 total, 3 up, 3 in
Oct  2 08:41:12 np0005466031 nova_compute[235803]: 2025-10-02 12:41:12.488 2 DEBUG oslo_concurrency.lockutils [None req-b3fd91f7-ddd7-4351-ac89-30d1fb318894 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "e884c412-2e45-4b28-b840-00335c863f28" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:41:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:12.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:41:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e266 e266: 3 total, 3 up, 3 in
Oct  2 08:41:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:14.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:14 np0005466031 nova_compute[235803]: 2025-10-02 12:41:14.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:14.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:16.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:16.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:16 np0005466031 nova_compute[235803]: 2025-10-02 12:41:16.710 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408861.709678, e884c412-2e45-4b28-b840-00335c863f28 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:41:16 np0005466031 nova_compute[235803]: 2025-10-02 12:41:16.710 2 INFO nova.compute.manager [-] [instance: e884c412-2e45-4b28-b840-00335c863f28] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:41:17 np0005466031 nova_compute[235803]: 2025-10-02 12:41:17.081 2 DEBUG nova.compute.manager [None req-44001165-3247-4c21-9bc6-d47943f58faa - - - - - -] [instance: e884c412-2e45-4b28-b840-00335c863f28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:17 np0005466031 nova_compute[235803]: 2025-10-02 12:41:17.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:17 np0005466031 nova_compute[235803]: 2025-10-02 12:41:17.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:17 np0005466031 nova_compute[235803]: 2025-10-02 12:41:17.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:41:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:18.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:41:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2588003642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:41:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:18.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:19 np0005466031 nova_compute[235803]: 2025-10-02 12:41:19.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:20.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:20 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:20 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:20 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:20 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:20.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e267 e267: 3 total, 3 up, 3 in
Oct  2 08:41:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:41:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:41:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:22.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:22 np0005466031 nova_compute[235803]: 2025-10-02 12:41:22.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:22.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:24.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:24 np0005466031 nova_compute[235803]: 2025-10-02 12:41:24.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:24.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:25.849 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:25.849 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:41:25.850 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:41:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:26.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:41:26 np0005466031 nova_compute[235803]: 2025-10-02 12:41:26.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:26.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:27 np0005466031 nova_compute[235803]: 2025-10-02 12:41:27.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:28.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:28.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:29 np0005466031 nova_compute[235803]: 2025-10-02 12:41:29.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:29 np0005466031 podman[282629]: 2025-10-02 12:41:29.627738132 +0000 UTC m=+0.056014503 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:41:29 np0005466031 podman[282630]: 2025-10-02 12:41:29.656890101 +0000 UTC m=+0.086239223 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:41:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:30.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:30.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:32.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:32 np0005466031 nova_compute[235803]: 2025-10-02 12:41:32.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:32.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:34.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:34 np0005466031 nova_compute[235803]: 2025-10-02 12:41:34.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 08:41:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:34.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 08:41:35 np0005466031 nova_compute[235803]: 2025-10-02 12:41:35.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:36.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:36.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:37 np0005466031 nova_compute[235803]: 2025-10-02 12:41:37.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:38.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:41:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:38.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:41:38 np0005466031 podman[282731]: 2025-10-02 12:41:38.624375611 +0000 UTC m=+0.057036112 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:41:38 np0005466031 podman[282730]: 2025-10-02 12:41:38.629040035 +0000 UTC m=+0.059275366 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:41:39 np0005466031 nova_compute[235803]: 2025-10-02 12:41:39.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:40.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:40.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:41:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:42.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:41:42 np0005466031 nova_compute[235803]: 2025-10-02 12:41:42.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:42.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:44.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:44 np0005466031 nova_compute[235803]: 2025-10-02 12:41:44.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:44.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:41:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:46.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:41:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:46.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:47 np0005466031 nova_compute[235803]: 2025-10-02 12:41:47.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:48.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:48.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:48 np0005466031 ovn_controller[132413]: 2025-10-02T12:41:48Z|00412|binding|INFO|Releasing lport 838ef2e5-5061-44a9-8e66-5a057b2abc50 from this chassis (sb_readonly=0)
Oct  2 08:41:49 np0005466031 nova_compute[235803]: 2025-10-02 12:41:49.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:49 np0005466031 nova_compute[235803]: 2025-10-02 12:41:49.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:50.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:50.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:52.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:52 np0005466031 nova_compute[235803]: 2025-10-02 12:41:52.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:52.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:53 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct  2 08:41:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:54.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:54 np0005466031 nova_compute[235803]: 2025-10-02 12:41:54.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:54.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:55 np0005466031 nova_compute[235803]: 2025-10-02 12:41:55.727 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:56.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:56.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:56 np0005466031 nova_compute[235803]: 2025-10-02 12:41:56.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:56 np0005466031 nova_compute[235803]: 2025-10-02 12:41:56.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:56 np0005466031 nova_compute[235803]: 2025-10-02 12:41:56.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:56 np0005466031 nova_compute[235803]: 2025-10-02 12:41:56.667 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:56 np0005466031 nova_compute[235803]: 2025-10-02 12:41:56.667 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:56 np0005466031 nova_compute[235803]: 2025-10-02 12:41:56.668 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:56 np0005466031 nova_compute[235803]: 2025-10-02 12:41:56.668 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:41:56 np0005466031 nova_compute[235803]: 2025-10-02 12:41:56.669 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:41:57 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3154034574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.092 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.161 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.162 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.165 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.165 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.343 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.344 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4020MB free_disk=20.712051391601562GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.345 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.346 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.436 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance dc4a4f9d-2d68-4b95-a651-f1817489ccd6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.437 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 2a9f318e-50b4-47f5-b281-128055b9d810 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.437 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.437 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:41:57 np0005466031 nova_compute[235803]: 2025-10-02 12:41:57.509 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:41:58 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3672207254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:41:58 np0005466031 nova_compute[235803]: 2025-10-02 12:41:58.020 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:58 np0005466031 nova_compute[235803]: 2025-10-02 12:41:58.027 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:41:58 np0005466031 nova_compute[235803]: 2025-10-02 12:41:58.046 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:41:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:41:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:58.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:41:58 np0005466031 nova_compute[235803]: 2025-10-02 12:41:58.086 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:41:58 np0005466031 nova_compute[235803]: 2025-10-02 12:41:58.087 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:41:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:58.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:58 np0005466031 nova_compute[235803]: 2025-10-02 12:41:58.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:59 np0005466031 nova_compute[235803]: 2025-10-02 12:41:59.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:00 np0005466031 nova_compute[235803]: 2025-10-02 12:42:00.087 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:00 np0005466031 nova_compute[235803]: 2025-10-02 12:42:00.087 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:42:00 np0005466031 nova_compute[235803]: 2025-10-02 12:42:00.088 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:42:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:00.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:00.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:00 np0005466031 podman[282877]: 2025-10-02 12:42:00.649415476 +0000 UTC m=+0.066465904 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:42:00 np0005466031 podman[282881]: 2025-10-02 12:42:00.667593359 +0000 UTC m=+0.086851850 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:42:00 np0005466031 nova_compute[235803]: 2025-10-02 12:42:00.776 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:00 np0005466031 nova_compute[235803]: 2025-10-02 12:42:00.776 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:00 np0005466031 nova_compute[235803]: 2025-10-02 12:42:00.777 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:42:00 np0005466031 nova_compute[235803]: 2025-10-02 12:42:00.777 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:01 np0005466031 nova_compute[235803]: 2025-10-02 12:42:01.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:01.901 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:42:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:01.902 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:42:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:02.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:02 np0005466031 nova_compute[235803]: 2025-10-02 12:42:02.282 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updating instance_info_cache with network_info: [{"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:02 np0005466031 nova_compute[235803]: 2025-10-02 12:42:02.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:02 np0005466031 nova_compute[235803]: 2025-10-02 12:42:02.296 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-dc4a4f9d-2d68-4b95-a651-f1817489ccd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:02 np0005466031 nova_compute[235803]: 2025-10-02 12:42:02.297 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:42:02 np0005466031 nova_compute[235803]: 2025-10-02 12:42:02.297 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:02 np0005466031 nova_compute[235803]: 2025-10-02 12:42:02.298 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:02 np0005466031 nova_compute[235803]: 2025-10-02 12:42:02.298 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:02.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:02 np0005466031 nova_compute[235803]: 2025-10-02 12:42:02.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:02 np0005466031 nova_compute[235803]: 2025-10-02 12:42:02.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:42:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:04.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:04 np0005466031 nova_compute[235803]: 2025-10-02 12:42:04.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:04.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:05 np0005466031 nova_compute[235803]: 2025-10-02 12:42:05.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:06.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:42:06 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3270305858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:42:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:06.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:06.905 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:07 np0005466031 nova_compute[235803]: 2025-10-02 12:42:07.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:08.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:08.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:09 np0005466031 nova_compute[235803]: 2025-10-02 12:42:09.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:09 np0005466031 podman[282977]: 2025-10-02 12:42:09.633818461 +0000 UTC m=+0.058711341 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:42:09 np0005466031 podman[282978]: 2025-10-02 12:42:09.63377969 +0000 UTC m=+0.051730000 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 08:42:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:10.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:10.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:42:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:12.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:42:12 np0005466031 nova_compute[235803]: 2025-10-02 12:42:12.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:12.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:14.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:14 np0005466031 nova_compute[235803]: 2025-10-02 12:42:14.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:14.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:15 np0005466031 nova_compute[235803]: 2025-10-02 12:42:15.632 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:16.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:16.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:17 np0005466031 nova_compute[235803]: 2025-10-02 12:42:17.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:18.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:18.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:19 np0005466031 nova_compute[235803]: 2025-10-02 12:42:19.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:42:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2044087468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:42:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:20.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:20.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:22.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:22 np0005466031 nova_compute[235803]: 2025-10-02 12:42:22.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:22.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e268 e268: 3 total, 3 up, 3 in
Oct  2 08:42:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:24.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:24 np0005466031 nova_compute[235803]: 2025-10-02 12:42:24.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:24.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:25.849 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:25.850 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:25.850 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:26.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:26.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:27 np0005466031 nova_compute[235803]: 2025-10-02 12:42:27.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:28.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:42:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:28.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:42:29 np0005466031 nova_compute[235803]: 2025-10-02 12:42:29.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:30.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:30.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e269 e269: 3 total, 3 up, 3 in
Oct  2 08:42:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:42:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:42:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:42:31 np0005466031 podman[283209]: 2025-10-02 12:42:31.624981387 +0000 UTC m=+0.055891329 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 08:42:31 np0005466031 podman[283210]: 2025-10-02 12:42:31.659465959 +0000 UTC m=+0.088745654 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:42:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:32.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:32 np0005466031 nova_compute[235803]: 2025-10-02 12:42:32.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:32.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:34.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:34 np0005466031 nova_compute[235803]: 2025-10-02 12:42:34.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:34.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:36.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:36.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e270 e270: 3 total, 3 up, 3 in
Oct  2 08:42:37 np0005466031 nova_compute[235803]: 2025-10-02 12:42:37.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:37 np0005466031 nova_compute[235803]: 2025-10-02 12:42:37.995 2 DEBUG oslo_concurrency.lockutils [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:37 np0005466031 nova_compute[235803]: 2025-10-02 12:42:37.996 2 DEBUG oslo_concurrency.lockutils [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:37 np0005466031 nova_compute[235803]: 2025-10-02 12:42:37.996 2 DEBUG oslo_concurrency.lockutils [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:37 np0005466031 nova_compute[235803]: 2025-10-02 12:42:37.996 2 DEBUG oslo_concurrency.lockutils [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:37 np0005466031 nova_compute[235803]: 2025-10-02 12:42:37.996 2 DEBUG oslo_concurrency.lockutils [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:37 np0005466031 nova_compute[235803]: 2025-10-02 12:42:37.998 2 INFO nova.compute.manager [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Terminating instance#033[00m
Oct  2 08:42:37 np0005466031 nova_compute[235803]: 2025-10-02 12:42:37.999 2 DEBUG nova.compute.manager [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:42:38 np0005466031 kernel: tap4a059cfc-92 (unregistering): left promiscuous mode
Oct  2 08:42:38 np0005466031 NetworkManager[44907]: <info>  [1759408958.0623] device (tap4a059cfc-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:38 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:42:38 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:42:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:42:38Z|00413|binding|INFO|Releasing lport 4a059cfc-9263-4a5c-b335-f23e936035a1 from this chassis (sb_readonly=0)
Oct  2 08:42:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:42:38Z|00414|binding|INFO|Setting lport 4a059cfc-9263-4a5c-b335-f23e936035a1 down in Southbound
Oct  2 08:42:38 np0005466031 ovn_controller[132413]: 2025-10-02T12:42:38Z|00415|binding|INFO|Removing iface tap4a059cfc-92 ovn-installed in OVS
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.096 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:2a:b6 10.100.0.5'], port_security=['fa:16:3e:e4:2a:b6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2a9f318e-50b4-47f5-b281-128055b9d810', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '8', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=4a059cfc-9263-4a5c-b335-f23e936035a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.097 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 4a059cfc-9263-4a5c-b335-f23e936035a1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis#033[00m
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.098 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 494beff4-7fba-4749-8998-3432c91ac5d2#033[00m
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.112 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[64bc6a53-c3d5-4db6-96d2-c340cbcb2fd7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:38 np0005466031 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct  2 08:42:38 np0005466031 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000068.scope: Consumed 18.896s CPU time.
Oct  2 08:42:38 np0005466031 systemd-machined[192227]: Machine qemu-44-instance-00000068 terminated.
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.142 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e59192-893c-4031-8264-b4407c135f4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.146 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[22a96969-1a86-4dc4-bc6e-62144ab169e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:38.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.170 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[60c01f3c-6cc5-474c-ac65-618abdc7c86c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.184 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[db267cda-8379-4d56-aab1-cf5124c3c679]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap494beff4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:4a:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 1084, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 1084, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652505, 'reachable_time': 38483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283318, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.196 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a2d47c-942b-4749-96cb-dffb1a399c30]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652513, 'tstamp': 652513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283319, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap494beff4-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652515, 'tstamp': 652515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283319, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.198 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.203 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap494beff4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.203 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.203 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap494beff4-70, col_values=(('external_ids', {'iface-id': '838ef2e5-5061-44a9-8e66-5a057b2abc50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:38.204 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.232 2 INFO nova.virt.libvirt.driver [-] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Instance destroyed successfully.#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.232 2 DEBUG nova.objects.instance [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'resources' on Instance uuid 2a9f318e-50b4-47f5-b281-128055b9d810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.324 2 DEBUG nova.virt.libvirt.vif [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:38:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-183740436',display_name='tempest-ServerStableDeviceRescueTest-server-183740436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-183740436',id=104,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:39:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-tjzcygr6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:39:54Z,user_data=None,user_id='fdbe447f49374937a828d6281949a2a4',uuid=2a9f318e-50b4-47f5-b281-128055b9d810,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.324 2 DEBUG nova.network.os_vif_util [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "4a059cfc-9263-4a5c-b335-f23e936035a1", "address": "fa:16:3e:e4:2a:b6", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a059cfc-92", "ovs_interfaceid": "4a059cfc-9263-4a5c-b335-f23e936035a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.325 2 DEBUG nova.network.os_vif_util [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a059cfc-9263-4a5c-b335-f23e936035a1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a059cfc-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.325 2 DEBUG os_vif [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a059cfc-9263-4a5c-b335-f23e936035a1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a059cfc-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a059cfc-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.336 2 INFO os_vif [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=4a059cfc-9263-4a5c-b335-f23e936035a1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a059cfc-92')#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.358 2 DEBUG nova.compute.manager [req-15b1d07f-1761-4227-9560-f81557352ccf req-27f32b30-4216-48dc-9144-2e1d0c81c145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-unplugged-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.359 2 DEBUG oslo_concurrency.lockutils [req-15b1d07f-1761-4227-9560-f81557352ccf req-27f32b30-4216-48dc-9144-2e1d0c81c145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.359 2 DEBUG oslo_concurrency.lockutils [req-15b1d07f-1761-4227-9560-f81557352ccf req-27f32b30-4216-48dc-9144-2e1d0c81c145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.360 2 DEBUG oslo_concurrency.lockutils [req-15b1d07f-1761-4227-9560-f81557352ccf req-27f32b30-4216-48dc-9144-2e1d0c81c145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.360 2 DEBUG nova.compute.manager [req-15b1d07f-1761-4227-9560-f81557352ccf req-27f32b30-4216-48dc-9144-2e1d0c81c145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] No waiting events found dispatching network-vif-unplugged-4a059cfc-9263-4a5c-b335-f23e936035a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:38 np0005466031 nova_compute[235803]: 2025-10-02 12:42:38.360 2 DEBUG nova.compute.manager [req-15b1d07f-1761-4227-9560-f81557352ccf req-27f32b30-4216-48dc-9144-2e1d0c81c145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-unplugged-4a059cfc-9263-4a5c-b335-f23e936035a1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:42:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:38.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e271 e271: 3 total, 3 up, 3 in
Oct  2 08:42:39 np0005466031 nova_compute[235803]: 2025-10-02 12:42:39.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:40.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:40 np0005466031 nova_compute[235803]: 2025-10-02 12:42:40.271 2 INFO nova.virt.libvirt.driver [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Deleting instance files /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810_del#033[00m
Oct  2 08:42:40 np0005466031 nova_compute[235803]: 2025-10-02 12:42:40.272 2 INFO nova.virt.libvirt.driver [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Deletion of /var/lib/nova/instances/2a9f318e-50b4-47f5-b281-128055b9d810_del complete#033[00m
Oct  2 08:42:40 np0005466031 nova_compute[235803]: 2025-10-02 12:42:40.537 2 DEBUG nova.compute.manager [req-f41cdee4-c51f-44ab-88fe-18d8e4d280e4 req-c4eb93b1-4008-45cd-a5f6-9e049218afc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:40 np0005466031 nova_compute[235803]: 2025-10-02 12:42:40.538 2 DEBUG oslo_concurrency.lockutils [req-f41cdee4-c51f-44ab-88fe-18d8e4d280e4 req-c4eb93b1-4008-45cd-a5f6-9e049218afc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:40 np0005466031 nova_compute[235803]: 2025-10-02 12:42:40.538 2 DEBUG oslo_concurrency.lockutils [req-f41cdee4-c51f-44ab-88fe-18d8e4d280e4 req-c4eb93b1-4008-45cd-a5f6-9e049218afc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:40 np0005466031 nova_compute[235803]: 2025-10-02 12:42:40.538 2 DEBUG oslo_concurrency.lockutils [req-f41cdee4-c51f-44ab-88fe-18d8e4d280e4 req-c4eb93b1-4008-45cd-a5f6-9e049218afc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:40 np0005466031 nova_compute[235803]: 2025-10-02 12:42:40.538 2 DEBUG nova.compute.manager [req-f41cdee4-c51f-44ab-88fe-18d8e4d280e4 req-c4eb93b1-4008-45cd-a5f6-9e049218afc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] No waiting events found dispatching network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:40 np0005466031 nova_compute[235803]: 2025-10-02 12:42:40.538 2 WARNING nova.compute.manager [req-f41cdee4-c51f-44ab-88fe-18d8e4d280e4 req-c4eb93b1-4008-45cd-a5f6-9e049218afc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received unexpected event network-vif-plugged-4a059cfc-9263-4a5c-b335-f23e936035a1 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:42:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:40 np0005466031 podman[283353]: 2025-10-02 12:42:40.627499255 +0000 UTC m=+0.054187210 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:42:40 np0005466031 podman[283352]: 2025-10-02 12:42:40.629043009 +0000 UTC m=+0.055626171 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct  2 08:42:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:40.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:40 np0005466031 nova_compute[235803]: 2025-10-02 12:42:40.675 2 INFO nova.compute.manager [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Took 2.68 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:42:40 np0005466031 nova_compute[235803]: 2025-10-02 12:42:40.675 2 DEBUG oslo.service.loopingcall [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:42:40 np0005466031 nova_compute[235803]: 2025-10-02 12:42:40.676 2 DEBUG nova.compute.manager [-] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:42:40 np0005466031 nova_compute[235803]: 2025-10-02 12:42:40.676 2 DEBUG nova.network.neutron [-] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:42:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e272 e272: 3 total, 3 up, 3 in
Oct  2 08:42:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:42.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e273 e273: 3 total, 3 up, 3 in
Oct  2 08:42:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:42.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:43 np0005466031 nova_compute[235803]: 2025-10-02 12:42:43.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:44 np0005466031 nova_compute[235803]: 2025-10-02 12:42:44.033 2 DEBUG nova.network.neutron [-] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:44.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:44 np0005466031 nova_compute[235803]: 2025-10-02 12:42:44.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:44 np0005466031 nova_compute[235803]: 2025-10-02 12:42:44.346 2 INFO nova.compute.manager [-] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Took 3.67 seconds to deallocate network for instance.#033[00m
Oct  2 08:42:44 np0005466031 nova_compute[235803]: 2025-10-02 12:42:44.400 2 DEBUG nova.compute.manager [req-7ebdc9cd-3e2f-4df2-a19d-d67b88b9b937 req-d4a3ccc2-579a-4f51-a116-aca34a3eec9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Received event network-vif-deleted-4a059cfc-9263-4a5c-b335-f23e936035a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:44 np0005466031 nova_compute[235803]: 2025-10-02 12:42:44.401 2 INFO nova.compute.manager [req-7ebdc9cd-3e2f-4df2-a19d-d67b88b9b937 req-d4a3ccc2-579a-4f51-a116-aca34a3eec9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Neutron deleted interface 4a059cfc-9263-4a5c-b335-f23e936035a1; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:42:44 np0005466031 nova_compute[235803]: 2025-10-02 12:42:44.402 2 DEBUG nova.network.neutron [req-7ebdc9cd-3e2f-4df2-a19d-d67b88b9b937 req-d4a3ccc2-579a-4f51-a116-aca34a3eec9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:44 np0005466031 nova_compute[235803]: 2025-10-02 12:42:44.447 2 DEBUG nova.compute.manager [req-7ebdc9cd-3e2f-4df2-a19d-d67b88b9b937 req-d4a3ccc2-579a-4f51-a116-aca34a3eec9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Detach interface failed, port_id=4a059cfc-9263-4a5c-b335-f23e936035a1, reason: Instance 2a9f318e-50b4-47f5-b281-128055b9d810 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:42:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:42:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:44.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:42:44 np0005466031 nova_compute[235803]: 2025-10-02 12:42:44.669 2 DEBUG oslo_concurrency.lockutils [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:44 np0005466031 nova_compute[235803]: 2025-10-02 12:42:44.669 2 DEBUG oslo_concurrency.lockutils [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:44 np0005466031 nova_compute[235803]: 2025-10-02 12:42:44.745 2 DEBUG oslo_concurrency.processutils [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:42:45 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4163876452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:42:45 np0005466031 nova_compute[235803]: 2025-10-02 12:42:45.223 2 DEBUG oslo_concurrency.processutils [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:45 np0005466031 nova_compute[235803]: 2025-10-02 12:42:45.230 2 DEBUG nova.compute.provider_tree [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:42:45 np0005466031 nova_compute[235803]: 2025-10-02 12:42:45.446 2 DEBUG nova.scheduler.client.report [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:42:45 np0005466031 nova_compute[235803]: 2025-10-02 12:42:45.510 2 DEBUG oslo_concurrency.lockutils [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:45 np0005466031 nova_compute[235803]: 2025-10-02 12:42:45.714 2 INFO nova.scheduler.client.report [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Deleted allocations for instance 2a9f318e-50b4-47f5-b281-128055b9d810#033[00m
Oct  2 08:42:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:46.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e274 e274: 3 total, 3 up, 3 in
Oct  2 08:42:46 np0005466031 nova_compute[235803]: 2025-10-02 12:42:46.264 2 DEBUG oslo_concurrency.lockutils [None req-403d626a-393e-45d8-91fe-f6c4d3ae394b fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "2a9f318e-50b4-47f5-b281-128055b9d810" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:46.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e275 e275: 3 total, 3 up, 3 in
Oct  2 08:42:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:48.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:48 np0005466031 nova_compute[235803]: 2025-10-02 12:42:48.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:48.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:49 np0005466031 nova_compute[235803]: 2025-10-02 12:42:49.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:50.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:42:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:50.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:42:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e276 e276: 3 total, 3 up, 3 in
Oct  2 08:42:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:52.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:52.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:53 np0005466031 nova_compute[235803]: 2025-10-02 12:42:53.232 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408958.2307749, 2a9f318e-50b4-47f5-b281-128055b9d810 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:42:53 np0005466031 nova_compute[235803]: 2025-10-02 12:42:53.232 2 INFO nova.compute.manager [-] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:42:53 np0005466031 nova_compute[235803]: 2025-10-02 12:42:53.271 2 DEBUG nova.compute.manager [None req-ed27a120-c458-4314-92e9-e2e0c3d642c0 - - - - - -] [instance: 2a9f318e-50b4-47f5-b281-128055b9d810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:53 np0005466031 nova_compute[235803]: 2025-10-02 12:42:53.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:54.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:54 np0005466031 nova_compute[235803]: 2025-10-02 12:42:54.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:54.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:56.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e277 e277: 3 total, 3 up, 3 in
Oct  2 08:42:56 np0005466031 nova_compute[235803]: 2025-10-02 12:42:56.657 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:42:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:56.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:42:57 np0005466031 nova_compute[235803]: 2025-10-02 12:42:57.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:57 np0005466031 nova_compute[235803]: 2025-10-02 12:42:57.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:57 np0005466031 nova_compute[235803]: 2025-10-02 12:42:57.698 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:57 np0005466031 nova_compute[235803]: 2025-10-02 12:42:57.698 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:57 np0005466031 nova_compute[235803]: 2025-10-02 12:42:57.698 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:57 np0005466031 nova_compute[235803]: 2025-10-02 12:42:57.698 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:42:57 np0005466031 nova_compute[235803]: 2025-10-02 12:42:57.699 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:42:58 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3834112678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:42:58 np0005466031 nova_compute[235803]: 2025-10-02 12:42:58.180 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:58.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:58 np0005466031 nova_compute[235803]: 2025-10-02 12:42:58.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:42:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:58.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:58 np0005466031 nova_compute[235803]: 2025-10-02 12:42:58.806 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:58 np0005466031 nova_compute[235803]: 2025-10-02 12:42:58.807 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:58 np0005466031 nova_compute[235803]: 2025-10-02 12:42:58.989 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:42:58 np0005466031 nova_compute[235803]: 2025-10-02 12:42:58.991 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4245MB free_disk=20.868064880371094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:42:58 np0005466031 nova_compute[235803]: 2025-10-02 12:42:58.991 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:58 np0005466031 nova_compute[235803]: 2025-10-02 12:42:58.991 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.257 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.257 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.258 2 INFO nova.compute.manager [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Unshelving#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.260 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance dc4a4f9d-2d68-4b95-a651-f1817489ccd6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.322 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 1f9101c6-f4d8-46c7-8884-386f9f08e6fb has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.323 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.323 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.372 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.439 2 DEBUG oslo_concurrency.lockutils [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.440 2 DEBUG oslo_concurrency.lockutils [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.441 2 DEBUG oslo_concurrency.lockutils [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.441 2 DEBUG oslo_concurrency.lockutils [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.441 2 DEBUG oslo_concurrency.lockutils [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.443 2 INFO nova.compute.manager [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Terminating instance#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.443 2 DEBUG nova.compute.manager [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:42:59 np0005466031 kernel: tap79d9c544-9d (unregistering): left promiscuous mode
Oct  2 08:42:59 np0005466031 NetworkManager[44907]: <info>  [1759408979.5107] device (tap79d9c544-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:59 np0005466031 ovn_controller[132413]: 2025-10-02T12:42:59Z|00416|binding|INFO|Releasing lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 from this chassis (sb_readonly=0)
Oct  2 08:42:59 np0005466031 ovn_controller[132413]: 2025-10-02T12:42:59Z|00417|binding|INFO|Setting lport 79d9c544-9d33-410a-a1d5-393ff0908cb1 down in Southbound
Oct  2 08:42:59 np0005466031 ovn_controller[132413]: 2025-10-02T12:42:59Z|00418|binding|INFO|Removing iface tap79d9c544-9d ovn-installed in OVS
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.554 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:82:e0 10.100.0.6'], port_security=['fa:16:3e:56:82:e0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc4a4f9d-2d68-4b95-a651-f1817489ccd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-494beff4-7fba-4749-8998-3432c91ac5d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79bb765ab1e4aa18672c9641b6187b9', 'neutron:revision_number': '10', 'neutron:security_group_ids': '93cf5398-1b1b-45ba-8c73-0a614ebcdc6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=989713b4-6cc6-4481-a97f-af60cb79e539, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=79d9c544-9d33-410a-a1d5-393ff0908cb1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.555 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 79d9c544-9d33-410a-a1d5-393ff0908cb1 in datapath 494beff4-7fba-4749-8998-3432c91ac5d2 unbound from our chassis#033[00m
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.556 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 494beff4-7fba-4749-8998-3432c91ac5d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:42:59 np0005466031 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct  2 08:42:59 np0005466031 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000062.scope: Consumed 24.900s CPU time.
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.561 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c812e226-ff9e-480f-8324-7e4747a6737e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.562 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 namespace which is not needed anymore#033[00m
Oct  2 08:42:59 np0005466031 systemd-machined[192227]: Machine qemu-41-instance-00000062 terminated.
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.591 2 INFO nova.virt.block_device [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Booting with volume 6ff47fe7-ec04-463b-9d03-426ce1963408 at /dev/vdc#033[00m
Oct  2 08:42:59 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278319]: [NOTICE]   (278323) : haproxy version is 2.8.14-c23fe91
Oct  2 08:42:59 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278319]: [NOTICE]   (278323) : path to executable is /usr/sbin/haproxy
Oct  2 08:42:59 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278319]: [WARNING]  (278323) : Exiting Master process...
Oct  2 08:42:59 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278319]: [ALERT]    (278323) : Current worker (278325) exited with code 143 (Terminated)
Oct  2 08:42:59 np0005466031 neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2[278319]: [WARNING]  (278323) : All workers exited. Exiting... (0)
Oct  2 08:42:59 np0005466031 systemd[1]: libpod-e0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de.scope: Deactivated successfully.
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.687 2 INFO nova.virt.libvirt.driver [-] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Instance destroyed successfully.#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.688 2 DEBUG nova.objects.instance [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lazy-loading 'resources' on Instance uuid dc4a4f9d-2d68-4b95-a651-f1817489ccd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:59 np0005466031 podman[283541]: 2025-10-02 12:42:59.693322963 +0000 UTC m=+0.046545540 container died e0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.715 2 DEBUG nova.virt.libvirt.vif [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1588003337',display_name='tempest-ServerStableDeviceRescueTest-server-1588003337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1588003337',id=98,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:46Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a79bb765ab1e4aa18672c9641b6187b9',ramdisk_id='',reservation_id='r-903zs7bi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-2109974660',owner_user_name='tempest-ServerStableDeviceRescueTest-2109974660-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:37:50Z,user_data=None,user_id='fdbe447f49374937a828d6281949a2a4',uuid=dc4a4f9d-2d68-4b95-a651-f1817489ccd6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.715 2 DEBUG nova.network.os_vif_util [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converting VIF {"id": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "address": "fa:16:3e:56:82:e0", "network": {"id": "494beff4-7fba-4749-8998-3432c91ac5d2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1801884151-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79bb765ab1e4aa18672c9641b6187b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d9c544-9d", "ovs_interfaceid": "79d9c544-9d33-410a-a1d5-393ff0908cb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.716 2 DEBUG nova.network.os_vif_util [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:82:e0,bridge_name='br-int',has_traffic_filtering=True,id=79d9c544-9d33-410a-a1d5-393ff0908cb1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d9c544-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.716 2 DEBUG os_vif [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:82:e0,bridge_name='br-int',has_traffic_filtering=True,id=79d9c544-9d33-410a-a1d5-393ff0908cb1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d9c544-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.718 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79d9c544-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.725 2 INFO os_vif [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:82:e0,bridge_name='br-int',has_traffic_filtering=True,id=79d9c544-9d33-410a-a1d5-393ff0908cb1,network=Network(494beff4-7fba-4749-8998-3432c91ac5d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d9c544-9d')#033[00m
Oct  2 08:42:59 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de-userdata-shm.mount: Deactivated successfully.
Oct  2 08:42:59 np0005466031 systemd[1]: var-lib-containers-storage-overlay-ac159d90a81fbe34ae90f210ea1190e4448846257e83124647d29e1c6c8c993c-merged.mount: Deactivated successfully.
Oct  2 08:42:59 np0005466031 podman[283541]: 2025-10-02 12:42:59.742414266 +0000 UTC m=+0.095636843 container cleanup e0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 08:42:59 np0005466031 systemd[1]: libpod-conmon-e0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de.scope: Deactivated successfully.
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.790 2 DEBUG os_brick.utils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.792 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:59 np0005466031 podman[283596]: 2025-10-02 12:42:59.808646262 +0000 UTC m=+0.041257008 container remove e0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.813 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[254d7927-7c6a-4bf5-ac0a-2b64fa5562ac]: (4, ('Thu Oct  2 12:42:59 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 (e0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de)\ne0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de\nThu Oct  2 12:42:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 (e0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de)\ne0ac73de98e69cbf882a1d061ac2d407d526ef34d965c54d4e2db24ed53e95de\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.815 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa8b059-78f2-4f4f-b594-ee159243be24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.817 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap494beff4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:59 np0005466031 kernel: tap494beff4-70: left promiscuous mode
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.821 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.822 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[c0dbdbfc-1288-43d8-b048-6731972ea8ea]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.824 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.826 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d91f38-bdec-4d89-a2cd-61f86d18bc9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.840 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.841 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[361bc9b2-2ece-4399-9386-078cd8f225b0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.846 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.854 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.854 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4c0b7d-3d1d-40d7-bfaf-f8ec570fea90]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.855 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5a28e16a-3c02-4e1b-b3a8-d947f89b0624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.857 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[f6625374-7919-4bbd-b3b7-3870c6ee2dfa]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.857 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea5ac33-7166-449b-ba86-f1eb577d7c3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.858 2 DEBUG oslo_concurrency.processutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.871 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa344e7-fe90-4883-be53-6e8ed9103a40]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652499, 'reachable_time': 44695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283621, 'error': None, 'target': 'ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.874 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-494beff4-7fba-4749-8998-3432c91ac5d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:42:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:42:59.874 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[bb445a96-856f-4866-b055-6c22d8300a97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:59 np0005466031 systemd[1]: run-netns-ovnmeta\x2d494beff4\x2d7fba\x2d4749\x2d8998\x2d3432c91ac5d2.mount: Deactivated successfully.
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.887 2 DEBUG oslo_concurrency.processutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:42:59 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/534810139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.890 2 DEBUG os_brick.initiator.connectors.lightos [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.890 2 DEBUG os_brick.initiator.connectors.lightos [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.890 2 DEBUG os_brick.initiator.connectors.lightos [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.890 2 DEBUG os_brick.utils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.891 2 DEBUG nova.virt.block_device [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating existing volume attachment record: d29b3e2b-dfc7-4306-8aff-534a50263e4a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.908 2 DEBUG nova.compute.manager [req-a2255959-f81d-48e0-9ba1-529695ec47cc req-75c12499-4e51-41a9-b058-a9e6c6f1cb48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-unplugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.908 2 DEBUG oslo_concurrency.lockutils [req-a2255959-f81d-48e0-9ba1-529695ec47cc req-75c12499-4e51-41a9-b058-a9e6c6f1cb48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.908 2 DEBUG oslo_concurrency.lockutils [req-a2255959-f81d-48e0-9ba1-529695ec47cc req-75c12499-4e51-41a9-b058-a9e6c6f1cb48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.908 2 DEBUG oslo_concurrency.lockutils [req-a2255959-f81d-48e0-9ba1-529695ec47cc req-75c12499-4e51-41a9-b058-a9e6c6f1cb48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.909 2 DEBUG nova.compute.manager [req-a2255959-f81d-48e0-9ba1-529695ec47cc req-75c12499-4e51-41a9-b058-a9e6c6f1cb48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-unplugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.909 2 DEBUG nova.compute.manager [req-a2255959-f81d-48e0-9ba1-529695ec47cc req-75c12499-4e51-41a9-b058-a9e6c6f1cb48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-unplugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.910 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.915 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.963 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.964 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:42:59 np0005466031 nova_compute[235803]: 2025-10-02 12:42:59.965 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:00.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:00 np0005466031 nova_compute[235803]: 2025-10-02 12:43:00.515 2 INFO nova.virt.libvirt.driver [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Deleting instance files /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6_del#033[00m
Oct  2 08:43:00 np0005466031 nova_compute[235803]: 2025-10-02 12:43:00.516 2 INFO nova.virt.libvirt.driver [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Deletion of /var/lib/nova/instances/dc4a4f9d-2d68-4b95-a651-f1817489ccd6_del complete#033[00m
Oct  2 08:43:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:00 np0005466031 nova_compute[235803]: 2025-10-02 12:43:00.653 2 INFO nova.compute.manager [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Took 1.21 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:43:00 np0005466031 nova_compute[235803]: 2025-10-02 12:43:00.653 2 DEBUG oslo.service.loopingcall [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:43:00 np0005466031 nova_compute[235803]: 2025-10-02 12:43:00.654 2 DEBUG nova.compute.manager [-] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:43:00 np0005466031 nova_compute[235803]: 2025-10-02 12:43:00.654 2 DEBUG nova.network.neutron [-] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:43:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:00.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:00 np0005466031 nova_compute[235803]: 2025-10-02 12:43:00.965 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:00 np0005466031 nova_compute[235803]: 2025-10-02 12:43:00.965 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.071 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.071 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.072 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.072 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.072 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.097 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.097 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.102 2 DEBUG nova.objects.instance [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'pci_requests' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.120 2 DEBUG nova.objects.instance [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'numa_topology' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.213 2 DEBUG nova.virt.hardware [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.214 2 INFO nova.compute.claims [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.514 2 DEBUG oslo_concurrency.processutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.603 2 DEBUG nova.network.neutron [-] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.609 2 DEBUG nova.compute.manager [req-d1de0038-1239-4468-8987-5f411a8a133e req-9d947fa7-514a-46c9-aeee-9e065caf0224 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-deleted-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.610 2 INFO nova.compute.manager [req-d1de0038-1239-4468-8987-5f411a8a133e req-9d947fa7-514a-46c9-aeee-9e065caf0224 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Neutron deleted interface 79d9c544-9d33-410a-a1d5-393ff0908cb1; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.610 2 DEBUG nova.network.neutron [req-d1de0038-1239-4468-8987-5f411a8a133e req-9d947fa7-514a-46c9-aeee-9e065caf0224 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.667 2 INFO nova.compute.manager [-] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Took 1.01 seconds to deallocate network for instance.#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.673 2 DEBUG nova.compute.manager [req-d1de0038-1239-4468-8987-5f411a8a133e req-9d947fa7-514a-46c9-aeee-9e065caf0224 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Detach interface failed, port_id=79d9c544-9d33-410a-a1d5-393ff0908cb1, reason: Instance dc4a4f9d-2d68-4b95-a651-f1817489ccd6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.767 2 DEBUG oslo_concurrency.lockutils [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:43:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1798938763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.948 2 DEBUG oslo_concurrency.processutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.954 2 DEBUG nova.compute.provider_tree [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:43:01 np0005466031 nova_compute[235803]: 2025-10-02 12:43:01.971 2 DEBUG nova.scheduler.client.report [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.024 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.029 2 DEBUG oslo_concurrency.lockutils [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.030 2 DEBUG nova.compute.manager [req-509201ca-aaba-4e43-a267-649d84b16852 req-018fb907-fdc4-4883-8121-848cb1a76174 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.031 2 DEBUG oslo_concurrency.lockutils [req-509201ca-aaba-4e43-a267-649d84b16852 req-018fb907-fdc4-4883-8121-848cb1a76174 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.031 2 DEBUG oslo_concurrency.lockutils [req-509201ca-aaba-4e43-a267-649d84b16852 req-018fb907-fdc4-4883-8121-848cb1a76174 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.031 2 DEBUG oslo_concurrency.lockutils [req-509201ca-aaba-4e43-a267-649d84b16852 req-018fb907-fdc4-4883-8121-848cb1a76174 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.031 2 DEBUG nova.compute.manager [req-509201ca-aaba-4e43-a267-649d84b16852 req-018fb907-fdc4-4883-8121-848cb1a76174 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] No waiting events found dispatching network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.032 2 WARNING nova.compute.manager [req-509201ca-aaba-4e43-a267-649d84b16852 req-018fb907-fdc4-4883-8121-848cb1a76174 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Received unexpected event network-vif-plugged-79d9c544-9d33-410a-a1d5-393ff0908cb1 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.106 2 DEBUG oslo_concurrency.processutils [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:02.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.206 2 INFO nova.network.neutron [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating port 8eb9e971-5920-4103-9ba9-c0846182952d with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 08:43:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:43:02 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2220759351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.500 2 DEBUG oslo_concurrency.processutils [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.505 2 DEBUG nova.compute.provider_tree [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.570 2 DEBUG nova.scheduler.client.report [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:43:02 np0005466031 podman[283721]: 2025-10-02 12:43:02.614455956 +0000 UTC m=+0.048705913 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:43:02 np0005466031 podman[283722]: 2025-10-02 12:43:02.649666539 +0000 UTC m=+0.083667749 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:43:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:02.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.765 2 DEBUG oslo_concurrency.lockutils [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:02 np0005466031 nova_compute[235803]: 2025-10-02 12:43:02.852 2 INFO nova.scheduler.client.report [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Deleted allocations for instance dc4a4f9d-2d68-4b95-a651-f1817489ccd6#033[00m
Oct  2 08:43:03 np0005466031 nova_compute[235803]: 2025-10-02 12:43:03.059 2 DEBUG oslo_concurrency.lockutils [None req-41353982-2248-485a-8751-fb10e8600772 fdbe447f49374937a828d6281949a2a4 a79bb765ab1e4aa18672c9641b6187b9 - - default default] Lock "dc4a4f9d-2d68-4b95-a651-f1817489ccd6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:03 np0005466031 nova_compute[235803]: 2025-10-02 12:43:03.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:03 np0005466031 nova_compute[235803]: 2025-10-02 12:43:03.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:43:03 np0005466031 nova_compute[235803]: 2025-10-02 12:43:03.705 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:03 np0005466031 nova_compute[235803]: 2025-10-02 12:43:03.705 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquired lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:03 np0005466031 nova_compute[235803]: 2025-10-02 12:43:03.705 2 DEBUG nova.network.neutron [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:43:03 np0005466031 nova_compute[235803]: 2025-10-02 12:43:03.906 2 DEBUG nova.compute.manager [req-87e8fc62-16bb-4b80-8a75-4d086d5ab935 req-1ae2ae1a-16aa-40e7-8312-907d49913c78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:03 np0005466031 nova_compute[235803]: 2025-10-02 12:43:03.907 2 DEBUG nova.compute.manager [req-87e8fc62-16bb-4b80-8a75-4d086d5ab935 req-1ae2ae1a-16aa-40e7-8312-907d49913c78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing instance network info cache due to event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:43:03 np0005466031 nova_compute[235803]: 2025-10-02 12:43:03.907 2 DEBUG oslo_concurrency.lockutils [req-87e8fc62-16bb-4b80-8a75-4d086d5ab935 req-1ae2ae1a-16aa-40e7-8312-907d49913c78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:04.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:04 np0005466031 nova_compute[235803]: 2025-10-02 12:43:04.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:04.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:04 np0005466031 nova_compute[235803]: 2025-10-02 12:43:04.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:04.893 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:43:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:04.894 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:43:04 np0005466031 nova_compute[235803]: 2025-10-02 12:43:04.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:43:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/69113505' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:43:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:43:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/69113505' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:43:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:06.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:06 np0005466031 nova_compute[235803]: 2025-10-02 12:43:06.289 2 DEBUG nova.network.neutron [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating instance_info_cache with network_info: [{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e278 e278: 3 total, 3 up, 3 in
Oct  2 08:43:06 np0005466031 nova_compute[235803]: 2025-10-02 12:43:06.438 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Releasing lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:06 np0005466031 nova_compute[235803]: 2025-10-02 12:43:06.439 2 DEBUG nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:43:06 np0005466031 nova_compute[235803]: 2025-10-02 12:43:06.440 2 INFO nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Creating image(s)#033[00m
Oct  2 08:43:06 np0005466031 nova_compute[235803]: 2025-10-02 12:43:06.673 2 DEBUG nova.storage.rbd_utils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:06 np0005466031 nova_compute[235803]: 2025-10-02 12:43:06.676 2 DEBUG nova.objects.instance [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:06 np0005466031 nova_compute[235803]: 2025-10-02 12:43:06.678 2 DEBUG oslo_concurrency.lockutils [req-87e8fc62-16bb-4b80-8a75-4d086d5ab935 req-1ae2ae1a-16aa-40e7-8312-907d49913c78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:06 np0005466031 nova_compute[235803]: 2025-10-02 12:43:06.678 2 DEBUG nova.network.neutron [req-87e8fc62-16bb-4b80-8a75-4d086d5ab935 req-1ae2ae1a-16aa-40e7-8312-907d49913c78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:43:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:06.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:06 np0005466031 nova_compute[235803]: 2025-10-02 12:43:06.731 2 DEBUG nova.storage.rbd_utils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:06 np0005466031 nova_compute[235803]: 2025-10-02 12:43:06.760 2 DEBUG nova.storage.rbd_utils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:06 np0005466031 nova_compute[235803]: 2025-10-02 12:43:06.764 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "ac5305de8ed57eac2613bd039e8cc914e372a250" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:06 np0005466031 nova_compute[235803]: 2025-10-02 12:43:06.765 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "ac5305de8ed57eac2613bd039e8cc914e372a250" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:07 np0005466031 nova_compute[235803]: 2025-10-02 12:43:07.069 2 DEBUG nova.virt.libvirt.imagebackend [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/c37f4151-ac68-47f8-adfa-bd0c85e4c75d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/c37f4151-ac68-47f8-adfa-bd0c85e4c75d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 08:43:07 np0005466031 nova_compute[235803]: 2025-10-02 12:43:07.120 2 DEBUG nova.virt.libvirt.imagebackend [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/c37f4151-ac68-47f8-adfa-bd0c85e4c75d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 08:43:07 np0005466031 nova_compute[235803]: 2025-10-02 12:43:07.121 2 DEBUG nova.storage.rbd_utils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] cloning images/c37f4151-ac68-47f8-adfa-bd0c85e4c75d@snap to None/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.008 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "ac5305de8ed57eac2613bd039e8cc914e372a250" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.171 2 DEBUG nova.objects.instance [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'migration_context' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:08.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.342 2 DEBUG nova.storage.rbd_utils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] flattening vms/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:43:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:08.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.831 2 DEBUG nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Image rbd:vms/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.832 2 DEBUG nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.832 2 DEBUG nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Ensure instance console log exists: /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.832 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.833 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.833 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.836 2 DEBUG nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Start _get_guest_xml network_info=[{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:42:34Z,direct_url=<?>,disk_format='raw',id=c37f4151-ac68-47f8-adfa-bd0c85e4c75d,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-1408399936-shelved',owner='f7e2edef094b4ba5a56a5ec5ffce911e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:42:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6ff47fe7-ec04-463b-9d03-426ce1963408', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6ff47fe7-ec04-463b-9d03-426ce1963408', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attached', 'instance': '1f9101c6-f4d8-46c7-8884-386f9f08e6fb', 'attached_at': '', 'detached_at': '', 'volume_id': '6ff47fe7-ec04-463b-9d03-426ce1963408', 'serial': '6ff47fe7-ec04-463b-9d03-426ce1963408'}, 'attachment_id': 'd29b3e2b-dfc7-4306-8aff-534a50263e4a', 'delete_on_termination': False, 'mount_device': '/dev/vdc', 'guest_format': None, 'boot_index': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.839 2 WARNING nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.843 2 DEBUG nova.virt.libvirt.host [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.844 2 DEBUG nova.virt.libvirt.host [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.850 2 DEBUG nova.virt.libvirt.host [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.850 2 DEBUG nova.virt.libvirt.host [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.851 2 DEBUG nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.851 2 DEBUG nova.virt.hardware [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:42:34Z,direct_url=<?>,disk_format='raw',id=c37f4151-ac68-47f8-adfa-bd0c85e4c75d,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-1408399936-shelved',owner='f7e2edef094b4ba5a56a5ec5ffce911e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:42:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.851 2 DEBUG nova.virt.hardware [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.852 2 DEBUG nova.virt.hardware [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.852 2 DEBUG nova.virt.hardware [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.852 2 DEBUG nova.virt.hardware [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.852 2 DEBUG nova.virt.hardware [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.852 2 DEBUG nova.virt.hardware [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.852 2 DEBUG nova.virt.hardware [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.853 2 DEBUG nova.virt.hardware [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.853 2 DEBUG nova.virt.hardware [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.853 2 DEBUG nova.virt.hardware [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.853 2 DEBUG nova.objects.instance [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:08 np0005466031 nova_compute[235803]: 2025-10-02 12:43:08.886 2 DEBUG oslo_concurrency.processutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3519200454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.328 2 DEBUG oslo_concurrency.processutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.363 2 DEBUG nova.storage.rbd_utils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.370 2 DEBUG oslo_concurrency.processutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.530 2 DEBUG nova.network.neutron [req-87e8fc62-16bb-4b80-8a75-4d086d5ab935 req-1ae2ae1a-16aa-40e7-8312-907d49913c78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updated VIF entry in instance network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.531 2 DEBUG nova.network.neutron [req-87e8fc62-16bb-4b80-8a75-4d086d5ab935 req-1ae2ae1a-16aa-40e7-8312-907d49913c78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating instance_info_cache with network_info: [{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.555 2 DEBUG oslo_concurrency.lockutils [req-87e8fc62-16bb-4b80-8a75-4d086d5ab935 req-1ae2ae1a-16aa-40e7-8312-907d49913c78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3432120987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.785 2 DEBUG oslo_concurrency.processutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.828 2 DEBUG nova.virt.libvirt.vif [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1408399936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1408399936',id=112,image_ref='c37f4151-ac68-47f8-adfa-bd0c85e4c75d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-372158786',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wr2knp7k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member',shelved_at='2025-10-02T12:42:45.077638',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='c37f4151-ac68-47f8-adfa-bd0c85e4c75d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=1f9101c6-f4d8-46c7-8884-386f9f08e6fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.829 2 DEBUG nova.network.os_vif_util [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.830 2 DEBUG nova.network.os_vif_util [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.831 2 DEBUG nova.objects.instance [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.849 2 DEBUG nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  <uuid>1f9101c6-f4d8-46c7-8884-386f9f08e6fb</uuid>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  <name>instance-00000070</name>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <nova:name>tempest-AttachVolumeShelveTestJSON-server-1408399936</nova:name>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:43:08</nova:creationTime>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <nova:user uuid="3151966e941f4652ba984616bfa760c7">tempest-AttachVolumeShelveTestJSON-1943710095-project-member</nova:user>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <nova:project uuid="f7e2edef094b4ba5a56a5ec5ffce911e">tempest-AttachVolumeShelveTestJSON-1943710095</nova:project>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="c37f4151-ac68-47f8-adfa-bd0c85e4c75d"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <nova:port uuid="8eb9e971-5920-4103-9ba9-c0846182952d">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <entry name="serial">1f9101c6-f4d8-46c7-8884-386f9f08e6fb</entry>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <entry name="uuid">1f9101c6-f4d8-46c7-8884-386f9f08e6fb</entry>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-6ff47fe7-ec04-463b-9d03-426ce1963408">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <target dev="vdc" bus="virtio"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <serial>6ff47fe7-ec04-463b-9d03-426ce1963408</serial>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:43:40:72"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <target dev="tap8eb9e971-59"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/console.log" append="off"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <input type="keyboard" bus="usb"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:43:09 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:43:09 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:43:09 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:43:09 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.850 2 DEBUG nova.compute.manager [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Preparing to wait for external event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.851 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.851 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.851 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.852 2 DEBUG nova.virt.libvirt.vif [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1408399936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1408399936',id=112,image_ref='c37f4151-ac68-47f8-adfa-bd0c85e4c75d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-372158786',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wr2knp7k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image
_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member',shelved_at='2025-10-02T12:42:45.077638',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='c37f4151-ac68-47f8-adfa-bd0c85e4c75d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=1f9101c6-f4d8-46c7-8884-386f9f08e6fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.852 2 DEBUG nova.network.os_vif_util [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.853 2 DEBUG nova.network.os_vif_util [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.853 2 DEBUG os_vif [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.854 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.855 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8eb9e971-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8eb9e971-59, col_values=(('external_ids', {'iface-id': '8eb9e971-5920-4103-9ba9-c0846182952d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:40:72', 'vm-uuid': '1f9101c6-f4d8-46c7-8884-386f9f08e6fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:09 np0005466031 NetworkManager[44907]: <info>  [1759408989.8597] manager: (tap8eb9e971-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.866 2 INFO os_vif [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59')#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.939 2 DEBUG nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.939 2 DEBUG nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.940 2 DEBUG nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.940 2 DEBUG nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No VIF found with MAC fa:16:3e:43:40:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.941 2 INFO nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Using config drive#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.968 2 DEBUG nova.storage.rbd_utils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:09 np0005466031 nova_compute[235803]: 2025-10-02 12:43:09.992 2 DEBUG nova.objects.instance [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:10 np0005466031 nova_compute[235803]: 2025-10-02 12:43:10.034 2 DEBUG nova.objects.instance [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'keypairs' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:10.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:10 np0005466031 nova_compute[235803]: 2025-10-02 12:43:10.430 2 INFO nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Creating config drive at /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config#033[00m
Oct  2 08:43:10 np0005466031 nova_compute[235803]: 2025-10-02 12:43:10.436 2 DEBUG oslo_concurrency.processutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1tuh0esx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:10 np0005466031 nova_compute[235803]: 2025-10-02 12:43:10.570 2 DEBUG oslo_concurrency.processutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1tuh0esx" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:10 np0005466031 nova_compute[235803]: 2025-10-02 12:43:10.604 2 DEBUG nova.storage.rbd_utils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:10 np0005466031 nova_compute[235803]: 2025-10-02 12:43:10.609 2 DEBUG oslo_concurrency.processutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:10 np0005466031 nova_compute[235803]: 2025-10-02 12:43:10.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:10.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:10 np0005466031 nova_compute[235803]: 2025-10-02 12:43:10.797 2 DEBUG oslo_concurrency.processutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:10 np0005466031 nova_compute[235803]: 2025-10-02 12:43:10.798 2 INFO nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Deleting local config drive /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config because it was imported into RBD.#033[00m
Oct  2 08:43:10 np0005466031 nova_compute[235803]: 2025-10-02 12:43:10.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:10 np0005466031 kernel: tap8eb9e971-59: entered promiscuous mode
Oct  2 08:43:10 np0005466031 NetworkManager[44907]: <info>  [1759408990.8546] manager: (tap8eb9e971-59): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Oct  2 08:43:10 np0005466031 nova_compute[235803]: 2025-10-02 12:43:10.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:10Z|00419|binding|INFO|Claiming lport 8eb9e971-5920-4103-9ba9-c0846182952d for this chassis.
Oct  2 08:43:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:10Z|00420|binding|INFO|8eb9e971-5920-4103-9ba9-c0846182952d: Claiming fa:16:3e:43:40:72 10.100.0.10
Oct  2 08:43:10 np0005466031 NetworkManager[44907]: <info>  [1759408990.8691] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Oct  2 08:43:10 np0005466031 NetworkManager[44907]: <info>  [1759408990.8698] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Oct  2 08:43:10 np0005466031 nova_compute[235803]: 2025-10-02 12:43:10.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:10.875 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:40:72 10.100.0.10'], port_security=['fa:16:3e:43:40:72 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1f9101c6-f4d8-46c7-8884-386f9f08e6fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385a384c-5df0-4b04-b928-517a46df04f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7e2edef094b4ba5a56a5ec5ffce911e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '041f6b5e-0e14-4ae5-9597-3a584e6f87e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5110437-1084-431d-86cb-6ad2d219bdc1, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=8eb9e971-5920-4103-9ba9-c0846182952d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:43:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:10.876 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 8eb9e971-5920-4103-9ba9-c0846182952d in datapath 385a384c-5df0-4b04-b928-517a46df04f4 bound to our chassis#033[00m
Oct  2 08:43:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:10.878 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 385a384c-5df0-4b04-b928-517a46df04f4#033[00m
Oct  2 08:43:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:10.890 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[76989307-f9cf-475b-b809-a15843850eab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:10.891 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap385a384c-51 in ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:43:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:10.898 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap385a384c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:43:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:10.898 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9344c442-3485-4402-9093-3f6f3e7b85dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:10.900 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[50064a52-831d-45ad-bb09-34c12e886405]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:10.917 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[4a773334-271a-4148-a8eb-97da404b7a87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:10 np0005466031 systemd-machined[192227]: New machine qemu-47-instance-00000070.
Oct  2 08:43:10 np0005466031 systemd[1]: Started Virtual Machine qemu-47-instance-00000070.
Oct  2 08:43:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:10.943 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6662e5c8-e574-47e0-8726-bb3485640210]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:10 np0005466031 systemd-udevd[284154]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:43:10 np0005466031 podman[284117]: 2025-10-02 12:43:10.961643227 +0000 UTC m=+0.074206916 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:43:10 np0005466031 podman[284116]: 2025-10-02 12:43:10.966996401 +0000 UTC m=+0.080579330 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd)
Oct  2 08:43:10 np0005466031 NetworkManager[44907]: <info>  [1759408990.9671] device (tap8eb9e971-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:43:10 np0005466031 NetworkManager[44907]: <info>  [1759408990.9682] device (tap8eb9e971-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:43:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:10.981 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[395d1d9d-f526-4321-87ed-8dee6c3a0633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:10 np0005466031 NetworkManager[44907]: <info>  [1759408990.9901] manager: (tap385a384c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/201)
Oct  2 08:43:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:10.989 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[75c2382a-ea3a-4779-b997-99fdb5d32320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.035 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[607854a0-862e-4e2f-9288-062f1d06eeb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.037 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3d7556-24c5-4f61-89bb-cd6c9056fcec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:11 np0005466031 nova_compute[235803]: 2025-10-02 12:43:11.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:11 np0005466031 NetworkManager[44907]: <info>  [1759408991.0611] device (tap385a384c-50): carrier: link connected
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.066 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6e0c0c-b83b-4a02-96ad-2001257b4812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:11 np0005466031 nova_compute[235803]: 2025-10-02 12:43:11.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:11Z|00421|binding|INFO|Setting lport 8eb9e971-5920-4103-9ba9-c0846182952d ovn-installed in OVS
Oct  2 08:43:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:11Z|00422|binding|INFO|Setting lport 8eb9e971-5920-4103-9ba9-c0846182952d up in Southbound
Oct  2 08:43:11 np0005466031 nova_compute[235803]: 2025-10-02 12:43:11.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.084 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c7dea37a-0016-4e68-88fe-cf225a9eb1f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385a384c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:d4:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684665, 'reachable_time': 26574, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284190, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.102 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8aaccd1d-ebee-44d1-a240-31c5168bfab3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:d461'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 684665, 'tstamp': 684665}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284191, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.122 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2e863e47-9d67-4a9c-801c-e5abbf1fa5c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385a384c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:d4:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684665, 'reachable_time': 26574, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284192, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.153 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6351b9fb-1ed5-46c4-a49f-976b9bf829be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.203 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8357b938-d575-44eb-b9ed-59fe213cd86b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.205 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385a384c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.205 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.206 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap385a384c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:11 np0005466031 kernel: tap385a384c-50: entered promiscuous mode
Oct  2 08:43:11 np0005466031 nova_compute[235803]: 2025-10-02 12:43:11.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:11 np0005466031 NetworkManager[44907]: <info>  [1759408991.2088] manager: (tap385a384c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Oct  2 08:43:11 np0005466031 nova_compute[235803]: 2025-10-02 12:43:11.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.211 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap385a384c-50, col_values=(('external_ids', {'iface-id': '12496c3c-f50d-4104-bfb7-81f1aa24617e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:11 np0005466031 nova_compute[235803]: 2025-10-02 12:43:11.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:11Z|00423|binding|INFO|Releasing lport 12496c3c-f50d-4104-bfb7-81f1aa24617e from this chassis (sb_readonly=0)
Oct  2 08:43:11 np0005466031 nova_compute[235803]: 2025-10-02 12:43:11.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.215 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.216 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8a8127-91d8-4217-8e48-de6713dbe5f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.216 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-385a384c-5df0-4b04-b928-517a46df04f4
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 385a384c-5df0-4b04-b928-517a46df04f4
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:43:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:11.217 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'env', 'PROCESS_TAG=haproxy-385a384c-5df0-4b04-b928-517a46df04f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/385a384c-5df0-4b04-b928-517a46df04f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:43:11 np0005466031 nova_compute[235803]: 2025-10-02 12:43:11.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:11 np0005466031 nova_compute[235803]: 2025-10-02 12:43:11.376 2 DEBUG nova.compute.manager [req-13113d5a-62e9-48e5-b4ae-2b3c93f05828 req-049d7bc1-95fb-4f9e-9c02-3e4317a1a4ab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:11 np0005466031 nova_compute[235803]: 2025-10-02 12:43:11.377 2 DEBUG oslo_concurrency.lockutils [req-13113d5a-62e9-48e5-b4ae-2b3c93f05828 req-049d7bc1-95fb-4f9e-9c02-3e4317a1a4ab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:11 np0005466031 nova_compute[235803]: 2025-10-02 12:43:11.377 2 DEBUG oslo_concurrency.lockutils [req-13113d5a-62e9-48e5-b4ae-2b3c93f05828 req-049d7bc1-95fb-4f9e-9c02-3e4317a1a4ab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:11 np0005466031 nova_compute[235803]: 2025-10-02 12:43:11.377 2 DEBUG oslo_concurrency.lockutils [req-13113d5a-62e9-48e5-b4ae-2b3c93f05828 req-049d7bc1-95fb-4f9e-9c02-3e4317a1a4ab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:11 np0005466031 nova_compute[235803]: 2025-10-02 12:43:11.378 2 DEBUG nova.compute.manager [req-13113d5a-62e9-48e5-b4ae-2b3c93f05828 req-049d7bc1-95fb-4f9e-9c02-3e4317a1a4ab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Processing event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:43:11 np0005466031 podman[284238]: 2025-10-02 12:43:11.556011279 +0000 UTC m=+0.020496540 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:43:11 np0005466031 podman[284238]: 2025-10-02 12:43:11.731913221 +0000 UTC m=+0.196398462 container create e5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 08:43:11 np0005466031 systemd[1]: Started libpod-conmon-e5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1.scope.
Oct  2 08:43:11 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:43:11 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7596c94d5c07a93b1e943915139c841c80022e97cf566dc31f4e4e9a70100f2f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:43:11 np0005466031 podman[284238]: 2025-10-02 12:43:11.878008804 +0000 UTC m=+0.342494065 container init e5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:43:11 np0005466031 podman[284238]: 2025-10-02 12:43:11.884437809 +0000 UTC m=+0.348923050 container start e5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:43:11 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[284299]: [NOTICE]   (284304) : New worker (284306) forked
Oct  2 08:43:11 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[284299]: [NOTICE]   (284304) : Loading success.
Oct  2 08:43:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:12.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.227 2 DEBUG nova.compute.manager [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.228 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408992.2267923, 1f9101c6-f4d8-46c7-8884-386f9f08e6fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.228 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] VM Started (Lifecycle Event)#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.231 2 DEBUG nova.virt.libvirt.driver [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.233 2 INFO nova.virt.libvirt.driver [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance spawned successfully.#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.284 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.288 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.331 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.331 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408992.227631, 1f9101c6-f4d8-46c7-8884-386f9f08e6fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.332 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.363 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.367 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759408992.2304602, 1f9101c6-f4d8-46c7-8884-386f9f08e6fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.367 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.411 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.414 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:43:12 np0005466031 nova_compute[235803]: 2025-10-02 12:43:12.437 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:43:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:12.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e279 e279: 3 total, 3 up, 3 in
Oct  2 08:43:13 np0005466031 nova_compute[235803]: 2025-10-02 12:43:13.549 2 DEBUG nova.compute.manager [req-5aa15625-3108-43cd-b7e0-5b3d5c65851d req-99cb619b-5bf2-4f6d-b572-e7c3082363fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:13 np0005466031 nova_compute[235803]: 2025-10-02 12:43:13.549 2 DEBUG oslo_concurrency.lockutils [req-5aa15625-3108-43cd-b7e0-5b3d5c65851d req-99cb619b-5bf2-4f6d-b572-e7c3082363fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:13 np0005466031 nova_compute[235803]: 2025-10-02 12:43:13.549 2 DEBUG oslo_concurrency.lockutils [req-5aa15625-3108-43cd-b7e0-5b3d5c65851d req-99cb619b-5bf2-4f6d-b572-e7c3082363fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:13 np0005466031 nova_compute[235803]: 2025-10-02 12:43:13.550 2 DEBUG oslo_concurrency.lockutils [req-5aa15625-3108-43cd-b7e0-5b3d5c65851d req-99cb619b-5bf2-4f6d-b572-e7c3082363fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:13 np0005466031 nova_compute[235803]: 2025-10-02 12:43:13.550 2 DEBUG nova.compute.manager [req-5aa15625-3108-43cd-b7e0-5b3d5c65851d req-99cb619b-5bf2-4f6d-b572-e7c3082363fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] No waiting events found dispatching network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:43:13 np0005466031 nova_compute[235803]: 2025-10-02 12:43:13.550 2 WARNING nova.compute.manager [req-5aa15625-3108-43cd-b7e0-5b3d5c65851d req-99cb619b-5bf2-4f6d-b572-e7c3082363fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received unexpected event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Oct  2 08:43:13 np0005466031 nova_compute[235803]: 2025-10-02 12:43:13.998 2 DEBUG nova.compute.manager [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:14 np0005466031 nova_compute[235803]: 2025-10-02 12:43:14.153 2 DEBUG oslo_concurrency.lockutils [None req-aa9d12bc-3c90-48f8-a9b4-f013f56fa28d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 14.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:14.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:14 np0005466031 nova_compute[235803]: 2025-10-02 12:43:14.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:14 np0005466031 nova_compute[235803]: 2025-10-02 12:43:14.680 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408979.678833, dc4a4f9d-2d68-4b95-a651-f1817489ccd6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:14 np0005466031 nova_compute[235803]: 2025-10-02 12:43:14.681 2 INFO nova.compute.manager [-] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:43:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:14.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:14 np0005466031 nova_compute[235803]: 2025-10-02 12:43:14.720 2 DEBUG nova.compute.manager [None req-4aa9aa3a-a4f4-4a0d-b9de-d83f751a9fd2 - - - - - -] [instance: dc4a4f9d-2d68-4b95-a651-f1817489ccd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:14 np0005466031 nova_compute[235803]: 2025-10-02 12:43:14.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:14.896 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:16.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:16.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:18.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:18.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:19 np0005466031 nova_compute[235803]: 2025-10-02 12:43:19.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:19 np0005466031 nova_compute[235803]: 2025-10-02 12:43:19.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:20.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:20.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 e280: 3 total, 3 up, 3 in
Oct  2 08:43:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:22.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:22.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:22 np0005466031 nova_compute[235803]: 2025-10-02 12:43:22.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:24.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:24 np0005466031 nova_compute[235803]: 2025-10-02 12:43:24.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:24.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:24 np0005466031 nova_compute[235803]: 2025-10-02 12:43:24.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:25 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:25Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:40:72 10.100.0.10
Oct  2 08:43:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:25.850 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:25.850 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:25.851 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:26.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:26 np0005466031 nova_compute[235803]: 2025-10-02 12:43:26.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:43:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:26.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:43:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:28.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:28.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:29 np0005466031 nova_compute[235803]: 2025-10-02 12:43:29.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:29 np0005466031 nova_compute[235803]: 2025-10-02 12:43:29.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:30.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:30 np0005466031 nova_compute[235803]: 2025-10-02 12:43:30.577 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:30 np0005466031 nova_compute[235803]: 2025-10-02 12:43:30.577 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:30 np0005466031 nova_compute[235803]: 2025-10-02 12:43:30.594 2 DEBUG nova.compute.manager [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:43:30 np0005466031 nova_compute[235803]: 2025-10-02 12:43:30.662 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:30 np0005466031 nova_compute[235803]: 2025-10-02 12:43:30.663 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:30 np0005466031 nova_compute[235803]: 2025-10-02 12:43:30.671 2 DEBUG nova.virt.hardware [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:43:30 np0005466031 nova_compute[235803]: 2025-10-02 12:43:30.671 2 INFO nova.compute.claims [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:43:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:30.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:30 np0005466031 nova_compute[235803]: 2025-10-02 12:43:30.861 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:43:31 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3590946673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.335 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.341 2 DEBUG nova.compute.provider_tree [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.372 2 DEBUG nova.scheduler.client.report [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.429 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.429 2 DEBUG nova.compute.manager [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.498 2 DEBUG nova.compute.manager [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.499 2 DEBUG nova.network.neutron [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.541 2 INFO nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.578 2 DEBUG nova.compute.manager [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.657 2 DEBUG nova.compute.manager [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.658 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.658 2 INFO nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Creating image(s)#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.687 2 DEBUG nova.storage.rbd_utils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.714 2 DEBUG nova.storage.rbd_utils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.741 2 DEBUG nova.storage.rbd_utils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.744 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.774 2 DEBUG nova.policy [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b168e90f7c0c414ba26c576fb8706a80', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.824 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.825 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.826 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.826 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.853 2 DEBUG nova.storage.rbd_utils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.857 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.913 2 DEBUG oslo_concurrency.lockutils [None req-93d70584-f5f9-41e9-936b-a95ad5e51589 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.914 2 DEBUG oslo_concurrency.lockutils [None req-93d70584-f5f9-41e9-936b-a95ad5e51589 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:31 np0005466031 nova_compute[235803]: 2025-10-02 12:43:31.930 2 INFO nova.compute.manager [None req-93d70584-f5f9-41e9-936b-a95ad5e51589 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Detaching volume 6ff47fe7-ec04-463b-9d03-426ce1963408#033[00m
Oct  2 08:43:32 np0005466031 nova_compute[235803]: 2025-10-02 12:43:32.090 2 INFO nova.virt.block_device [None req-93d70584-f5f9-41e9-936b-a95ad5e51589 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Attempting to driver detach volume 6ff47fe7-ec04-463b-9d03-426ce1963408 from mountpoint /dev/vdc#033[00m
Oct  2 08:43:32 np0005466031 nova_compute[235803]: 2025-10-02 12:43:32.099 2 DEBUG nova.virt.libvirt.driver [None req-93d70584-f5f9-41e9-936b-a95ad5e51589 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Attempting to detach device vdc from instance 1f9101c6-f4d8-46c7-8884-386f9f08e6fb from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:43:32 np0005466031 nova_compute[235803]: 2025-10-02 12:43:32.100 2 DEBUG nova.virt.libvirt.guest [None req-93d70584-f5f9-41e9-936b-a95ad5e51589 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:43:32 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-6ff47fe7-ec04-463b-9d03-426ce1963408">
Oct  2 08:43:32 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:  </source>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:  <target dev="vdc" bus="virtio"/>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:  <serial>6ff47fe7-ec04-463b-9d03-426ce1963408</serial>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct  2 08:43:32 np0005466031 nova_compute[235803]: </disk>
Oct  2 08:43:32 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:43:32 np0005466031 nova_compute[235803]: 2025-10-02 12:43:32.234 2 INFO nova.virt.libvirt.driver [None req-93d70584-f5f9-41e9-936b-a95ad5e51589 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Successfully detached device vdc from instance 1f9101c6-f4d8-46c7-8884-386f9f08e6fb from the persistent domain config.#033[00m
Oct  2 08:43:32 np0005466031 nova_compute[235803]: 2025-10-02 12:43:32.234 2 DEBUG nova.virt.libvirt.driver [None req-93d70584-f5f9-41e9-936b-a95ad5e51589 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 1f9101c6-f4d8-46c7-8884-386f9f08e6fb from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:43:32 np0005466031 nova_compute[235803]: 2025-10-02 12:43:32.235 2 DEBUG nova.virt.libvirt.guest [None req-93d70584-f5f9-41e9-936b-a95ad5e51589 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:43:32 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-6ff47fe7-ec04-463b-9d03-426ce1963408">
Oct  2 08:43:32 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:  </source>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:  <target dev="vdc" bus="virtio"/>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:  <serial>6ff47fe7-ec04-463b-9d03-426ce1963408</serial>
Oct  2 08:43:32 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct  2 08:43:32 np0005466031 nova_compute[235803]: </disk>
Oct  2 08:43:32 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:43:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:32.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:32 np0005466031 nova_compute[235803]: 2025-10-02 12:43:32.500 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Received event <DeviceRemovedEvent: 1759409012.499523, 1f9101c6-f4d8-46c7-8884-386f9f08e6fb => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:43:32 np0005466031 nova_compute[235803]: 2025-10-02 12:43:32.502 2 DEBUG nova.virt.libvirt.driver [None req-93d70584-f5f9-41e9-936b-a95ad5e51589 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 1f9101c6-f4d8-46c7-8884-386f9f08e6fb _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:43:32 np0005466031 nova_compute[235803]: 2025-10-02 12:43:32.506 2 INFO nova.virt.libvirt.driver [None req-93d70584-f5f9-41e9-936b-a95ad5e51589 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Successfully detached device vdc from instance 1f9101c6-f4d8-46c7-8884-386f9f08e6fb from the live domain config.#033[00m
Oct  2 08:43:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:32.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:32 np0005466031 nova_compute[235803]: 2025-10-02 12:43:32.710 2 DEBUG nova.network.neutron [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Successfully created port: 5e2a83a5-11e1-45b1-82ce-5fee577f67fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:43:32 np0005466031 nova_compute[235803]: 2025-10-02 12:43:32.814 2 DEBUG nova.objects.instance [None req-93d70584-f5f9-41e9-936b-a95ad5e51589 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'flavor' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:32 np0005466031 nova_compute[235803]: 2025-10-02 12:43:32.874 2 DEBUG oslo_concurrency.lockutils [None req-93d70584-f5f9-41e9-936b-a95ad5e51589 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.359 2 DEBUG oslo_concurrency.lockutils [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.359 2 DEBUG oslo_concurrency.lockutils [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.360 2 DEBUG oslo_concurrency.lockutils [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.360 2 DEBUG oslo_concurrency.lockutils [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.360 2 DEBUG oslo_concurrency.lockutils [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.361 2 INFO nova.compute.manager [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Terminating instance#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.362 2 DEBUG nova.compute.manager [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.473 2 DEBUG nova.network.neutron [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Successfully updated port: 5e2a83a5-11e1-45b1-82ce-5fee577f67fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.489 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.489 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquired lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.489 2 DEBUG nova.network.neutron [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:43:33 np0005466031 kernel: tap8eb9e971-59 (unregistering): left promiscuous mode
Oct  2 08:43:33 np0005466031 NetworkManager[44907]: <info>  [1759409013.5657] device (tap8eb9e971-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:43:33 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:33Z|00424|binding|INFO|Releasing lport 8eb9e971-5920-4103-9ba9-c0846182952d from this chassis (sb_readonly=0)
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:33 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:33Z|00425|binding|INFO|Setting lport 8eb9e971-5920-4103-9ba9-c0846182952d down in Southbound
Oct  2 08:43:33 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:33Z|00426|binding|INFO|Removing iface tap8eb9e971-59 ovn-installed in OVS
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:33.584 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:40:72 10.100.0.10'], port_security=['fa:16:3e:43:40:72 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1f9101c6-f4d8-46c7-8884-386f9f08e6fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385a384c-5df0-4b04-b928-517a46df04f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7e2edef094b4ba5a56a5ec5ffce911e', 'neutron:revision_number': '9', 'neutron:security_group_ids': '041f6b5e-0e14-4ae5-9597-3a584e6f87e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.243', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5110437-1084-431d-86cb-6ad2d219bdc1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=8eb9e971-5920-4103-9ba9-c0846182952d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:43:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:33.586 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 8eb9e971-5920-4103-9ba9-c0846182952d in datapath 385a384c-5df0-4b04-b928-517a46df04f4 unbound from our chassis#033[00m
Oct  2 08:43:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:33.587 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 385a384c-5df0-4b04-b928-517a46df04f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:43:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:33.589 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d426a4-ce3a-4196-ae22-1c35d5eb497d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:33.589 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 namespace which is not needed anymore#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:33 np0005466031 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000070.scope: Deactivated successfully.
Oct  2 08:43:33 np0005466031 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000070.scope: Consumed 14.308s CPU time.
Oct  2 08:43:33 np0005466031 systemd-machined[192227]: Machine qemu-47-instance-00000070 terminated.
Oct  2 08:43:33 np0005466031 podman[284493]: 2025-10-02 12:43:33.64387516 +0000 UTC m=+0.070043137 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:43:33 np0005466031 podman[284494]: 2025-10-02 12:43:33.672175434 +0000 UTC m=+0.095478748 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.689 2 DEBUG nova.compute.manager [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received event network-changed-5e2a83a5-11e1-45b1-82ce-5fee577f67fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.689 2 DEBUG nova.compute.manager [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Refreshing instance network info cache due to event network-changed-5e2a83a5-11e1-45b1-82ce-5fee577f67fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.689 2 DEBUG oslo_concurrency.lockutils [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.805 2 INFO nova.virt.libvirt.driver [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance destroyed successfully.#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.805 2 DEBUG nova.objects.instance [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'resources' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:33 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[284299]: [NOTICE]   (284304) : haproxy version is 2.8.14-c23fe91
Oct  2 08:43:33 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[284299]: [NOTICE]   (284304) : path to executable is /usr/sbin/haproxy
Oct  2 08:43:33 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[284299]: [WARNING]  (284304) : Exiting Master process...
Oct  2 08:43:33 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[284299]: [WARNING]  (284304) : Exiting Master process...
Oct  2 08:43:33 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[284299]: [ALERT]    (284304) : Current worker (284306) exited with code 143 (Terminated)
Oct  2 08:43:33 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[284299]: [WARNING]  (284304) : All workers exited. Exiting... (0)
Oct  2 08:43:33 np0005466031 systemd[1]: libpod-e5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1.scope: Deactivated successfully.
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.816 2 DEBUG nova.virt.libvirt.vif [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1408399936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1408399936',id=112,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMPES/J98pyyGI+xC972/PJIY+D7X9SqOgh45Z1MPKQ6L1b0LXV7IORHQBCxCHGOsCWQssLDPZp4WJ8irI2AsYuAH5MVzTXEt9QIB2bOJQbGultCK6n77bAruhlsubzH7w==',key_name='tempest-keypair-372158786',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:43:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wr2knp7k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:43:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=1f9101c6-f4d8-46c7-8884-386f9f08e6fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.817 2 DEBUG nova.network.os_vif_util [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.817 2 DEBUG nova.network.os_vif_util [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.818 2 DEBUG os_vif [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.820 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8eb9e971-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:33 np0005466031 podman[284556]: 2025-10-02 12:43:33.823188489 +0000 UTC m=+0.147597388 container died e5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:33 np0005466031 nova_compute[235803]: 2025-10-02 12:43:33.825 2 INFO os_vif [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59')#033[00m
Oct  2 08:43:33 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1-userdata-shm.mount: Deactivated successfully.
Oct  2 08:43:33 np0005466031 systemd[1]: var-lib-containers-storage-overlay-7596c94d5c07a93b1e943915139c841c80022e97cf566dc31f4e4e9a70100f2f-merged.mount: Deactivated successfully.
Oct  2 08:43:33 np0005466031 podman[284556]: 2025-10-02 12:43:33.960537001 +0000 UTC m=+0.284945940 container cleanup e5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:43:33 np0005466031 systemd[1]: libpod-conmon-e5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1.scope: Deactivated successfully.
Oct  2 08:43:34 np0005466031 podman[284619]: 2025-10-02 12:43:34.168921518 +0000 UTC m=+0.188183646 container remove e5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 08:43:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:34.176 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9e8690-3654-44fa-807a-56df46f9618d]: (4, ('Thu Oct  2 12:43:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 (e5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1)\ne5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1\nThu Oct  2 12:43:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 (e5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1)\ne5d0d7acef7902a8b5400607bb406e556dc545dd058c82db03cb1c503555c0c1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:34.177 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b31926a8-3951-4224-bf95-3d59e0c8060e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:34.178 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385a384c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:34 np0005466031 nova_compute[235803]: 2025-10-02 12:43:34.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:34 np0005466031 kernel: tap385a384c-50: left promiscuous mode
Oct  2 08:43:34 np0005466031 nova_compute[235803]: 2025-10-02 12:43:34.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:34.196 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[21ee8ad3-8241-4d70-b4b6-7d4fbd4af257]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:34.225 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[47ff0393-7cdb-4c2e-90c2-bda337730eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:34.226 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5d467ae2-bc5d-4803-8967-867dfd8dc2be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:34.242 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e0613944-9a67-408a-a226-aaa56dac1409]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684656, 'reachable_time': 29664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284635, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:34.244 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:43:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:34.245 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa77ff5-e13d-4beb-9aa2-4f1d39b13580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:34 np0005466031 systemd[1]: run-netns-ovnmeta\x2d385a384c\x2d5df0\x2d4b04\x2db928\x2d517a46df04f4.mount: Deactivated successfully.
Oct  2 08:43:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:34.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:34 np0005466031 nova_compute[235803]: 2025-10-02 12:43:34.267 2 DEBUG nova.network.neutron [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:43:34 np0005466031 nova_compute[235803]: 2025-10-02 12:43:34.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:35 np0005466031 nova_compute[235803]: 2025-10-02 12:43:35.273 2 DEBUG nova.network.neutron [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Updating instance_info_cache with network_info: [{"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:35 np0005466031 nova_compute[235803]: 2025-10-02 12:43:35.290 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Releasing lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:35 np0005466031 nova_compute[235803]: 2025-10-02 12:43:35.291 2 DEBUG nova.compute.manager [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Instance network_info: |[{"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:43:35 np0005466031 nova_compute[235803]: 2025-10-02 12:43:35.291 2 DEBUG oslo_concurrency.lockutils [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:35 np0005466031 nova_compute[235803]: 2025-10-02 12:43:35.292 2 DEBUG nova.network.neutron [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Refreshing network info cache for port 5e2a83a5-11e1-45b1-82ce-5fee577f67fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:43:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.002 2 DEBUG nova.compute.manager [req-a207f081-fd25-4a3d-b004-ead88bc4f888 req-16d54179-e13d-426b-8dd3-815160cac4a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-unplugged-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.003 2 DEBUG oslo_concurrency.lockutils [req-a207f081-fd25-4a3d-b004-ead88bc4f888 req-16d54179-e13d-426b-8dd3-815160cac4a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.003 2 DEBUG oslo_concurrency.lockutils [req-a207f081-fd25-4a3d-b004-ead88bc4f888 req-16d54179-e13d-426b-8dd3-815160cac4a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.003 2 DEBUG oslo_concurrency.lockutils [req-a207f081-fd25-4a3d-b004-ead88bc4f888 req-16d54179-e13d-426b-8dd3-815160cac4a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.004 2 DEBUG nova.compute.manager [req-a207f081-fd25-4a3d-b004-ead88bc4f888 req-16d54179-e13d-426b-8dd3-815160cac4a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] No waiting events found dispatching network-vif-unplugged-8eb9e971-5920-4103-9ba9-c0846182952d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.004 2 DEBUG nova.compute.manager [req-a207f081-fd25-4a3d-b004-ead88bc4f888 req-16d54179-e13d-426b-8dd3-815160cac4a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-unplugged-8eb9e971-5920-4103-9ba9-c0846182952d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.004 2 DEBUG nova.compute.manager [req-a207f081-fd25-4a3d-b004-ead88bc4f888 req-16d54179-e13d-426b-8dd3-815160cac4a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.005 2 DEBUG oslo_concurrency.lockutils [req-a207f081-fd25-4a3d-b004-ead88bc4f888 req-16d54179-e13d-426b-8dd3-815160cac4a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.005 2 DEBUG oslo_concurrency.lockutils [req-a207f081-fd25-4a3d-b004-ead88bc4f888 req-16d54179-e13d-426b-8dd3-815160cac4a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.005 2 DEBUG oslo_concurrency.lockutils [req-a207f081-fd25-4a3d-b004-ead88bc4f888 req-16d54179-e13d-426b-8dd3-815160cac4a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.006 2 DEBUG nova.compute.manager [req-a207f081-fd25-4a3d-b004-ead88bc4f888 req-16d54179-e13d-426b-8dd3-815160cac4a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] No waiting events found dispatching network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.006 2 WARNING nova.compute.manager [req-a207f081-fd25-4a3d-b004-ead88bc4f888 req-16d54179-e13d-426b-8dd3-815160cac4a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received unexpected event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.079 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:43:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:36.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.302 2 DEBUG nova.storage.rbd_utils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] resizing rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.596 2 DEBUG nova.network.neutron [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Updated VIF entry in instance network info cache for port 5e2a83a5-11e1-45b1-82ce-5fee577f67fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.596 2 DEBUG nova.network.neutron [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Updating instance_info_cache with network_info: [{"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:36 np0005466031 nova_compute[235803]: 2025-10-02 12:43:36.615 2 DEBUG oslo_concurrency.lockutils [req-c72bec1f-0d2b-490c-b546-5ff696cd4fb7 req-6d648650-99a5-476d-a5c7-1a529be0169f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:36.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:37 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3507850382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.174 2 DEBUG nova.objects.instance [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'migration_context' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.194 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.195 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Ensure instance console log exists: /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.195 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.196 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.196 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.198 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Start _get_guest_xml network_info=[{"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.203 2 WARNING nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.209 2 DEBUG nova.virt.libvirt.host [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.210 2 DEBUG nova.virt.libvirt.host [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.214 2 DEBUG nova.virt.libvirt.host [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.215 2 DEBUG nova.virt.libvirt.host [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.215 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.216 2 DEBUG nova.virt.hardware [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.216 2 DEBUG nova.virt.hardware [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.216 2 DEBUG nova.virt.hardware [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.217 2 DEBUG nova.virt.hardware [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.217 2 DEBUG nova.virt.hardware [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.217 2 DEBUG nova.virt.hardware [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.217 2 DEBUG nova.virt.hardware [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.218 2 DEBUG nova.virt.hardware [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.218 2 DEBUG nova.virt.hardware [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.218 2 DEBUG nova.virt.hardware [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.219 2 DEBUG nova.virt.hardware [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.221 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:38.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:38 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/783646978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.677 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.701 2 DEBUG nova.storage.rbd_utils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.705 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:38.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:38 np0005466031 nova_compute[235803]: 2025-10-02 12:43:38.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4191999098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.193 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.197 2 DEBUG nova.virt.libvirt.vif [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-377097009',display_name='tempest-ServerRescueNegativeTestJSON-server-377097009',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-377097009',id=117,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-msakr1ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:31Z,user_data=None,user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=17266fac-3772-4df3-b4d7-c47d8292f6d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.198 2 DEBUG nova.network.os_vif_util [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.199 2 DEBUG nova.network.os_vif_util [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:09:61,bridge_name='br-int',has_traffic_filtering=True,id=5e2a83a5-11e1-45b1-82ce-5fee577f67fe,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e2a83a5-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.202 2 DEBUG nova.objects.instance [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.225 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  <uuid>17266fac-3772-4df3-b4d7-c47d8292f6d6</uuid>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  <name>instance-00000075</name>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-377097009</nova:name>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:43:38</nova:creationTime>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <nova:user uuid="b168e90f7c0c414ba26c576fb8706a80">tempest-ServerRescueNegativeTestJSON-488939839-project-member</nova:user>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <nova:project uuid="c87621e5c0ba4f13abfff528143c1c00">tempest-ServerRescueNegativeTestJSON-488939839</nova:project>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <nova:port uuid="5e2a83a5-11e1-45b1-82ce-5fee577f67fe">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <entry name="serial">17266fac-3772-4df3-b4d7-c47d8292f6d6</entry>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <entry name="uuid">17266fac-3772-4df3-b4d7-c47d8292f6d6</entry>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/17266fac-3772-4df3-b4d7-c47d8292f6d6_disk">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.config">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:34:09:61"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <target dev="tap5e2a83a5-11"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/console.log" append="off"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:43:39 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:43:39 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:43:39 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:43:39 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.227 2 DEBUG nova.compute.manager [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Preparing to wait for external event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.228 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.228 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.228 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.229 2 DEBUG nova.virt.libvirt.vif [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-377097009',display_name='tempest-ServerRescueNegativeTestJSON-server-377097009',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-377097009',id=117,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-msakr1ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:31Z,user_data=None,user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=17266fac-3772-4df3-b4d7-c47d8292f6d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.229 2 DEBUG nova.network.os_vif_util [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.230 2 DEBUG nova.network.os_vif_util [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:09:61,bridge_name='br-int',has_traffic_filtering=True,id=5e2a83a5-11e1-45b1-82ce-5fee577f67fe,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e2a83a5-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.231 2 DEBUG os_vif [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:09:61,bridge_name='br-int',has_traffic_filtering=True,id=5e2a83a5-11e1-45b1-82ce-5fee577f67fe,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e2a83a5-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.233 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.235 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2a83a5-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5e2a83a5-11, col_values=(('external_ids', {'iface-id': '5e2a83a5-11e1-45b1-82ce-5fee577f67fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:09:61', 'vm-uuid': '17266fac-3772-4df3-b4d7-c47d8292f6d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:39 np0005466031 NetworkManager[44907]: <info>  [1759409019.2382] manager: (tap5e2a83a5-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.245 2 INFO os_vif [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:09:61,bridge_name='br-int',has_traffic_filtering=True,id=5e2a83a5-11e1-45b1-82ce-5fee577f67fe,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e2a83a5-11')#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.324 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.325 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.325 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No VIF found with MAC fa:16:3e:34:09:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.326 2 INFO nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Using config drive#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.352 2 DEBUG nova.storage.rbd_utils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:43:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:43:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.729 2 INFO nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Creating config drive at /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/disk.config#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.739 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_dw7szdc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.890 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_dw7szdc" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.927 2 DEBUG nova.storage.rbd_utils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:39 np0005466031 nova_compute[235803]: 2025-10-02 12:43:39.932 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/disk.config 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:40.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:40.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:40 np0005466031 nova_compute[235803]: 2025-10-02 12:43:40.950 2 DEBUG oslo_concurrency.processutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/disk.config 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:40 np0005466031 nova_compute[235803]: 2025-10-02 12:43:40.951 2 INFO nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Deleting local config drive /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/disk.config because it was imported into RBD.#033[00m
Oct  2 08:43:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:41 np0005466031 kernel: tap5e2a83a5-11: entered promiscuous mode
Oct  2 08:43:41 np0005466031 NetworkManager[44907]: <info>  [1759409021.0132] manager: (tap5e2a83a5-11): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Oct  2 08:43:41 np0005466031 nova_compute[235803]: 2025-10-02 12:43:41.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:41 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:41Z|00427|binding|INFO|Claiming lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe for this chassis.
Oct  2 08:43:41 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:41Z|00428|binding|INFO|5e2a83a5-11e1-45b1-82ce-5fee577f67fe: Claiming fa:16:3e:34:09:61 10.100.0.6
Oct  2 08:43:41 np0005466031 nova_compute[235803]: 2025-10-02 12:43:41.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.033 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:09:61 10.100.0.6'], port_security=['fa:16:3e:34:09:61 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '17266fac-3772-4df3-b4d7-c47d8292f6d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7035a43e-de6a-4b86-a3b2-d2e40c9755d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=5e2a83a5-11e1-45b1-82ce-5fee577f67fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:43:41 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:41Z|00429|binding|INFO|Setting lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe ovn-installed in OVS
Oct  2 08:43:41 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:41Z|00430|binding|INFO|Setting lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe up in Southbound
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.034 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 5e2a83a5-11e1-45b1-82ce-5fee577f67fe in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 bound to our chassis#033[00m
Oct  2 08:43:41 np0005466031 nova_compute[235803]: 2025-10-02 12:43:41.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.035 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3934261-ba19-494f-8d9f-23360c5b30b9#033[00m
Oct  2 08:43:41 np0005466031 nova_compute[235803]: 2025-10-02 12:43:41.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.047 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3f423694-dccd-490f-9997-946ae0223c49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.048 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3934261-b1 in ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.049 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3934261-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.050 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[99d260c8-0673-470d-8e1f-4ec70e18925d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.050 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b3552c-696f-4aab-b720-2fa948dbc407]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 systemd-machined[192227]: New machine qemu-48-instance-00000075.
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.061 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[73ab6035-d6ff-4c23-b00c-89f01980f658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 systemd[1]: Started Virtual Machine qemu-48-instance-00000075.
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.093 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[76edbdc3-f266-4cbe-a646-56bc890cff58]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 systemd-udevd[285013]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:43:41 np0005466031 NetworkManager[44907]: <info>  [1759409021.1073] device (tap5e2a83a5-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:43:41 np0005466031 NetworkManager[44907]: <info>  [1759409021.1082] device (tap5e2a83a5-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:43:41 np0005466031 podman[284976]: 2025-10-02 12:43:41.119563553 +0000 UTC m=+0.074249887 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.121 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[201e99bc-ede1-449e-9ce7-e1505f91b8f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 podman[284974]: 2025-10-02 12:43:41.128197962 +0000 UTC m=+0.082972579 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:43:41 np0005466031 NetworkManager[44907]: <info>  [1759409021.1321] manager: (tapf3934261-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/205)
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.132 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[08834c43-f224-488c-930a-0ad0057b2b09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.170 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[61b81cfd-1c3b-4b0f-84ab-2314625ba772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.173 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[5c406e0f-0d72-4e16-8f9a-b0e999faca7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 NetworkManager[44907]: <info>  [1759409021.1971] device (tapf3934261-b0): carrier: link connected
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.201 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f96d8db6-0845-43ed-9831-e9172e1debb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.216 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f611df30-3502-47d5-8ff2-667a57f9f8e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 135], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687678, 'reachable_time': 18588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285052, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.230 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[427a4084-b25f-4360-b6ae-1a295190fb70]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:9fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687678, 'tstamp': 687678}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285053, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.245 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[18f94cfa-c1b7-41b5-aeba-55f4a4d1dac5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 135], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687678, 'reachable_time': 18588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285054, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.272 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf5f77d-f0af-483b-b1c9-cfd844565d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 nova_compute[235803]: 2025-10-02 12:43:41.315 2 DEBUG nova.compute.manager [req-ab4499a8-43c5-4287-917b-caee5d6407f3 req-6124d3d8-1e98-48f2-813c-a3d765913b48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:41 np0005466031 nova_compute[235803]: 2025-10-02 12:43:41.315 2 DEBUG oslo_concurrency.lockutils [req-ab4499a8-43c5-4287-917b-caee5d6407f3 req-6124d3d8-1e98-48f2-813c-a3d765913b48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:41 np0005466031 nova_compute[235803]: 2025-10-02 12:43:41.315 2 DEBUG oslo_concurrency.lockutils [req-ab4499a8-43c5-4287-917b-caee5d6407f3 req-6124d3d8-1e98-48f2-813c-a3d765913b48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:41 np0005466031 nova_compute[235803]: 2025-10-02 12:43:41.315 2 DEBUG oslo_concurrency.lockutils [req-ab4499a8-43c5-4287-917b-caee5d6407f3 req-6124d3d8-1e98-48f2-813c-a3d765913b48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:41 np0005466031 nova_compute[235803]: 2025-10-02 12:43:41.316 2 DEBUG nova.compute.manager [req-ab4499a8-43c5-4287-917b-caee5d6407f3 req-6124d3d8-1e98-48f2-813c-a3d765913b48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Processing event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.335 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e0fbae27-f612-4ccd-af76-b5bf3586b39a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.338 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.338 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.338 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3934261-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:41 np0005466031 kernel: tapf3934261-b0: entered promiscuous mode
Oct  2 08:43:41 np0005466031 nova_compute[235803]: 2025-10-02 12:43:41.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:41 np0005466031 NetworkManager[44907]: <info>  [1759409021.3416] manager: (tapf3934261-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.345 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3934261-b0, col_values=(('external_ids', {'iface-id': '3890f7a6-6cc9-4237-a2a2-3c43818c1748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:41 np0005466031 nova_compute[235803]: 2025-10-02 12:43:41.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:41 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:41Z|00431|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.348 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.350 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0875683a-6018-4f5e-9ecf-124b8850194a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.351 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-f3934261-ba19-494f-8d9f-23360c5b30b9
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID f3934261-ba19-494f-8d9f-23360c5b30b9
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:43:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:43:41.353 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'env', 'PROCESS_TAG=haproxy-f3934261-ba19-494f-8d9f-23360c5b30b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3934261-ba19-494f-8d9f-23360c5b30b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:43:41 np0005466031 nova_compute[235803]: 2025-10-02 12:43:41.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:41 np0005466031 podman[285134]: 2025-10-02 12:43:41.747004677 +0000 UTC m=+0.062599712 container create 424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:43:41 np0005466031 systemd[1]: Started libpod-conmon-424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61.scope.
Oct  2 08:43:41 np0005466031 podman[285134]: 2025-10-02 12:43:41.716382256 +0000 UTC m=+0.031977321 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:43:41 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:43:41 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2338f59e8f01308306b4c518f081cc8977a6ad100bac0e48ef647fc641136931/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.832213) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409021832281, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 2202, "num_deletes": 263, "total_data_size": 4804207, "memory_usage": 4858272, "flush_reason": "Manual Compaction"}
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409021849919, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 3143689, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47021, "largest_seqno": 49218, "table_properties": {"data_size": 3134738, "index_size": 5509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19614, "raw_average_key_size": 20, "raw_value_size": 3116370, "raw_average_value_size": 3290, "num_data_blocks": 239, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408858, "oldest_key_time": 1759408858, "file_creation_time": 1759409021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 18098 microseconds, and 7998 cpu microseconds.
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:43:41 np0005466031 podman[285134]: 2025-10-02 12:43:41.851883504 +0000 UTC m=+0.167478549 container init 424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.850321) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 3143689 bytes OK
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.850438) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.852158) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.852174) EVENT_LOG_v1 {"time_micros": 1759409021852168, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.852192) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 4794266, prev total WAL file size 4794266, number of live WAL files 2.
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.855161) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353039' seq:72057594037927935, type:22 .. '6C6F676D0031373631' seq:0, type:0; will stop at (end)
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(3070KB)], [90(10MB)]
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409021855213, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 14586648, "oldest_snapshot_seqno": -1}
Oct  2 08:43:41 np0005466031 podman[285134]: 2025-10-02 12:43:41.859115432 +0000 UTC m=+0.174710467 container start 424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:43:41 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[285186]: [NOTICE]   (285196) : New worker (285199) forked
Oct  2 08:43:41 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[285186]: [NOTICE]   (285196) : Loading success.
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 7540 keys, 14437558 bytes, temperature: kUnknown
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409021968204, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 14437558, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14383060, "index_size": 34546, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18885, "raw_key_size": 193498, "raw_average_key_size": 25, "raw_value_size": 14244489, "raw_average_value_size": 1889, "num_data_blocks": 1379, "num_entries": 7540, "num_filter_entries": 7540, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759409021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.968535) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 14437558 bytes
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.970439) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.9 rd, 127.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.9 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 8080, records dropped: 540 output_compression: NoCompression
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.970462) EVENT_LOG_v1 {"time_micros": 1759409021970451, "job": 56, "event": "compaction_finished", "compaction_time_micros": 113121, "compaction_time_cpu_micros": 45631, "output_level": 6, "num_output_files": 1, "total_output_size": 14437558, "num_input_records": 8080, "num_output_records": 7540, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409021971417, "job": 56, "event": "table_file_deletion", "file_number": 92}
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409021974520, "job": 56, "event": "table_file_deletion", "file_number": 90}
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.855073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.974587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.974592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.974595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.974598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:41 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:41.974601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.171689) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022171752, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 250, "total_data_size": 23018, "memory_usage": 28768, "flush_reason": "Manual Compaction"}
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022174364, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 13846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49220, "largest_seqno": 49474, "table_properties": {"data_size": 12094, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409022, "oldest_key_time": 1759409022, "file_creation_time": 1759409022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 2714 microseconds, and 1018 cpu microseconds.
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.174407) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 13846 bytes OK
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.174427) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.175812) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.175830) EVENT_LOG_v1 {"time_micros": 1759409022175825, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.175848) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 21000, prev total WAL file size 21000, number of live WAL files 2.
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.176296) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353033' seq:72057594037927935, type:22 .. '6D6772737461740031373534' seq:0, type:0; will stop at (end)
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(13KB)], [93(13MB)]
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022176334, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 14451404, "oldest_snapshot_seqno": -1}
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.240 2 INFO nova.virt.libvirt.driver [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Deleting instance files /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_del
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.241 2 INFO nova.virt.libvirt.driver [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Deletion of /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_del complete
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 7291 keys, 10622885 bytes, temperature: kUnknown
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022248325, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 10622885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10574978, "index_size": 28595, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 188516, "raw_average_key_size": 25, "raw_value_size": 10445622, "raw_average_value_size": 1432, "num_data_blocks": 1130, "num_entries": 7291, "num_filter_entries": 7291, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759409022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.248687) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10622885 bytes
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.249813) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.4 rd, 147.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(1810.9) write-amplify(767.2) OK, records in: 7795, records dropped: 504 output_compression: NoCompression
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.249834) EVENT_LOG_v1 {"time_micros": 1759409022249824, "job": 58, "event": "compaction_finished", "compaction_time_micros": 72124, "compaction_time_cpu_micros": 39185, "output_level": 6, "num_output_files": 1, "total_output_size": 10622885, "num_input_records": 7795, "num_output_records": 7291, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022249963, "job": 58, "event": "table_file_deletion", "file_number": 95}
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022252770, "job": 58, "event": "table_file_deletion", "file_number": 93}
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.176188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.252891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.252900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.252903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.252905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:43:42.252907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:42.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.308 2 INFO nova.compute.manager [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Took 8.95 seconds to destroy the instance on the hypervisor.
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.310 2 DEBUG oslo.service.loopingcall [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.310 2 DEBUG nova.compute.manager [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.311 2 DEBUG nova.network.neutron [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.394 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409022.393187, 17266fac-3772-4df3-b4d7-c47d8292f6d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.395 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] VM Started (Lifecycle Event)
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.397 2 DEBUG nova.compute.manager [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.402 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.408 2 INFO nova.virt.libvirt.driver [-] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Instance spawned successfully.
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.409 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.415 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.420 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.441 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.442 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.443 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.443 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.444 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.445 2 DEBUG nova.virt.libvirt.driver [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.683 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.684 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409022.393474, 17266fac-3772-4df3-b4d7-c47d8292f6d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.684 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] VM Paused (Lifecycle Event)
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.713 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:43:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:42.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.720 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409022.4013462, 17266fac-3772-4df3-b4d7-c47d8292f6d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.721 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] VM Resumed (Lifecycle Event)
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.746 2 INFO nova.compute.manager [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Took 11.09 seconds to spawn the instance on the hypervisor.
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.747 2 DEBUG nova.compute.manager [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.756 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.759 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.793 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.858 2 INFO nova.compute.manager [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Took 12.23 seconds to build instance.
Oct  2 08:43:42 np0005466031 nova_compute[235803]: 2025-10-02 12:43:42.885 2 DEBUG oslo_concurrency.lockutils [None req-68247e04-87d7-4b75-b1d9-695c15dec54d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:43:43 np0005466031 nova_compute[235803]: 2025-10-02 12:43:43.471 2 DEBUG nova.compute.manager [req-77c6e215-adba-43cf-9763-f0c388ba9e54 req-348ef75f-c37f-4d15-ae3f-88398a5a5280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:43:43 np0005466031 nova_compute[235803]: 2025-10-02 12:43:43.471 2 DEBUG oslo_concurrency.lockutils [req-77c6e215-adba-43cf-9763-f0c388ba9e54 req-348ef75f-c37f-4d15-ae3f-88398a5a5280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:43:43 np0005466031 nova_compute[235803]: 2025-10-02 12:43:43.472 2 DEBUG oslo_concurrency.lockutils [req-77c6e215-adba-43cf-9763-f0c388ba9e54 req-348ef75f-c37f-4d15-ae3f-88398a5a5280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:43 np0005466031 nova_compute[235803]: 2025-10-02 12:43:43.472 2 DEBUG oslo_concurrency.lockutils [req-77c6e215-adba-43cf-9763-f0c388ba9e54 req-348ef75f-c37f-4d15-ae3f-88398a5a5280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:43 np0005466031 nova_compute[235803]: 2025-10-02 12:43:43.472 2 DEBUG nova.compute.manager [req-77c6e215-adba-43cf-9763-f0c388ba9e54 req-348ef75f-c37f-4d15-ae3f-88398a5a5280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] No waiting events found dispatching network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:43:43 np0005466031 nova_compute[235803]: 2025-10-02 12:43:43.472 2 WARNING nova.compute.manager [req-77c6e215-adba-43cf-9763-f0c388ba9e54 req-348ef75f-c37f-4d15-ae3f-88398a5a5280 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received unexpected event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe for instance with vm_state active and task_state None.#033[00m
Oct  2 08:43:44 np0005466031 nova_compute[235803]: 2025-10-02 12:43:44.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:44.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:44 np0005466031 nova_compute[235803]: 2025-10-02 12:43:44.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:44.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:45 np0005466031 nova_compute[235803]: 2025-10-02 12:43:45.207 2 DEBUG nova.network.neutron [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:43:45 np0005466031 nova_compute[235803]: 2025-10-02 12:43:45.259 2 INFO nova.compute.manager [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Took 2.95 seconds to deallocate network for instance.
Oct  2 08:43:45 np0005466031 nova_compute[235803]: 2025-10-02 12:43:45.311 2 DEBUG nova.compute.manager [req-766730f9-117c-43af-895f-9a8cd0d63294 req-c2a89f69-7063-4ae1-9efa-312bffe5cd5a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-deleted-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:43:45 np0005466031 nova_compute[235803]: 2025-10-02 12:43:45.323 2 DEBUG oslo_concurrency.lockutils [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:43:45 np0005466031 nova_compute[235803]: 2025-10-02 12:43:45.324 2 DEBUG oslo_concurrency.lockutils [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:43:45 np0005466031 nova_compute[235803]: 2025-10-02 12:43:45.414 2 DEBUG oslo_concurrency.processutils [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:43:45 np0005466031 nova_compute[235803]: 2025-10-02 12:43:45.860 2 DEBUG oslo_concurrency.processutils [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:43:45 np0005466031 nova_compute[235803]: 2025-10-02 12:43:45.866 2 DEBUG nova.compute.provider_tree [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:43:45 np0005466031 nova_compute[235803]: 2025-10-02 12:43:45.883 2 DEBUG nova.scheduler.client.report [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:43:45 np0005466031 nova_compute[235803]: 2025-10-02 12:43:45.905 2 DEBUG oslo_concurrency.lockutils [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:43:45 np0005466031 nova_compute[235803]: 2025-10-02 12:43:45.930 2 INFO nova.scheduler.client.report [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Deleted allocations for instance 1f9101c6-f4d8-46c7-8884-386f9f08e6fb
Oct  2 08:43:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:46 np0005466031 nova_compute[235803]: 2025-10-02 12:43:46.001 2 DEBUG oslo_concurrency.lockutils [None req-0f9697bc-3428-479a-a459-9470c250867d 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:43:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:46.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:46.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:48.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:48.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:48 np0005466031 nova_compute[235803]: 2025-10-02 12:43:48.805 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409013.8037293, 1f9101c6-f4d8-46c7-8884-386f9f08e6fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:43:48 np0005466031 nova_compute[235803]: 2025-10-02 12:43:48.806 2 INFO nova.compute.manager [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] VM Stopped (Lifecycle Event)
Oct  2 08:43:48 np0005466031 nova_compute[235803]: 2025-10-02 12:43:48.826 2 DEBUG nova.compute.manager [None req-3e66cbf5-e8ae-494d-bacd-79dd6f44a912 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:43:49 np0005466031 nova_compute[235803]: 2025-10-02 12:43:49.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:49 np0005466031 nova_compute[235803]: 2025-10-02 12:43:49.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:50.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:50 np0005466031 nova_compute[235803]: 2025-10-02 12:43:50.593 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "2c258788-9569-41b0-9163-e8ea9985b91c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:43:50 np0005466031 nova_compute[235803]: 2025-10-02 12:43:50.593 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:43:50 np0005466031 nova_compute[235803]: 2025-10-02 12:43:50.607 2 DEBUG nova.compute.manager [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:43:50 np0005466031 nova_compute[235803]: 2025-10-02 12:43:50.674 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:43:50 np0005466031 nova_compute[235803]: 2025-10-02 12:43:50.675 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:43:50 np0005466031 nova_compute[235803]: 2025-10-02 12:43:50.680 2 DEBUG nova.virt.hardware [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:43:50 np0005466031 nova_compute[235803]: 2025-10-02 12:43:50.681 2 INFO nova.compute.claims [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Claim successful on node compute-2.ctlplane.example.com
Oct  2 08:43:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:50.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:50 np0005466031 nova_compute[235803]: 2025-10-02 12:43:50.893 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:43:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:43:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3030946270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.330 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.336 2 DEBUG nova.compute.provider_tree [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.352 2 DEBUG nova.scheduler.client.report [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.382 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.383 2 DEBUG nova.compute.manager [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.476 2 DEBUG nova.compute.manager [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.477 2 DEBUG nova.network.neutron [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.497 2 INFO nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.518 2 DEBUG nova.compute.manager [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.616 2 DEBUG nova.compute.manager [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.617 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.618 2 INFO nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Creating image(s)
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.645 2 DEBUG nova.storage.rbd_utils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2c258788-9569-41b0-9163-e8ea9985b91c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.671 2 DEBUG nova.storage.rbd_utils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2c258788-9569-41b0-9163-e8ea9985b91c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.697 2 DEBUG nova.storage.rbd_utils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2c258788-9569-41b0-9163-e8ea9985b91c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.703 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.741 2 DEBUG nova.policy [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ae7bcf1e6a3b4132a7068b0f863ca79c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.775 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.776 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.777 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.777 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.802 2 DEBUG nova.storage.rbd_utils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2c258788-9569-41b0-9163-e8ea9985b91c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:43:51 np0005466031 nova_compute[235803]: 2025-10-02 12:43:51.806 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2c258788-9569-41b0-9163-e8ea9985b91c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:43:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:52.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:52.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:53 np0005466031 nova_compute[235803]: 2025-10-02 12:43:53.293 2 DEBUG nova.network.neutron [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Successfully created port: b7c94b5d-e8b3-487f-ad24-795aa8a72b5e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:43:54 np0005466031 nova_compute[235803]: 2025-10-02 12:43:54.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:43:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:54.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:43:54 np0005466031 nova_compute[235803]: 2025-10-02 12:43:54.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:54.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:54 np0005466031 nova_compute[235803]: 2025-10-02 12:43:54.959 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2c258788-9569-41b0-9163-e8ea9985b91c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:43:55 np0005466031 nova_compute[235803]: 2025-10-02 12:43:55.240 2 DEBUG nova.storage.rbd_utils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] resizing rbd image 2c258788-9569-41b0-9163-e8ea9985b91c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:43:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:43:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:43:55 np0005466031 nova_compute[235803]: 2025-10-02 12:43:55.744 2 DEBUG nova.network.neutron [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Successfully updated port: b7c94b5d-e8b3-487f-ad24-795aa8a72b5e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:43:55 np0005466031 nova_compute[235803]: 2025-10-02 12:43:55.804 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-2c258788-9569-41b0-9163-e8ea9985b91c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:55 np0005466031 nova_compute[235803]: 2025-10-02 12:43:55.805 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-2c258788-9569-41b0-9163-e8ea9985b91c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:55 np0005466031 nova_compute[235803]: 2025-10-02 12:43:55.805 2 DEBUG nova.network.neutron [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:43:55 np0005466031 nova_compute[235803]: 2025-10-02 12:43:55.871 2 DEBUG nova.compute.manager [req-8e9d27be-eaa9-4918-abc2-ca9082bd627a req-24392943-1884-4585-a55f-972f0e15b4de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Received event network-changed-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:55 np0005466031 nova_compute[235803]: 2025-10-02 12:43:55.872 2 DEBUG nova.compute.manager [req-8e9d27be-eaa9-4918-abc2-ca9082bd627a req-24392943-1884-4585-a55f-972f0e15b4de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Refreshing instance network info cache due to event network-changed-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:43:55 np0005466031 nova_compute[235803]: 2025-10-02 12:43:55.872 2 DEBUG oslo_concurrency.lockutils [req-8e9d27be-eaa9-4918-abc2-ca9082bd627a req-24392943-1884-4585-a55f-972f0e15b4de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2c258788-9569-41b0-9163-e8ea9985b91c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:55 np0005466031 nova_compute[235803]: 2025-10-02 12:43:55.966 2 DEBUG nova.network.neutron [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:43:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:56.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:56 np0005466031 nova_compute[235803]: 2025-10-02 12:43:56.632 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:56.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:56 np0005466031 nova_compute[235803]: 2025-10-02 12:43:56.850 2 DEBUG nova.objects.instance [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'migration_context' on Instance uuid 2c258788-9569-41b0-9163-e8ea9985b91c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:56 np0005466031 nova_compute[235803]: 2025-10-02 12:43:56.865 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:43:56 np0005466031 nova_compute[235803]: 2025-10-02 12:43:56.866 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Ensure instance console log exists: /var/lib/nova/instances/2c258788-9569-41b0-9163-e8ea9985b91c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:43:56 np0005466031 nova_compute[235803]: 2025-10-02 12:43:56.866 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:56 np0005466031 nova_compute[235803]: 2025-10-02 12:43:56.867 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:56 np0005466031 nova_compute[235803]: 2025-10-02 12:43:56.867 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:58.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:43:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:58.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:58 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:58Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:34:09:61 10.100.0.6
Oct  2 08:43:58 np0005466031 ovn_controller[132413]: 2025-10-02T12:43:58Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:34:09:61 10.100.0.6
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.861 2 DEBUG nova.network.neutron [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Updating instance_info_cache with network_info: [{"id": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "address": "fa:16:3e:2d:11:21", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7c94b5d-e8", "ovs_interfaceid": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.885 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-2c258788-9569-41b0-9163-e8ea9985b91c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.886 2 DEBUG nova.compute.manager [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Instance network_info: |[{"id": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "address": "fa:16:3e:2d:11:21", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7c94b5d-e8", "ovs_interfaceid": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.886 2 DEBUG oslo_concurrency.lockutils [req-8e9d27be-eaa9-4918-abc2-ca9082bd627a req-24392943-1884-4585-a55f-972f0e15b4de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2c258788-9569-41b0-9163-e8ea9985b91c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.886 2 DEBUG nova.network.neutron [req-8e9d27be-eaa9-4918-abc2-ca9082bd627a req-24392943-1884-4585-a55f-972f0e15b4de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Refreshing network info cache for port b7c94b5d-e8b3-487f-ad24-795aa8a72b5e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.890 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Start _get_guest_xml network_info=[{"id": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "address": "fa:16:3e:2d:11:21", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7c94b5d-e8", "ovs_interfaceid": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.895 2 WARNING nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.899 2 DEBUG nova.virt.libvirt.host [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.900 2 DEBUG nova.virt.libvirt.host [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.903 2 DEBUG nova.virt.libvirt.host [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.903 2 DEBUG nova.virt.libvirt.host [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.904 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.905 2 DEBUG nova.virt.hardware [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.905 2 DEBUG nova.virt.hardware [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.905 2 DEBUG nova.virt.hardware [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.905 2 DEBUG nova.virt.hardware [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.906 2 DEBUG nova.virt.hardware [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.906 2 DEBUG nova.virt.hardware [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.906 2 DEBUG nova.virt.hardware [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.906 2 DEBUG nova.virt.hardware [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.906 2 DEBUG nova.virt.hardware [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.907 2 DEBUG nova.virt.hardware [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.907 2 DEBUG nova.virt.hardware [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:43:58 np0005466031 nova_compute[235803]: 2025-10-02 12:43:58.909 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:59 np0005466031 nova_compute[235803]: 2025-10-02 12:43:59.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:59 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3154617667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:59 np0005466031 nova_compute[235803]: 2025-10-02 12:43:59.354 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:59 np0005466031 nova_compute[235803]: 2025-10-02 12:43:59.380 2 DEBUG nova.storage.rbd_utils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2c258788-9569-41b0-9163-e8ea9985b91c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:59 np0005466031 nova_compute[235803]: 2025-10-02 12:43:59.385 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:59 np0005466031 nova_compute[235803]: 2025-10-02 12:43:59.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:59 np0005466031 nova_compute[235803]: 2025-10-02 12:43:59.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:59 np0005466031 nova_compute[235803]: 2025-10-02 12:43:59.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:43:59 np0005466031 nova_compute[235803]: 2025-10-02 12:43:59.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:43:59 np0005466031 nova_compute[235803]: 2025-10-02 12:43:59.657 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:43:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:59 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3350584005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:59 np0005466031 nova_compute[235803]: 2025-10-02 12:43:59.868 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:59 np0005466031 nova_compute[235803]: 2025-10-02 12:43:59.869 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:59 np0005466031 nova_compute[235803]: 2025-10-02 12:43:59.869 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:43:59 np0005466031 nova_compute[235803]: 2025-10-02 12:43:59.869 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.036 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.037 2 DEBUG nova.virt.libvirt.vif [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1867337880',display_name='tempest-DeleteServersTestJSON-server-1867337880',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1867337880',id=120,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-gwnvko3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1
740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:51Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=2c258788-9569-41b0-9163-e8ea9985b91c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "address": "fa:16:3e:2d:11:21", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7c94b5d-e8", "ovs_interfaceid": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.038 2 DEBUG nova.network.os_vif_util [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "address": "fa:16:3e:2d:11:21", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7c94b5d-e8", "ovs_interfaceid": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.040 2 DEBUG nova.network.os_vif_util [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:21,bridge_name='br-int',has_traffic_filtering=True,id=b7c94b5d-e8b3-487f-ad24-795aa8a72b5e,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7c94b5d-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.041 2 DEBUG nova.objects.instance [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c258788-9569-41b0-9163-e8ea9985b91c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.054 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  <uuid>2c258788-9569-41b0-9163-e8ea9985b91c</uuid>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  <name>instance-00000078</name>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <nova:name>tempest-DeleteServersTestJSON-server-1867337880</nova:name>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:43:58</nova:creationTime>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <nova:user uuid="ae7bcf1e6a3b4132a7068b0f863ca79c">tempest-DeleteServersTestJSON-1740298646-project-member</nova:user>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <nova:project uuid="58b2fa4ee0cd4b97be1b303c203be14f">tempest-DeleteServersTestJSON-1740298646</nova:project>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <nova:port uuid="b7c94b5d-e8b3-487f-ad24-795aa8a72b5e">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <entry name="serial">2c258788-9569-41b0-9163-e8ea9985b91c</entry>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <entry name="uuid">2c258788-9569-41b0-9163-e8ea9985b91c</entry>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2c258788-9569-41b0-9163-e8ea9985b91c_disk">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2c258788-9569-41b0-9163-e8ea9985b91c_disk.config">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:2d:11:21"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <target dev="tapb7c94b5d-e8"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/2c258788-9569-41b0-9163-e8ea9985b91c/console.log" append="off"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:44:00 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:44:00 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:44:00 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:44:00 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.055 2 DEBUG nova.compute.manager [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Preparing to wait for external event network-vif-plugged-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.055 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.056 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.056 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.056 2 DEBUG nova.virt.libvirt.vif [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1867337880',display_name='tempest-DeleteServersTestJSON-server-1867337880',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1867337880',id=120,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-gwnvko3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:51Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=2c258788-9569-41b0-9163-e8ea9985b91c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "address": "fa:16:3e:2d:11:21", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7c94b5d-e8", "ovs_interfaceid": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.057 2 DEBUG nova.network.os_vif_util [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "address": "fa:16:3e:2d:11:21", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7c94b5d-e8", "ovs_interfaceid": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.057 2 DEBUG nova.network.os_vif_util [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:21,bridge_name='br-int',has_traffic_filtering=True,id=b7c94b5d-e8b3-487f-ad24-795aa8a72b5e,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7c94b5d-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.057 2 DEBUG os_vif [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:21,bridge_name='br-int',has_traffic_filtering=True,id=b7c94b5d-e8b3-487f-ad24-795aa8a72b5e,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7c94b5d-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.061 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7c94b5d-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.061 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7c94b5d-e8, col_values=(('external_ids', {'iface-id': 'b7c94b5d-e8b3-487f-ad24-795aa8a72b5e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:11:21', 'vm-uuid': '2c258788-9569-41b0-9163-e8ea9985b91c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:00 np0005466031 NetworkManager[44907]: <info>  [1759409040.0630] manager: (tapb7c94b5d-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.072 2 INFO os_vif [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:21,bridge_name='br-int',has_traffic_filtering=True,id=b7c94b5d-e8b3-487f-ad24-795aa8a72b5e,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7c94b5d-e8')#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.134 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.135 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.135 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No VIF found with MAC fa:16:3e:2d:11:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.136 2 INFO nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Using config drive#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.159 2 DEBUG nova.storage.rbd_utils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2c258788-9569-41b0-9163-e8ea9985b91c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.211 2 DEBUG nova.network.neutron [req-8e9d27be-eaa9-4918-abc2-ca9082bd627a req-24392943-1884-4585-a55f-972f0e15b4de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Updated VIF entry in instance network info cache for port b7c94b5d-e8b3-487f-ad24-795aa8a72b5e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.212 2 DEBUG nova.network.neutron [req-8e9d27be-eaa9-4918-abc2-ca9082bd627a req-24392943-1884-4585-a55f-972f0e15b4de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Updating instance_info_cache with network_info: [{"id": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "address": "fa:16:3e:2d:11:21", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7c94b5d-e8", "ovs_interfaceid": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.284 2 DEBUG oslo_concurrency.lockutils [req-8e9d27be-eaa9-4918-abc2-ca9082bd627a req-24392943-1884-4585-a55f-972f0e15b4de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2c258788-9569-41b0-9163-e8ea9985b91c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:44:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:00.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.617 2 INFO nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Creating config drive at /var/lib/nova/instances/2c258788-9569-41b0-9163-e8ea9985b91c/disk.config#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.622 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c258788-9569-41b0-9163-e8ea9985b91c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2x_4ggmx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:00.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.754 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c258788-9569-41b0-9163-e8ea9985b91c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2x_4ggmx" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.786 2 DEBUG nova.storage.rbd_utils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2c258788-9569-41b0-9163-e8ea9985b91c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:44:00 np0005466031 nova_compute[235803]: 2025-10-02 12:44:00.791 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c258788-9569-41b0-9163-e8ea9985b91c/disk.config 2c258788-9569-41b0-9163-e8ea9985b91c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.010 2 DEBUG oslo_concurrency.processutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c258788-9569-41b0-9163-e8ea9985b91c/disk.config 2c258788-9569-41b0-9163-e8ea9985b91c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.010 2 INFO nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Deleting local config drive /var/lib/nova/instances/2c258788-9569-41b0-9163-e8ea9985b91c/disk.config because it was imported into RBD.#033[00m
Oct  2 08:44:01 np0005466031 kernel: tapb7c94b5d-e8: entered promiscuous mode
Oct  2 08:44:01 np0005466031 NetworkManager[44907]: <info>  [1759409041.0534] manager: (tapb7c94b5d-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:01Z|00432|binding|INFO|Claiming lport b7c94b5d-e8b3-487f-ad24-795aa8a72b5e for this chassis.
Oct  2 08:44:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:01Z|00433|binding|INFO|b7c94b5d-e8b3-487f-ad24-795aa8a72b5e: Claiming fa:16:3e:2d:11:21 10.100.0.4
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.060 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:11:21 10.100.0.4'], port_security=['fa:16:3e:2d:11:21 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2c258788-9569-41b0-9163-e8ea9985b91c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=b7c94b5d-e8b3-487f-ad24-795aa8a72b5e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.061 141898 INFO neutron.agent.ovn.metadata.agent [-] Port b7c94b5d-e8b3-487f-ad24-795aa8a72b5e in datapath fd4432c5-b907-49af-a666-2128c4085e24 bound to our chassis#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.063 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd4432c5-b907-49af-a666-2128c4085e24#033[00m
Oct  2 08:44:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:01Z|00434|binding|INFO|Setting lport b7c94b5d-e8b3-487f-ad24-795aa8a72b5e ovn-installed in OVS
Oct  2 08:44:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:01Z|00435|binding|INFO|Setting lport b7c94b5d-e8b3-487f-ad24-795aa8a72b5e up in Southbound
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.074 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9b685176-27e6-491f-952a-78f3f448e8d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.076 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd4432c5-b1 in ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.077 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd4432c5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.077 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[68fceec8-d75d-4016-85ab-aeff1103d826]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.079 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5775a777-add9-4140-aa8d-d7857d1ee6f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 systemd-machined[192227]: New machine qemu-49-instance-00000078.
Oct  2 08:44:01 np0005466031 systemd-udevd[285613]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.090 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[0c49e94e-1cdd-4401-8125-40ee8fc9d5e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 NetworkManager[44907]: <info>  [1759409041.0958] device (tapb7c94b5d-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:44:01 np0005466031 NetworkManager[44907]: <info>  [1759409041.0970] device (tapb7c94b5d-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:44:01 np0005466031 systemd[1]: Started Virtual Machine qemu-49-instance-00000078.
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.116 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[68064d0e-ccb3-4b62-b211-92124bac48bc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.147 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[02203c18-73f1-49f4-8395-efe836e46853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.152 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[714c31b2-eddd-41ef-b969-639f522d68ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 NetworkManager[44907]: <info>  [1759409041.1535] manager: (tapfd4432c5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.185 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d1503bf9-63af-4eb8-b9be-0231786f6dea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.188 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[31401683-9973-40d7-9579-7abc650e1340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 NetworkManager[44907]: <info>  [1759409041.2094] device (tapfd4432c5-b0): carrier: link connected
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.214 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[cd45c186-20d6-4ffb-8924-c9ecca22e378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.230 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ca39fa94-3d2f-40ac-a1d4-d500b22cb25d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689680, 'reachable_time': 32582, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285645, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.246 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d6decb-2537-4773-be24-722c99e017c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:b3ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689680, 'tstamp': 689680}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285646, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.260 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[712bec85-aa5e-4332-a0e7-e025cca7c75b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689680, 'reachable_time': 32582, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285647, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.288 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ff26332c-ac8d-479f-aaf0-4abf5c41c2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.342 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ae191bd2-7903-427d-8aca-65edc3d61c71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.343 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.344 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.344 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd4432c5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:01 np0005466031 NetworkManager[44907]: <info>  [1759409041.3467] manager: (tapfd4432c5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Oct  2 08:44:01 np0005466031 kernel: tapfd4432c5-b0: entered promiscuous mode
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.352 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd4432c5-b0, col_values=(('external_ids', {'iface-id': 'd2e0cd82-7c1f-4194-aaaf-514fe24ec2a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:01Z|00436|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.359 2 DEBUG nova.compute.manager [req-5c5799c5-1e92-4dbe-b8bd-fc622f943382 req-1ccf020d-72ce-47e1-a817-97b964e42b08 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Received event network-vif-plugged-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.359 2 DEBUG oslo_concurrency.lockutils [req-5c5799c5-1e92-4dbe-b8bd-fc622f943382 req-1ccf020d-72ce-47e1-a817-97b964e42b08 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.360 2 DEBUG oslo_concurrency.lockutils [req-5c5799c5-1e92-4dbe-b8bd-fc622f943382 req-1ccf020d-72ce-47e1-a817-97b964e42b08 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.360 2 DEBUG oslo_concurrency.lockutils [req-5c5799c5-1e92-4dbe-b8bd-fc622f943382 req-1ccf020d-72ce-47e1-a817-97b964e42b08 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.360 2 DEBUG nova.compute.manager [req-5c5799c5-1e92-4dbe-b8bd-fc622f943382 req-1ccf020d-72ce-47e1-a817-97b964e42b08 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Processing event network-vif-plugged-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:44:01 np0005466031 nova_compute[235803]: 2025-10-02 12:44:01.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.377 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.378 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a2678f29-1f45-4453-98b4-e7ca064aed62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.379 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-fd4432c5-b907-49af-a666-2128c4085e24
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID fd4432c5-b907-49af-a666-2128c4085e24
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:44:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:01.379 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'env', 'PROCESS_TAG=haproxy-fd4432c5-b907-49af-a666-2128c4085e24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd4432c5-b907-49af-a666-2128c4085e24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:44:01 np0005466031 podman[285771]: 2025-10-02 12:44:01.733120493 +0000 UTC m=+0.024899378 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:44:01 np0005466031 podman[285771]: 2025-10-02 12:44:01.876479468 +0000 UTC m=+0.168258333 container create 3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 08:44:01 np0005466031 systemd[1]: Started libpod-conmon-3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa.scope.
Oct  2 08:44:01 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:44:01 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e8c0908ac09b468a82f5043ae4b96563fbce9290a36ad99c49ee0714cffc90/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:44:01 np0005466031 podman[285771]: 2025-10-02 12:44:01.96137286 +0000 UTC m=+0.253151745 container init 3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:44:01 np0005466031 podman[285771]: 2025-10-02 12:44:01.968575858 +0000 UTC m=+0.260354723 container start 3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:44:01 np0005466031 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[285787]: [NOTICE]   (285791) : New worker (285793) forked
Oct  2 08:44:01 np0005466031 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[285787]: [NOTICE]   (285791) : Loading success.
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.115 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409042.115491, 2c258788-9569-41b0-9163-e8ea9985b91c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.116 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] VM Started (Lifecycle Event)#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.120 2 DEBUG nova.compute.manager [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.123 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.127 2 INFO nova.virt.libvirt.driver [-] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Instance spawned successfully.#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.127 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.152 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.157 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.157 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.158 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.158 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.159 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.160 2 DEBUG nova.virt.libvirt.driver [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.164 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.198 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.198 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409042.1193373, 2c258788-9569-41b0-9163-e8ea9985b91c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.198 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.220 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.224 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409042.1223416, 2c258788-9569-41b0-9163-e8ea9985b91c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.224 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.233 2 INFO nova.compute.manager [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Took 10.62 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.234 2 DEBUG nova.compute.manager [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.263 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.266 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:44:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:02.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.305 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.318 2 INFO nova.compute.manager [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Took 11.66 seconds to build instance.#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.353 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Updating instance_info_cache with network_info: [{"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.360 2 DEBUG oslo_concurrency.lockutils [None req-7dbe38cc-b702-4030-a16a-efc9103ce05e ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.385 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.385 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.385 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.386 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.386 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.386 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.406 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.406 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.406 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.406 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.407 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:02.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:44:02 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/771119567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.869 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.946 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.947 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.950 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:44:02 np0005466031 nova_compute[235803]: 2025-10-02 12:44:02.951 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.135 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.136 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4072MB free_disk=20.781471252441406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.137 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.137 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.223 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 17266fac-3772-4df3-b4d7-c47d8292f6d6 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.224 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 2c258788-9569-41b0-9163-e8ea9985b91c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.224 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.224 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.267 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.451 2 DEBUG nova.compute.manager [req-a5a97ccd-9713-406d-b108-38aa3e5eb600 req-fa12c6d6-6500-44c4-acca-46056ffcc482 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Received event network-vif-plugged-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.452 2 DEBUG oslo_concurrency.lockutils [req-a5a97ccd-9713-406d-b108-38aa3e5eb600 req-fa12c6d6-6500-44c4-acca-46056ffcc482 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.452 2 DEBUG oslo_concurrency.lockutils [req-a5a97ccd-9713-406d-b108-38aa3e5eb600 req-fa12c6d6-6500-44c4-acca-46056ffcc482 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.452 2 DEBUG oslo_concurrency.lockutils [req-a5a97ccd-9713-406d-b108-38aa3e5eb600 req-fa12c6d6-6500-44c4-acca-46056ffcc482 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.452 2 DEBUG nova.compute.manager [req-a5a97ccd-9713-406d-b108-38aa3e5eb600 req-fa12c6d6-6500-44c4-acca-46056ffcc482 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] No waiting events found dispatching network-vif-plugged-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.453 2 WARNING nova.compute.manager [req-a5a97ccd-9713-406d-b108-38aa3e5eb600 req-fa12c6d6-6500-44c4-acca-46056ffcc482 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Received unexpected event network-vif-plugged-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e for instance with vm_state active and task_state None.#033[00m
Oct  2 08:44:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:44:03 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2152615919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.783 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.788 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.812 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.846 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.847 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.992 2 DEBUG oslo_concurrency.lockutils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "2c258788-9569-41b0-9163-e8ea9985b91c" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.993 2 DEBUG oslo_concurrency.lockutils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:03 np0005466031 nova_compute[235803]: 2025-10-02 12:44:03.994 2 INFO nova.compute.manager [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Shelving#033[00m
Oct  2 08:44:04 np0005466031 nova_compute[235803]: 2025-10-02 12:44:04.014 2 DEBUG nova.virt.libvirt.driver [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:44:04 np0005466031 nova_compute[235803]: 2025-10-02 12:44:04.097 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:04.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:04 np0005466031 nova_compute[235803]: 2025-10-02 12:44:04.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:04 np0005466031 podman[285848]: 2025-10-02 12:44:04.624300083 +0000 UTC m=+0.052973086 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 08:44:04 np0005466031 nova_compute[235803]: 2025-10-02 12:44:04.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:04 np0005466031 nova_compute[235803]: 2025-10-02 12:44:04.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:44:04 np0005466031 podman[285849]: 2025-10-02 12:44:04.677858604 +0000 UTC m=+0.103658864 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:44:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:44:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:04.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:44:05 np0005466031 nova_compute[235803]: 2025-10-02 12:44:05.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:06.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:06.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:06.927441) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409046927504, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 530, "num_deletes": 251, "total_data_size": 755363, "memory_usage": 766504, "flush_reason": "Manual Compaction"}
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409046955786, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 498458, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49479, "largest_seqno": 50004, "table_properties": {"data_size": 495592, "index_size": 838, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6983, "raw_average_key_size": 19, "raw_value_size": 489860, "raw_average_value_size": 1356, "num_data_blocks": 36, "num_entries": 361, "num_filter_entries": 361, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409022, "oldest_key_time": 1759409022, "file_creation_time": 1759409046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 28386 microseconds, and 2166 cpu microseconds.
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:06.955835) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 498458 bytes OK
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:06.955854) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:06.979751) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:06.979800) EVENT_LOG_v1 {"time_micros": 1759409046979789, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:06.979823) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 752206, prev total WAL file size 752206, number of live WAL files 2.
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:06.980387) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(486KB)], [96(10MB)]
Oct  2 08:44:06 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409046980424, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 11121343, "oldest_snapshot_seqno": -1}
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 7139 keys, 9260394 bytes, temperature: kUnknown
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409047079181, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 9260394, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9214748, "index_size": 26718, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 186112, "raw_average_key_size": 26, "raw_value_size": 9089243, "raw_average_value_size": 1273, "num_data_blocks": 1045, "num_entries": 7139, "num_filter_entries": 7139, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759409046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:07.079418) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9260394 bytes
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:07.082279) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.5 rd, 93.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.1 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(40.9) write-amplify(18.6) OK, records in: 7652, records dropped: 513 output_compression: NoCompression
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:07.082296) EVENT_LOG_v1 {"time_micros": 1759409047082288, "job": 60, "event": "compaction_finished", "compaction_time_micros": 98833, "compaction_time_cpu_micros": 25418, "output_level": 6, "num_output_files": 1, "total_output_size": 9260394, "num_input_records": 7652, "num_output_records": 7139, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409047082541, "job": 60, "event": "table_file_deletion", "file_number": 98}
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409047084429, "job": 60, "event": "table_file_deletion", "file_number": 96}
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:06.980299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:07.084475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:07.084479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:07.084481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:07.084482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:07 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:44:07.084484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:07.556 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:44:07 np0005466031 nova_compute[235803]: 2025-10-02 12:44:07.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:07.557 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:44:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:07.558 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:08.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:44:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:08.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:44:09 np0005466031 nova_compute[235803]: 2025-10-02 12:44:09.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:10 np0005466031 nova_compute[235803]: 2025-10-02 12:44:10.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:10.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:10.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:11 np0005466031 podman[285894]: 2025-10-02 12:44:11.634035899 +0000 UTC m=+0.063261161 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:44:11 np0005466031 podman[285895]: 2025-10-02 12:44:11.645792597 +0000 UTC m=+0.069871851 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:44:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:12.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:12.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:14 np0005466031 nova_compute[235803]: 2025-10-02 12:44:14.062 2 DEBUG nova.virt.libvirt.driver [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:44:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:14.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:14 np0005466031 nova_compute[235803]: 2025-10-02 12:44:14.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:44:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:14.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:44:15 np0005466031 nova_compute[235803]: 2025-10-02 12:44:15.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:15 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:15Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:11:21 10.100.0.4
Oct  2 08:44:15 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:15Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:11:21 10.100.0.4
Oct  2 08:44:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:16.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:44:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:16.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:44:17 np0005466031 nova_compute[235803]: 2025-10-02 12:44:17.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:44:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:18.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:44:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:18.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:19 np0005466031 nova_compute[235803]: 2025-10-02 12:44:19.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:20 np0005466031 nova_compute[235803]: 2025-10-02 12:44:20.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:20.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:20.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:44:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:22.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:44:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:44:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:22.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:44:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:24.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:24 np0005466031 nova_compute[235803]: 2025-10-02 12:44:24.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:24.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:25 np0005466031 nova_compute[235803]: 2025-10-02 12:44:25.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:25 np0005466031 nova_compute[235803]: 2025-10-02 12:44:25.104 2 DEBUG nova.virt.libvirt.driver [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:44:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:25.852 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:25.853 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:25.854 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:26.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:26.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:28.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:28.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:29 np0005466031 nova_compute[235803]: 2025-10-02 12:44:29.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:29 np0005466031 kernel: tapb7c94b5d-e8 (unregistering): left promiscuous mode
Oct  2 08:44:29 np0005466031 NetworkManager[44907]: <info>  [1759409069.5799] device (tapb7c94b5d-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:44:29 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:29Z|00437|binding|INFO|Releasing lport b7c94b5d-e8b3-487f-ad24-795aa8a72b5e from this chassis (sb_readonly=0)
Oct  2 08:44:29 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:29Z|00438|binding|INFO|Setting lport b7c94b5d-e8b3-487f-ad24-795aa8a72b5e down in Southbound
Oct  2 08:44:29 np0005466031 nova_compute[235803]: 2025-10-02 12:44:29.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:29 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:29Z|00439|binding|INFO|Removing iface tapb7c94b5d-e8 ovn-installed in OVS
Oct  2 08:44:29 np0005466031 nova_compute[235803]: 2025-10-02 12:44:29.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:29 np0005466031 nova_compute[235803]: 2025-10-02 12:44:29.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:29.612 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:11:21 10.100.0.4'], port_security=['fa:16:3e:2d:11:21 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2c258788-9569-41b0-9163-e8ea9985b91c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=b7c94b5d-e8b3-487f-ad24-795aa8a72b5e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:44:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:29.613 141898 INFO neutron.agent.ovn.metadata.agent [-] Port b7c94b5d-e8b3-487f-ad24-795aa8a72b5e in datapath fd4432c5-b907-49af-a666-2128c4085e24 unbound from our chassis#033[00m
Oct  2 08:44:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:29.614 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4432c5-b907-49af-a666-2128c4085e24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:44:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:29.615 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[331ce6c9-59c5-49af-aae5-4e73a81555c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:29.616 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace which is not needed anymore#033[00m
Oct  2 08:44:29 np0005466031 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000078.scope: Deactivated successfully.
Oct  2 08:44:29 np0005466031 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000078.scope: Consumed 13.932s CPU time.
Oct  2 08:44:29 np0005466031 systemd-machined[192227]: Machine qemu-49-instance-00000078 terminated.
Oct  2 08:44:29 np0005466031 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[285787]: [NOTICE]   (285791) : haproxy version is 2.8.14-c23fe91
Oct  2 08:44:29 np0005466031 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[285787]: [NOTICE]   (285791) : path to executable is /usr/sbin/haproxy
Oct  2 08:44:29 np0005466031 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[285787]: [WARNING]  (285791) : Exiting Master process...
Oct  2 08:44:29 np0005466031 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[285787]: [ALERT]    (285791) : Current worker (285793) exited with code 143 (Terminated)
Oct  2 08:44:29 np0005466031 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[285787]: [WARNING]  (285791) : All workers exited. Exiting... (0)
Oct  2 08:44:29 np0005466031 systemd[1]: libpod-3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa.scope: Deactivated successfully.
Oct  2 08:44:29 np0005466031 podman[286017]: 2025-10-02 12:44:29.823481339 +0000 UTC m=+0.108215025 container died 3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:44:29 np0005466031 nova_compute[235803]: 2025-10-02 12:44:29.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:29 np0005466031 nova_compute[235803]: 2025-10-02 12:44:29.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:29 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa-userdata-shm.mount: Deactivated successfully.
Oct  2 08:44:29 np0005466031 systemd[1]: var-lib-containers-storage-overlay-b9e8c0908ac09b468a82f5043ae4b96563fbce9290a36ad99c49ee0714cffc90-merged.mount: Deactivated successfully.
Oct  2 08:44:29 np0005466031 podman[286017]: 2025-10-02 12:44:29.871505091 +0000 UTC m=+0.156238777 container cleanup 3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:44:29 np0005466031 systemd[1]: libpod-conmon-3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa.scope: Deactivated successfully.
Oct  2 08:44:29 np0005466031 podman[286054]: 2025-10-02 12:44:29.936569033 +0000 UTC m=+0.043269206 container remove 3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:44:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:29.942 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a695c8-6f42-4166-abde-1be7a0093099]: (4, ('Thu Oct  2 12:44:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa)\n3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa\nThu Oct  2 12:44:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa)\n3ca979281d03d2134f62796bb2178d028df0dbefb7b0df648950fdec0111acfa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:29.944 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[71e2b549-1a69-4213-a95e-3ba0baadf61c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:29.945 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:29 np0005466031 kernel: tapfd4432c5-b0: left promiscuous mode
Oct  2 08:44:29 np0005466031 nova_compute[235803]: 2025-10-02 12:44:29.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:30 np0005466031 nova_compute[235803]: 2025-10-02 12:44:30.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:30 np0005466031 nova_compute[235803]: 2025-10-02 12:44:30.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:30.013 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[488fff8e-7312-495b-9ba7-5649b31cc206]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:30.045 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd6409d-330c-4588-800e-0cc6e3fb980d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:30.046 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[67e96d5d-7e19-4ec1-a409-50e4e4319805]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:30.062 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ee5369-9865-4efd-a57a-8f45dd009775]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689673, 'reachable_time': 41628, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286072, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:30 np0005466031 systemd[1]: run-netns-ovnmeta\x2dfd4432c5\x2db907\x2d49af\x2da666\x2d2128c4085e24.mount: Deactivated successfully.
Oct  2 08:44:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:30.067 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:44:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:30.067 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[4339c183-8e7f-4c9b-885b-5feb1de1a7f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:30 np0005466031 nova_compute[235803]: 2025-10-02 12:44:30.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:30 np0005466031 nova_compute[235803]: 2025-10-02 12:44:30.125 2 INFO nova.virt.libvirt.driver [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Instance shutdown successfully after 26 seconds.#033[00m
Oct  2 08:44:30 np0005466031 nova_compute[235803]: 2025-10-02 12:44:30.129 2 INFO nova.virt.libvirt.driver [-] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Instance destroyed successfully.#033[00m
Oct  2 08:44:30 np0005466031 nova_compute[235803]: 2025-10-02 12:44:30.130 2 DEBUG nova.objects.instance [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'numa_topology' on Instance uuid 2c258788-9569-41b0-9163-e8ea9985b91c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:30.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:30.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:31 np0005466031 nova_compute[235803]: 2025-10-02 12:44:31.198 2 INFO nova.virt.libvirt.driver [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Beginning cold snapshot process#033[00m
Oct  2 08:44:31 np0005466031 nova_compute[235803]: 2025-10-02 12:44:31.709 2 DEBUG nova.virt.libvirt.imagebackend [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:44:31 np0005466031 nova_compute[235803]: 2025-10-02 12:44:31.761 2 DEBUG nova.compute.manager [req-89fba09b-d76d-454c-bf51-3e920468de0e req-b6a6452b-f929-4e8c-b3d3-87cffa274b22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Received event network-vif-unplugged-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:44:31 np0005466031 nova_compute[235803]: 2025-10-02 12:44:31.761 2 DEBUG oslo_concurrency.lockutils [req-89fba09b-d76d-454c-bf51-3e920468de0e req-b6a6452b-f929-4e8c-b3d3-87cffa274b22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:31 np0005466031 nova_compute[235803]: 2025-10-02 12:44:31.761 2 DEBUG oslo_concurrency.lockutils [req-89fba09b-d76d-454c-bf51-3e920468de0e req-b6a6452b-f929-4e8c-b3d3-87cffa274b22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:31 np0005466031 nova_compute[235803]: 2025-10-02 12:44:31.761 2 DEBUG oslo_concurrency.lockutils [req-89fba09b-d76d-454c-bf51-3e920468de0e req-b6a6452b-f929-4e8c-b3d3-87cffa274b22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:31 np0005466031 nova_compute[235803]: 2025-10-02 12:44:31.762 2 DEBUG nova.compute.manager [req-89fba09b-d76d-454c-bf51-3e920468de0e req-b6a6452b-f929-4e8c-b3d3-87cffa274b22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] No waiting events found dispatching network-vif-unplugged-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:44:31 np0005466031 nova_compute[235803]: 2025-10-02 12:44:31.762 2 WARNING nova.compute.manager [req-89fba09b-d76d-454c-bf51-3e920468de0e req-b6a6452b-f929-4e8c-b3d3-87cffa274b22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Received unexpected event network-vif-unplugged-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  2 08:44:32 np0005466031 nova_compute[235803]: 2025-10-02 12:44:32.170 2 DEBUG nova.storage.rbd_utils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] creating snapshot(795ae2751e6946719afa5a18807bd407) on rbd image(2c258788-9569-41b0-9163-e8ea9985b91c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:44:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:32.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:32.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e281 e281: 3 total, 3 up, 3 in
Oct  2 08:44:33 np0005466031 nova_compute[235803]: 2025-10-02 12:44:33.066 2 DEBUG nova.storage.rbd_utils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] cloning vms/2c258788-9569-41b0-9163-e8ea9985b91c_disk@795ae2751e6946719afa5a18807bd407 to images/8ef5cf5e-703e-423e-815f-02321271fc3c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:44:33 np0005466031 nova_compute[235803]: 2025-10-02 12:44:33.221 2 INFO nova.compute.manager [None req-9250ff4b-a635-409d-a8f4-b0a8d79b6971 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Pausing#033[00m
Oct  2 08:44:33 np0005466031 nova_compute[235803]: 2025-10-02 12:44:33.221 2 DEBUG nova.objects.instance [None req-9250ff4b-a635-409d-a8f4-b0a8d79b6971 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'flavor' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:33 np0005466031 nova_compute[235803]: 2025-10-02 12:44:33.290 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409073.2902641, 17266fac-3772-4df3-b4d7-c47d8292f6d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:44:33 np0005466031 nova_compute[235803]: 2025-10-02 12:44:33.290 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:44:33 np0005466031 nova_compute[235803]: 2025-10-02 12:44:33.292 2 DEBUG nova.compute.manager [None req-9250ff4b-a635-409d-a8f4-b0a8d79b6971 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:44:33 np0005466031 nova_compute[235803]: 2025-10-02 12:44:33.315 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:44:33 np0005466031 nova_compute[235803]: 2025-10-02 12:44:33.318 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:44:33 np0005466031 nova_compute[235803]: 2025-10-02 12:44:33.373 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct  2 08:44:33 np0005466031 nova_compute[235803]: 2025-10-02 12:44:33.828 2 DEBUG nova.storage.rbd_utils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] flattening images/8ef5cf5e-703e-423e-815f-02321271fc3c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:44:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:34.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:34 np0005466031 nova_compute[235803]: 2025-10-02 12:44:34.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:34.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:35 np0005466031 nova_compute[235803]: 2025-10-02 12:44:35.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:35 np0005466031 podman[286180]: 2025-10-02 12:44:35.617807413 +0000 UTC m=+0.044456590 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:44:35 np0005466031 podman[286181]: 2025-10-02 12:44:35.68162562 +0000 UTC m=+0.105849477 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 08:44:35 np0005466031 nova_compute[235803]: 2025-10-02 12:44:35.849 2 DEBUG nova.compute.manager [req-47f8bcb3-74e1-4b71-8837-893335116063 req-c83a0b98-4c50-4c73-b5f9-c707563a8c9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Received event network-vif-plugged-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:44:35 np0005466031 nova_compute[235803]: 2025-10-02 12:44:35.849 2 DEBUG oslo_concurrency.lockutils [req-47f8bcb3-74e1-4b71-8837-893335116063 req-c83a0b98-4c50-4c73-b5f9-c707563a8c9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:35 np0005466031 nova_compute[235803]: 2025-10-02 12:44:35.850 2 DEBUG oslo_concurrency.lockutils [req-47f8bcb3-74e1-4b71-8837-893335116063 req-c83a0b98-4c50-4c73-b5f9-c707563a8c9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:35 np0005466031 nova_compute[235803]: 2025-10-02 12:44:35.850 2 DEBUG oslo_concurrency.lockutils [req-47f8bcb3-74e1-4b71-8837-893335116063 req-c83a0b98-4c50-4c73-b5f9-c707563a8c9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:35 np0005466031 nova_compute[235803]: 2025-10-02 12:44:35.850 2 DEBUG nova.compute.manager [req-47f8bcb3-74e1-4b71-8837-893335116063 req-c83a0b98-4c50-4c73-b5f9-c707563a8c9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] No waiting events found dispatching network-vif-plugged-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:44:35 np0005466031 nova_compute[235803]: 2025-10-02 12:44:35.850 2 WARNING nova.compute.manager [req-47f8bcb3-74e1-4b71-8837-893335116063 req-c83a0b98-4c50-4c73-b5f9-c707563a8c9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Received unexpected event network-vif-plugged-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  2 08:44:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:36.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:36.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:38 np0005466031 nova_compute[235803]: 2025-10-02 12:44:38.096 2 DEBUG nova.storage.rbd_utils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] removing snapshot(795ae2751e6946719afa5a18807bd407) on rbd image(2c258788-9569-41b0-9163-e8ea9985b91c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:44:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:38.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:38.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e282 e282: 3 total, 3 up, 3 in
Oct  2 08:44:39 np0005466031 nova_compute[235803]: 2025-10-02 12:44:39.560 2 DEBUG nova.storage.rbd_utils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] creating snapshot(snap) on rbd image(8ef5cf5e-703e-423e-815f-02321271fc3c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:44:39 np0005466031 nova_compute[235803]: 2025-10-02 12:44:39.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:40 np0005466031 nova_compute[235803]: 2025-10-02 12:44:40.010 2 INFO nova.compute.manager [None req-eaa0a5f5-6e9c-4599-9b7d-aed2de06656b b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Unpausing#033[00m
Oct  2 08:44:40 np0005466031 nova_compute[235803]: 2025-10-02 12:44:40.010 2 DEBUG nova.objects.instance [None req-eaa0a5f5-6e9c-4599-9b7d-aed2de06656b b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'flavor' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:40 np0005466031 nova_compute[235803]: 2025-10-02 12:44:40.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:40.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e283 e283: 3 total, 3 up, 3 in
Oct  2 08:44:40 np0005466031 nova_compute[235803]: 2025-10-02 12:44:40.673 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409080.6734025, 17266fac-3772-4df3-b4d7-c47d8292f6d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:44:40 np0005466031 nova_compute[235803]: 2025-10-02 12:44:40.674 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:44:40 np0005466031 virtqemud[235323]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:44:40 np0005466031 nova_compute[235803]: 2025-10-02 12:44:40.677 2 DEBUG nova.virt.libvirt.guest [None req-eaa0a5f5-6e9c-4599-9b7d-aed2de06656b b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 08:44:40 np0005466031 nova_compute[235803]: 2025-10-02 12:44:40.677 2 DEBUG nova.compute.manager [None req-eaa0a5f5-6e9c-4599-9b7d-aed2de06656b b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:44:40 np0005466031 nova_compute[235803]: 2025-10-02 12:44:40.703 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:44:40 np0005466031 nova_compute[235803]: 2025-10-02 12:44:40.706 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:44:40 np0005466031 nova_compute[235803]: 2025-10-02 12:44:40.761 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Oct  2 08:44:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:40.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:41 np0005466031 podman[286288]: 2025-10-02 12:44:41.86050112 +0000 UTC m=+0.068009218 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:44:41 np0005466031 podman[286289]: 2025-10-02 12:44:41.860412377 +0000 UTC m=+0.067028429 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:44:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:42.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:42.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:44.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:44 np0005466031 nova_compute[235803]: 2025-10-02 12:44:44.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:44.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:44 np0005466031 nova_compute[235803]: 2025-10-02 12:44:44.850 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409069.8463197, 2c258788-9569-41b0-9163-e8ea9985b91c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:44:44 np0005466031 nova_compute[235803]: 2025-10-02 12:44:44.851 2 INFO nova.compute.manager [-] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:44:45 np0005466031 nova_compute[235803]: 2025-10-02 12:44:45.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:45 np0005466031 nova_compute[235803]: 2025-10-02 12:44:45.607 2 DEBUG nova.compute.manager [None req-b7aeddc8-322f-4968-aae7-db17241118d3 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:44:45 np0005466031 nova_compute[235803]: 2025-10-02 12:44:45.611 2 DEBUG nova.compute.manager [None req-b7aeddc8-322f-4968-aae7-db17241118d3 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: shelving_image_uploading, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:44:45 np0005466031 nova_compute[235803]: 2025-10-02 12:44:45.671 2 INFO nova.compute.manager [None req-b7aeddc8-322f-4968-aae7-db17241118d3 - - - - - -] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] During sync_power_state the instance has a pending task (shelving_image_uploading). Skip.#033[00m
Oct  2 08:44:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:46 np0005466031 nova_compute[235803]: 2025-10-02 12:44:46.077 2 INFO nova.compute.manager [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Rescuing#033[00m
Oct  2 08:44:46 np0005466031 nova_compute[235803]: 2025-10-02 12:44:46.077 2 DEBUG oslo_concurrency.lockutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:44:46 np0005466031 nova_compute[235803]: 2025-10-02 12:44:46.077 2 DEBUG oslo_concurrency.lockutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquired lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:44:46 np0005466031 nova_compute[235803]: 2025-10-02 12:44:46.078 2 DEBUG nova.network.neutron [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:44:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:44:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:46.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:44:46 np0005466031 nova_compute[235803]: 2025-10-02 12:44:46.754 2 INFO nova.virt.libvirt.driver [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Snapshot image upload complete#033[00m
Oct  2 08:44:46 np0005466031 nova_compute[235803]: 2025-10-02 12:44:46.755 2 DEBUG nova.compute.manager [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:44:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:44:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:46.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:44:46 np0005466031 nova_compute[235803]: 2025-10-02 12:44:46.919 2 INFO nova.compute.manager [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Shelve offloading#033[00m
Oct  2 08:44:46 np0005466031 nova_compute[235803]: 2025-10-02 12:44:46.926 2 INFO nova.virt.libvirt.driver [-] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Instance destroyed successfully.#033[00m
Oct  2 08:44:46 np0005466031 nova_compute[235803]: 2025-10-02 12:44:46.926 2 DEBUG nova.compute.manager [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:44:46 np0005466031 nova_compute[235803]: 2025-10-02 12:44:46.929 2 DEBUG oslo_concurrency.lockutils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-2c258788-9569-41b0-9163-e8ea9985b91c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:44:46 np0005466031 nova_compute[235803]: 2025-10-02 12:44:46.929 2 DEBUG oslo_concurrency.lockutils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-2c258788-9569-41b0-9163-e8ea9985b91c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:44:46 np0005466031 nova_compute[235803]: 2025-10-02 12:44:46.929 2 DEBUG nova.network.neutron [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:44:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e284 e284: 3 total, 3 up, 3 in
Oct  2 08:44:48 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:48Z|00440|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct  2 08:44:48 np0005466031 nova_compute[235803]: 2025-10-02 12:44:48.323 2 DEBUG nova.network.neutron [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Updating instance_info_cache with network_info: [{"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:44:48 np0005466031 nova_compute[235803]: 2025-10-02 12:44:48.358 2 DEBUG oslo_concurrency.lockutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Releasing lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:44:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:48.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:48 np0005466031 nova_compute[235803]: 2025-10-02 12:44:48.685 2 DEBUG nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:44:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:48.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:49 np0005466031 nova_compute[235803]: 2025-10-02 12:44:49.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e285 e285: 3 total, 3 up, 3 in
Oct  2 08:44:50 np0005466031 nova_compute[235803]: 2025-10-02 12:44:50.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:50.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:50 np0005466031 nova_compute[235803]: 2025-10-02 12:44:50.485 2 DEBUG nova.network.neutron [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Updating instance_info_cache with network_info: [{"id": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "address": "fa:16:3e:2d:11:21", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7c94b5d-e8", "ovs_interfaceid": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:44:50 np0005466031 nova_compute[235803]: 2025-10-02 12:44:50.521 2 DEBUG oslo_concurrency.lockutils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-2c258788-9569-41b0-9163-e8ea9985b91c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:44:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:50.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:51 np0005466031 kernel: tap5e2a83a5-11 (unregistering): left promiscuous mode
Oct  2 08:44:51 np0005466031 NetworkManager[44907]: <info>  [1759409091.9528] device (tap5e2a83a5-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:44:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:51Z|00441|binding|INFO|Releasing lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe from this chassis (sb_readonly=0)
Oct  2 08:44:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:51Z|00442|binding|INFO|Setting lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe down in Southbound
Oct  2 08:44:51 np0005466031 nova_compute[235803]: 2025-10-02 12:44:51.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:51 np0005466031 nova_compute[235803]: 2025-10-02 12:44:51.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:44:51Z|00443|binding|INFO|Removing iface tap5e2a83a5-11 ovn-installed in OVS
Oct  2 08:44:51 np0005466031 nova_compute[235803]: 2025-10-02 12:44:51.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:52 np0005466031 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000075.scope: Deactivated successfully.
Oct  2 08:44:52 np0005466031 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000075.scope: Consumed 16.282s CPU time.
Oct  2 08:44:52 np0005466031 systemd-machined[192227]: Machine qemu-48-instance-00000075 terminated.
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.117 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:09:61 10.100.0.6'], port_security=['fa:16:3e:34:09:61 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '17266fac-3772-4df3-b4d7-c47d8292f6d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7035a43e-de6a-4b86-a3b2-d2e40c9755d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=5e2a83a5-11e1-45b1-82ce-5fee577f67fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.119 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 5e2a83a5-11e1-45b1-82ce-5fee577f67fe in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis#033[00m
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.120 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3934261-ba19-494f-8d9f-23360c5b30b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.121 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0cae2618-3bd8-4a29-b86c-f6b96b7410bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.122 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace which is not needed anymore#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:52 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[285186]: [NOTICE]   (285196) : haproxy version is 2.8.14-c23fe91
Oct  2 08:44:52 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[285186]: [NOTICE]   (285196) : path to executable is /usr/sbin/haproxy
Oct  2 08:44:52 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[285186]: [WARNING]  (285196) : Exiting Master process...
Oct  2 08:44:52 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[285186]: [WARNING]  (285196) : Exiting Master process...
Oct  2 08:44:52 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[285186]: [ALERT]    (285196) : Current worker (285199) exited with code 143 (Terminated)
Oct  2 08:44:52 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[285186]: [WARNING]  (285196) : All workers exited. Exiting... (0)
Oct  2 08:44:52 np0005466031 systemd[1]: libpod-424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61.scope: Deactivated successfully.
Oct  2 08:44:52 np0005466031 podman[286396]: 2025-10-02 12:44:52.31461292 +0000 UTC m=+0.076326620 container died 424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:44:52 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61-userdata-shm.mount: Deactivated successfully.
Oct  2 08:44:52 np0005466031 systemd[1]: var-lib-containers-storage-overlay-2338f59e8f01308306b4c518f081cc8977a6ad100bac0e48ef647fc641136931-merged.mount: Deactivated successfully.
Oct  2 08:44:52 np0005466031 podman[286396]: 2025-10-02 12:44:52.35628426 +0000 UTC m=+0.117997960 container cleanup 424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:44:52 np0005466031 systemd[1]: libpod-conmon-424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61.scope: Deactivated successfully.
Oct  2 08:44:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:52.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:52 np0005466031 podman[286426]: 2025-10-02 12:44:52.43891906 +0000 UTC m=+0.047276412 container remove 424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.445 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5823759b-9263-467c-bd50-7fc5e5627740]: (4, ('Thu Oct  2 12:44:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61)\n424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61\nThu Oct  2 12:44:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61)\n424d124601143ce252393e39a3bad56fcfabd418c04493c56121b4f239bc1d61\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.446 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[aacbd04d-6dc6-4e63-8fe1-e2506d807bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.447 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:52 np0005466031 kernel: tapf3934261-b0: left promiscuous mode
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.477 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ffe6be-6964-4cb4-bafc-21a0c3a4662a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.486 2 DEBUG nova.compute.manager [req-951856ff-2a16-401e-b79b-6024f465fd97 req-ee7c7268-e463-4e6c-91e0-5d8071259930 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received event network-vif-unplugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.486 2 DEBUG oslo_concurrency.lockutils [req-951856ff-2a16-401e-b79b-6024f465fd97 req-ee7c7268-e463-4e6c-91e0-5d8071259930 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.487 2 DEBUG oslo_concurrency.lockutils [req-951856ff-2a16-401e-b79b-6024f465fd97 req-ee7c7268-e463-4e6c-91e0-5d8071259930 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.487 2 DEBUG oslo_concurrency.lockutils [req-951856ff-2a16-401e-b79b-6024f465fd97 req-ee7c7268-e463-4e6c-91e0-5d8071259930 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.487 2 DEBUG nova.compute.manager [req-951856ff-2a16-401e-b79b-6024f465fd97 req-ee7c7268-e463-4e6c-91e0-5d8071259930 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] No waiting events found dispatching network-vif-unplugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.487 2 WARNING nova.compute.manager [req-951856ff-2a16-401e-b79b-6024f465fd97 req-ee7c7268-e463-4e6c-91e0-5d8071259930 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received unexpected event network-vif-unplugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.518 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[61eb0d79-36dc-4aa9-9dec-7694f5fb00e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.519 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c78f0aad-31fc-4368-bfe4-f0c4f02bc7c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.536 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1aae9fab-1ff7-417c-a265-cf0911fd2ec9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687670, 'reachable_time': 18655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286447, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.538 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:44:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:44:52.539 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[63fdfde9-15ee-41f0-b25e-db452f971856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:52 np0005466031 systemd[1]: run-netns-ovnmeta\x2df3934261\x2dba19\x2d494f\x2d8d9f\x2d23360c5b30b9.mount: Deactivated successfully.
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.705 2 INFO nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Instance shutdown successfully after 4 seconds.#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.711 2 INFO nova.virt.libvirt.driver [-] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Instance destroyed successfully.#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.712 2 DEBUG nova.objects.instance [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'numa_topology' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.737 2 INFO nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Attempting rescue#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.738 2 DEBUG nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.742 2 DEBUG nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.742 2 INFO nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Creating image(s)#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.766 2 DEBUG nova.storage.rbd_utils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.769 2 DEBUG nova.objects.instance [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:52.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.825 2 DEBUG nova.storage.rbd_utils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.861 2 DEBUG nova.storage.rbd_utils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.866 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.926 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.928 2 DEBUG oslo_concurrency.lockutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.929 2 DEBUG oslo_concurrency.lockutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.929 2 DEBUG oslo_concurrency.lockutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.957 2 DEBUG nova.storage.rbd_utils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:44:52 np0005466031 nova_compute[235803]: 2025-10-02 12:44:52.961 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.023 2 INFO nova.virt.libvirt.driver [-] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Instance destroyed successfully.#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.024 2 DEBUG nova.objects.instance [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'resources' on Instance uuid 2c258788-9569-41b0-9163-e8ea9985b91c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.060 2 DEBUG nova.virt.libvirt.vif [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1867337880',display_name='tempest-DeleteServersTestJSON-server-1867337880',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1867337880',id=120,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:44:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-gwnvko3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member',shelved_at='2025-10-02T12:44:46.755109',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='8ef5cf5e-703e-423e-815f-02321271fc3c'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:44:31Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=2c258788-9569-41b0-9163-e8ea9985b91c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "address": "fa:16:3e:2d:11:21", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7c94b5d-e8", "ovs_interfaceid": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.061 2 DEBUG nova.network.os_vif_util [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "address": "fa:16:3e:2d:11:21", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7c94b5d-e8", "ovs_interfaceid": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.062 2 DEBUG nova.network.os_vif_util [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:21,bridge_name='br-int',has_traffic_filtering=True,id=b7c94b5d-e8b3-487f-ad24-795aa8a72b5e,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7c94b5d-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.063 2 DEBUG os_vif [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:21,bridge_name='br-int',has_traffic_filtering=True,id=b7c94b5d-e8b3-487f-ad24-795aa8a72b5e,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7c94b5d-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.067 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7c94b5d-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.078 2 INFO os_vif [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:21,bridge_name='br-int',has_traffic_filtering=True,id=b7c94b5d-e8b3-487f-ad24-795aa8a72b5e,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7c94b5d-e8')#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.130 2 DEBUG nova.compute.manager [req-1a334204-e7f3-47ae-a202-01db9fd2fa82 req-bd7ac735-fc18-4b34-ba58-df26b2a29f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Received event network-changed-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.131 2 DEBUG nova.compute.manager [req-1a334204-e7f3-47ae-a202-01db9fd2fa82 req-bd7ac735-fc18-4b34-ba58-df26b2a29f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Refreshing instance network info cache due to event network-changed-b7c94b5d-e8b3-487f-ad24-795aa8a72b5e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.131 2 DEBUG oslo_concurrency.lockutils [req-1a334204-e7f3-47ae-a202-01db9fd2fa82 req-bd7ac735-fc18-4b34-ba58-df26b2a29f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2c258788-9569-41b0-9163-e8ea9985b91c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.131 2 DEBUG oslo_concurrency.lockutils [req-1a334204-e7f3-47ae-a202-01db9fd2fa82 req-bd7ac735-fc18-4b34-ba58-df26b2a29f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2c258788-9569-41b0-9163-e8ea9985b91c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.132 2 DEBUG nova.network.neutron [req-1a334204-e7f3-47ae-a202-01db9fd2fa82 req-bd7ac735-fc18-4b34-ba58-df26b2a29f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Refreshing network info cache for port b7c94b5d-e8b3-487f-ad24-795aa8a72b5e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:44:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:44:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:54.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.577 2 DEBUG nova.compute.manager [req-e18f89fb-73db-42da-bd65-3f5ae8592ccc req-a43a2c47-61fd-442a-a7a5-7c8dc5ce9883 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.578 2 DEBUG oslo_concurrency.lockutils [req-e18f89fb-73db-42da-bd65-3f5ae8592ccc req-a43a2c47-61fd-442a-a7a5-7c8dc5ce9883 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.579 2 DEBUG oslo_concurrency.lockutils [req-e18f89fb-73db-42da-bd65-3f5ae8592ccc req-a43a2c47-61fd-442a-a7a5-7c8dc5ce9883 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.579 2 DEBUG oslo_concurrency.lockutils [req-e18f89fb-73db-42da-bd65-3f5ae8592ccc req-a43a2c47-61fd-442a-a7a5-7c8dc5ce9883 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.579 2 DEBUG nova.compute.manager [req-e18f89fb-73db-42da-bd65-3f5ae8592ccc req-a43a2c47-61fd-442a-a7a5-7c8dc5ce9883 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] No waiting events found dispatching network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.579 2 WARNING nova.compute.manager [req-e18f89fb-73db-42da-bd65-3f5ae8592ccc req-a43a2c47-61fd-442a-a7a5-7c8dc5ce9883 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received unexpected event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:44:54 np0005466031 nova_compute[235803]: 2025-10-02 12:44:54.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:54.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.135 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.136 2 DEBUG nova.objects.instance [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'migration_context' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e286 e286: 3 total, 3 up, 3 in
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.234 2 DEBUG nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.236 2 DEBUG nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Start _get_guest_xml network_info=[{"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "vif_mac": "fa:16:3e:34:09:61"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.237 2 DEBUG nova.objects.instance [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'resources' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.292 2 WARNING nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.307 2 DEBUG nova.virt.libvirt.host [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.308 2 DEBUG nova.virt.libvirt.host [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.316 2 DEBUG nova.virt.libvirt.host [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.317 2 DEBUG nova.virt.libvirt.host [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.318 2 DEBUG nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.319 2 DEBUG nova.virt.hardware [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.319 2 DEBUG nova.virt.hardware [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.320 2 DEBUG nova.virt.hardware [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.320 2 DEBUG nova.virt.hardware [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.320 2 DEBUG nova.virt.hardware [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.321 2 DEBUG nova.virt.hardware [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.321 2 DEBUG nova.virt.hardware [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.321 2 DEBUG nova.virt.hardware [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.321 2 DEBUG nova.virt.hardware [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.323 2 DEBUG nova.virt.hardware [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.323 2 DEBUG nova.virt.hardware [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.323 2 DEBUG nova.objects.instance [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.370 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:55 np0005466031 podman[286750]: 2025-10-02 12:44:55.829851072 +0000 UTC m=+0.075152356 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct  2 08:44:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:44:55 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3079862474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.853 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:55 np0005466031 nova_compute[235803]: 2025-10-02 12:44:55.855 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:55 np0005466031 podman[286750]: 2025-10-02 12:44:55.953253357 +0000 UTC m=+0.198554631 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 08:44:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e287 e287: 3 total, 3 up, 3 in
Oct  2 08:44:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:44:56 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2297640210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:44:56 np0005466031 nova_compute[235803]: 2025-10-02 12:44:56.330 2 DEBUG nova.network.neutron [req-1a334204-e7f3-47ae-a202-01db9fd2fa82 req-bd7ac735-fc18-4b34-ba58-df26b2a29f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Updated VIF entry in instance network info cache for port b7c94b5d-e8b3-487f-ad24-795aa8a72b5e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:44:56 np0005466031 nova_compute[235803]: 2025-10-02 12:44:56.331 2 DEBUG nova.network.neutron [req-1a334204-e7f3-47ae-a202-01db9fd2fa82 req-bd7ac735-fc18-4b34-ba58-df26b2a29f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Updating instance_info_cache with network_info: [{"id": "b7c94b5d-e8b3-487f-ad24-795aa8a72b5e", "address": "fa:16:3e:2d:11:21", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": null, "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapb7c94b5d-e8", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:44:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:56.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:56 np0005466031 podman[286913]: 2025-10-02 12:44:56.634050629 +0000 UTC m=+0.138300055 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 08:44:56 np0005466031 nova_compute[235803]: 2025-10-02 12:44:56.707 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.853s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:56 np0005466031 nova_compute[235803]: 2025-10-02 12:44:56.709 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:56 np0005466031 podman[286936]: 2025-10-02 12:44:56.724830694 +0000 UTC m=+0.073812108 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 08:44:56 np0005466031 podman[286913]: 2025-10-02 12:44:56.733123693 +0000 UTC m=+0.237373119 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 08:44:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:44:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:56.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:44:56 np0005466031 nova_compute[235803]: 2025-10-02 12:44:56.951 2 DEBUG oslo_concurrency.lockutils [req-1a334204-e7f3-47ae-a202-01db9fd2fa82 req-bd7ac735-fc18-4b34-ba58-df26b2a29f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2c258788-9569-41b0-9163-e8ea9985b91c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:44:56 np0005466031 podman[286998]: 2025-10-02 12:44:56.994961404 +0000 UTC m=+0.077860303 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, release=1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived)
Oct  2 08:44:57 np0005466031 podman[286998]: 2025-10-02 12:44:57.007078663 +0000 UTC m=+0.089977562 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, distribution-scope=public, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vcs-type=git, release=1793, vendor=Red Hat, Inc., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph.)
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.130 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:44:57 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/865932368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.444 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.735s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.445 2 DEBUG nova.virt.libvirt.vif [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-377097009',display_name='tempest-ServerRescueNegativeTestJSON-server-377097009',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-377097009',id=117,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:43:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-msakr1ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:44:40Z,user_data=None,user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=17266fac-3772-4df3-b4d7-c47d8292f6d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "vif_mac": "fa:16:3e:34:09:61"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.446 2 DEBUG nova.network.os_vif_util [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "vif_mac": "fa:16:3e:34:09:61"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.447 2 DEBUG nova.network.os_vif_util [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:09:61,bridge_name='br-int',has_traffic_filtering=True,id=5e2a83a5-11e1-45b1-82ce-5fee577f67fe,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e2a83a5-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.448 2 DEBUG nova.objects.instance [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.475 2 DEBUG nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  <uuid>17266fac-3772-4df3-b4d7-c47d8292f6d6</uuid>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  <name>instance-00000075</name>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-377097009</nova:name>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:44:55</nova:creationTime>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <nova:user uuid="b168e90f7c0c414ba26c576fb8706a80">tempest-ServerRescueNegativeTestJSON-488939839-project-member</nova:user>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <nova:project uuid="c87621e5c0ba4f13abfff528143c1c00">tempest-ServerRescueNegativeTestJSON-488939839</nova:project>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <nova:port uuid="5e2a83a5-11e1-45b1-82ce-5fee577f67fe">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <entry name="serial">17266fac-3772-4df3-b4d7-c47d8292f6d6</entry>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <entry name="uuid">17266fac-3772-4df3-b4d7-c47d8292f6d6</entry>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.rescue">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/17266fac-3772-4df3-b4d7-c47d8292f6d6_disk">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.config.rescue">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:34:09:61"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <target dev="tap5e2a83a5-11"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/console.log" append="off"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:44:57 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:44:57 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:44:57 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:44:57 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.485 2 INFO nova.virt.libvirt.driver [-] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Instance destroyed successfully.#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.588 2 DEBUG nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.589 2 DEBUG nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.589 2 DEBUG nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.590 2 DEBUG nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No VIF found with MAC fa:16:3e:34:09:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.591 2 INFO nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Using config drive#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.631 2 DEBUG nova.storage.rbd_utils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.661 2 DEBUG nova.objects.instance [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:57 np0005466031 nova_compute[235803]: 2025-10-02 12:44:57.716 2 DEBUG nova.objects.instance [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'keypairs' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:58 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:44:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:58.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:58 np0005466031 nova_compute[235803]: 2025-10-02 12:44:58.505 2 INFO nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Creating config drive at /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/disk.config.rescue#033[00m
Oct  2 08:44:58 np0005466031 nova_compute[235803]: 2025-10-02 12:44:58.510 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi4njmqvd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:58 np0005466031 nova_compute[235803]: 2025-10-02 12:44:58.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:58 np0005466031 nova_compute[235803]: 2025-10-02 12:44:58.656 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi4njmqvd" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:58 np0005466031 nova_compute[235803]: 2025-10-02 12:44:58.682 2 DEBUG nova.storage.rbd_utils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:44:58 np0005466031 nova_compute[235803]: 2025-10-02 12:44:58.686 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/disk.config.rescue 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:44:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:44:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:58.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:44:59 np0005466031 nova_compute[235803]: 2025-10-02 12:44:59.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:59 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:44:59 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:44:59 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:44:59 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:44:59 np0005466031 nova_compute[235803]: 2025-10-02 12:44:59.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:59 np0005466031 nova_compute[235803]: 2025-10-02 12:44:59.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.085 2 DEBUG oslo_concurrency.processutils [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/disk.config.rescue 17266fac-3772-4df3-b4d7-c47d8292f6d6_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.086 2 INFO nova.virt.libvirt.driver [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Deleting local config drive /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6/disk.config.rescue because it was imported into RBD.#033[00m
Oct  2 08:45:00 np0005466031 kernel: tap5e2a83a5-11: entered promiscuous mode
Oct  2 08:45:00 np0005466031 NetworkManager[44907]: <info>  [1759409100.1582] manager: (tap5e2a83a5-11): new Tun device (/org/freedesktop/NetworkManager/Devices/211)
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:00Z|00444|binding|INFO|Claiming lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe for this chassis.
Oct  2 08:45:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:00Z|00445|binding|INFO|5e2a83a5-11e1-45b1-82ce-5fee577f67fe: Claiming fa:16:3e:34:09:61 10.100.0.6
Oct  2 08:45:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:00Z|00446|binding|INFO|Setting lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe ovn-installed in OVS
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:00 np0005466031 systemd-udevd[287252]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:45:00 np0005466031 systemd-machined[192227]: New machine qemu-50-instance-00000075.
Oct  2 08:45:00 np0005466031 NetworkManager[44907]: <info>  [1759409100.2119] device (tap5e2a83a5-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:45:00 np0005466031 NetworkManager[44907]: <info>  [1759409100.2139] device (tap5e2a83a5-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:45:00 np0005466031 systemd[1]: Started Virtual Machine qemu-50-instance-00000075.
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.357 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:09:61 10.100.0.6'], port_security=['fa:16:3e:34:09:61 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '17266fac-3772-4df3-b4d7-c47d8292f6d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7035a43e-de6a-4b86-a3b2-d2e40c9755d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=5e2a83a5-11e1-45b1-82ce-5fee577f67fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.358 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 5e2a83a5-11e1-45b1-82ce-5fee577f67fe in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 bound to our chassis#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.359 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3934261-ba19-494f-8d9f-23360c5b30b9#033[00m
Oct  2 08:45:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:00Z|00447|binding|INFO|Setting lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe up in Southbound
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.377 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ca240b7e-5a11-4aed-be5a-94509c0917b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.378 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3934261-b1 in ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.380 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3934261-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.380 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e6eb6b57-72bf-42c8-9440-ea482b266d8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.381 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[410431d4-4a2b-4c7b-af5c-7a84a6c26938]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.398 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[c38548cf-7087-4cf6-b384-4b04ed2b4523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:00.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.425 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f85deb4f-071e-4b45-bb3d-80de72f2c0db]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.471 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e08b6dcc-00e8-4f3a-b601-4ecdb825be53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.479 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1afe23-a9dc-4146-90ef-fc9051b3a0e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 NetworkManager[44907]: <info>  [1759409100.4802] manager: (tapf3934261-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/212)
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.517 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[5640e6d8-7a3c-4468-9c34-d92e0fb185ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.521 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a715fcc8-3db1-4b56-95ce-e636f1ca9a3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 NetworkManager[44907]: <info>  [1759409100.5516] device (tapf3934261-b0): carrier: link connected
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.557 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[49d53525-3baf-44b6-83d8-75d5f69c254e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.579 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4be5d86c-d7de-4577-8067-9c207606e438]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695614, 'reachable_time': 26233, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287304, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.597 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2d634e1a-64b4-4748-9629-a1309c305683]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:9fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695614, 'tstamp': 695614}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287305, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.619 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d52c6ee2-159c-425a-80e9-8ff4b1519fcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695614, 'reachable_time': 26233, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287306, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.658 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae975a0-6017-4519-81c5-c18b2a35f155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.713 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[25426e12-60c1-416e-8838-da2130adb632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.715 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.715 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.716 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3934261-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:00 np0005466031 NetworkManager[44907]: <info>  [1759409100.7187] manager: (tapf3934261-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Oct  2 08:45:00 np0005466031 kernel: tapf3934261-b0: entered promiscuous mode
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.720 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3934261-b0, col_values=(('external_ids', {'iface-id': '3890f7a6-6cc9-4237-a2a2-3c43818c1748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:00 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:00Z|00448|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.738 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.739 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[443c2fb9-f5e3-491e-92e7-ea27df6b7b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.740 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-f3934261-ba19-494f-8d9f-23360c5b30b9
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID f3934261-ba19-494f-8d9f-23360c5b30b9
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:45:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:00.740 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'env', 'PROCESS_TAG=haproxy-f3934261-ba19-494f-8d9f-23360c5b30b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3934261-ba19-494f-8d9f-23360c5b30b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.759 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.759 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.760 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.761 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.761 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.797 2 INFO nova.virt.libvirt.driver [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Deleting instance files /var/lib/nova/instances/2c258788-9569-41b0-9163-e8ea9985b91c_del#033[00m
Oct  2 08:45:00 np0005466031 nova_compute[235803]: 2025-10-02 12:45:00.798 2 INFO nova.virt.libvirt.driver [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2c258788-9569-41b0-9163-e8ea9985b91c] Deletion of /var/lib/nova/instances/2c258788-9569-41b0-9163-e8ea9985b91c_del complete#033[00m
Oct  2 08:45:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:01 np0005466031 podman[287400]: 2025-10-02 12:45:01.123608437 +0000 UTC m=+0.073221251 container create 4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:45:01 np0005466031 podman[287400]: 2025-10-02 12:45:01.074793471 +0000 UTC m=+0.024406295 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:45:01 np0005466031 systemd[1]: Started libpod-conmon-4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6.scope.
Oct  2 08:45:01 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:45:01 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512dd9dbb44075dd628d6eb4b2c61f7a61902294fee05f2fa12dbc9d10acc518/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:45:01 np0005466031 podman[287400]: 2025-10-02 12:45:01.233897354 +0000 UTC m=+0.183510188 container init 4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 08:45:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3557431853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:01 np0005466031 podman[287400]: 2025-10-02 12:45:01.24106538 +0000 UTC m=+0.190678184 container start 4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.257 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:01 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287416]: [NOTICE]   (287422) : New worker (287424) forked
Oct  2 08:45:01 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287416]: [NOTICE]   (287422) : Loading success.
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.270 2 INFO nova.scheduler.client.report [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Deleted allocations for instance 2c258788-9569-41b0-9163-e8ea9985b91c#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.442 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for 17266fac-3772-4df3-b4d7-c47d8292f6d6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.443 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409101.4422228, 17266fac-3772-4df3-b4d7-c47d8292f6d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.444 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.450 2 DEBUG nova.compute.manager [None req-569fd670-cf0f-48f8-a041-52c1e523aeb5 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.762 2 DEBUG oslo_concurrency.lockutils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.764 2 DEBUG oslo_concurrency.lockutils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.799 2 DEBUG nova.scheduler.client.report [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.818 2 DEBUG nova.scheduler.client.report [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.818 2 DEBUG nova.compute.provider_tree [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.835 2 DEBUG nova.scheduler.client.report [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.862 2 DEBUG nova.scheduler.client.report [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.908 2 DEBUG oslo_concurrency.processutils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.947 2 DEBUG nova.compute.manager [req-6ae33de2-0430-4183-9288-e8c28ec6efb5 req-94b0234e-d634-40be-9e2b-c279b84a5e32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.949 2 DEBUG oslo_concurrency.lockutils [req-6ae33de2-0430-4183-9288-e8c28ec6efb5 req-94b0234e-d634-40be-9e2b-c279b84a5e32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.949 2 DEBUG oslo_concurrency.lockutils [req-6ae33de2-0430-4183-9288-e8c28ec6efb5 req-94b0234e-d634-40be-9e2b-c279b84a5e32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.949 2 DEBUG oslo_concurrency.lockutils [req-6ae33de2-0430-4183-9288-e8c28ec6efb5 req-94b0234e-d634-40be-9e2b-c279b84a5e32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.949 2 DEBUG nova.compute.manager [req-6ae33de2-0430-4183-9288-e8c28ec6efb5 req-94b0234e-d634-40be-9e2b-c279b84a5e32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] No waiting events found dispatching network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.950 2 WARNING nova.compute.manager [req-6ae33de2-0430-4183-9288-e8c28ec6efb5 req-94b0234e-d634-40be-9e2b-c279b84a5e32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received unexpected event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.983 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.987 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.998 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.999 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:45:01 np0005466031 nova_compute[235803]: 2025-10-02 12:45:01.999 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.181 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.183 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4261MB free_disk=20.695758819580078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.183 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e288 e288: 3 total, 3 up, 3 in
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.346 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409101.4463704, 17266fac-3772-4df3-b4d7-c47d8292f6d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.347 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] VM Started (Lifecycle Event)#033[00m
Oct  2 08:45:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:02 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3417613539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.399 2 DEBUG oslo_concurrency.processutils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:02.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.406 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.409 2 DEBUG nova.compute.provider_tree [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.413 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.533 2 DEBUG nova.scheduler.client.report [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.597 2 DEBUG oslo_concurrency.lockutils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.599 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.719 2 DEBUG oslo_concurrency.lockutils [None req-935864cf-ea6c-472f-baad-7b2a8de7c5e2 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2c258788-9569-41b0-9163-e8ea9985b91c" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 58.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.724 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 17266fac-3772-4df3-b4d7-c47d8292f6d6 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.724 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.725 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:45:02 np0005466031 nova_compute[235803]: 2025-10-02 12:45:02.777 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:02.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:03 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1132442729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:03 np0005466031 nova_compute[235803]: 2025-10-02 12:45:03.214 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:03 np0005466031 nova_compute[235803]: 2025-10-02 12:45:03.219 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:45:03 np0005466031 nova_compute[235803]: 2025-10-02 12:45:03.255 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:45:03 np0005466031 nova_compute[235803]: 2025-10-02 12:45:03.291 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:45:03 np0005466031 nova_compute[235803]: 2025-10-02 12:45:03.291 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:03 np0005466031 nova_compute[235803]: 2025-10-02 12:45:03.873 2 INFO nova.compute.manager [None req-4eee851c-a2b0-444d-bd33-b78e21a4a3fb b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Unrescuing#033[00m
Oct  2 08:45:03 np0005466031 nova_compute[235803]: 2025-10-02 12:45:03.874 2 DEBUG oslo_concurrency.lockutils [None req-4eee851c-a2b0-444d-bd33-b78e21a4a3fb b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:45:03 np0005466031 nova_compute[235803]: 2025-10-02 12:45:03.874 2 DEBUG oslo_concurrency.lockutils [None req-4eee851c-a2b0-444d-bd33-b78e21a4a3fb b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquired lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:45:03 np0005466031 nova_compute[235803]: 2025-10-02 12:45:03.874 2 DEBUG nova.network.neutron [None req-4eee851c-a2b0-444d-bd33-b78e21a4a3fb b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:45:04 np0005466031 nova_compute[235803]: 2025-10-02 12:45:04.050 2 DEBUG nova.compute.manager [req-625fb242-f9ba-453a-9409-b10e9a2f4e55 req-91d5c91f-fdb5-4035-8439-50adffe107ef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:04 np0005466031 nova_compute[235803]: 2025-10-02 12:45:04.050 2 DEBUG oslo_concurrency.lockutils [req-625fb242-f9ba-453a-9409-b10e9a2f4e55 req-91d5c91f-fdb5-4035-8439-50adffe107ef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:04 np0005466031 nova_compute[235803]: 2025-10-02 12:45:04.050 2 DEBUG oslo_concurrency.lockutils [req-625fb242-f9ba-453a-9409-b10e9a2f4e55 req-91d5c91f-fdb5-4035-8439-50adffe107ef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:04 np0005466031 nova_compute[235803]: 2025-10-02 12:45:04.051 2 DEBUG oslo_concurrency.lockutils [req-625fb242-f9ba-453a-9409-b10e9a2f4e55 req-91d5c91f-fdb5-4035-8439-50adffe107ef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:04 np0005466031 nova_compute[235803]: 2025-10-02 12:45:04.051 2 DEBUG nova.compute.manager [req-625fb242-f9ba-453a-9409-b10e9a2f4e55 req-91d5c91f-fdb5-4035-8439-50adffe107ef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] No waiting events found dispatching network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:45:04 np0005466031 nova_compute[235803]: 2025-10-02 12:45:04.051 2 WARNING nova.compute.manager [req-625fb242-f9ba-453a-9409-b10e9a2f4e55 req-91d5c91f-fdb5-4035-8439-50adffe107ef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received unexpected event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:45:04 np0005466031 nova_compute[235803]: 2025-10-02 12:45:04.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:04 np0005466031 nova_compute[235803]: 2025-10-02 12:45:04.291 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:04 np0005466031 nova_compute[235803]: 2025-10-02 12:45:04.291 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:45:04 np0005466031 nova_compute[235803]: 2025-10-02 12:45:04.291 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:45:04 np0005466031 nova_compute[235803]: 2025-10-02 12:45:04.306 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:45:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:04.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:04 np0005466031 nova_compute[235803]: 2025-10-02 12:45:04.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:04.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:45:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3038036194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:45:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:45:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3038036194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:45:06 np0005466031 podman[287555]: 2025-10-02 12:45:06.013986242 +0000 UTC m=+0.086463841 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:45:06 np0005466031 podman[287556]: 2025-10-02 12:45:06.026220625 +0000 UTC m=+0.094110372 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 08:45:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:06.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:45:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:45:06 np0005466031 nova_compute[235803]: 2025-10-02 12:45:06.542 2 DEBUG nova.network.neutron [None req-4eee851c-a2b0-444d-bd33-b78e21a4a3fb b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Updating instance_info_cache with network_info: [{"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:45:06 np0005466031 nova_compute[235803]: 2025-10-02 12:45:06.560 2 DEBUG oslo_concurrency.lockutils [None req-4eee851c-a2b0-444d-bd33-b78e21a4a3fb b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Releasing lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:45:06 np0005466031 nova_compute[235803]: 2025-10-02 12:45:06.561 2 DEBUG nova.objects.instance [None req-4eee851c-a2b0-444d-bd33-b78e21a4a3fb b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'flavor' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:06 np0005466031 nova_compute[235803]: 2025-10-02 12:45:06.562 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:45:06 np0005466031 nova_compute[235803]: 2025-10-02 12:45:06.563 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:45:06 np0005466031 nova_compute[235803]: 2025-10-02 12:45:06.563 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e289 e289: 3 total, 3 up, 3 in
Oct  2 08:45:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:06.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:06 np0005466031 kernel: tap5e2a83a5-11 (unregistering): left promiscuous mode
Oct  2 08:45:06 np0005466031 NetworkManager[44907]: <info>  [1759409106.8245] device (tap5e2a83a5-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:45:06 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:06Z|00449|binding|INFO|Releasing lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe from this chassis (sb_readonly=0)
Oct  2 08:45:06 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:06Z|00450|binding|INFO|Setting lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe down in Southbound
Oct  2 08:45:06 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:06Z|00451|binding|INFO|Removing iface tap5e2a83a5-11 ovn-installed in OVS
Oct  2 08:45:06 np0005466031 nova_compute[235803]: 2025-10-02 12:45:06.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:06 np0005466031 nova_compute[235803]: 2025-10-02 12:45:06.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:06.846 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:09:61 10.100.0.6'], port_security=['fa:16:3e:34:09:61 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '17266fac-3772-4df3-b4d7-c47d8292f6d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7035a43e-de6a-4b86-a3b2-d2e40c9755d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=5e2a83a5-11e1-45b1-82ce-5fee577f67fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:06.848 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 5e2a83a5-11e1-45b1-82ce-5fee577f67fe in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis#033[00m
Oct  2 08:45:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:06.849 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3934261-ba19-494f-8d9f-23360c5b30b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:45:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:06.850 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4b411d29-895b-468c-a5db-5bb9d76719a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:06 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:06.850 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace which is not needed anymore#033[00m
Oct  2 08:45:06 np0005466031 nova_compute[235803]: 2025-10-02 12:45:06.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:06 np0005466031 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000075.scope: Deactivated successfully.
Oct  2 08:45:06 np0005466031 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000075.scope: Consumed 6.091s CPU time.
Oct  2 08:45:06 np0005466031 systemd-machined[192227]: Machine qemu-50-instance-00000075 terminated.
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.026 2 INFO nova.virt.libvirt.driver [-] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Instance destroyed successfully.#033[00m
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.026 2 DEBUG nova.objects.instance [None req-4eee851c-a2b0-444d-bd33-b78e21a4a3fb b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'numa_topology' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:07 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287416]: [NOTICE]   (287422) : haproxy version is 2.8.14-c23fe91
Oct  2 08:45:07 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287416]: [NOTICE]   (287422) : path to executable is /usr/sbin/haproxy
Oct  2 08:45:07 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287416]: [WARNING]  (287422) : Exiting Master process...
Oct  2 08:45:07 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287416]: [WARNING]  (287422) : Exiting Master process...
Oct  2 08:45:07 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287416]: [ALERT]    (287422) : Current worker (287424) exited with code 143 (Terminated)
Oct  2 08:45:07 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287416]: [WARNING]  (287422) : All workers exited. Exiting... (0)
Oct  2 08:45:07 np0005466031 systemd[1]: libpod-4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6.scope: Deactivated successfully.
Oct  2 08:45:07 np0005466031 podman[287648]: 2025-10-02 12:45:07.106910836 +0000 UTC m=+0.164917502 container died 4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:45:07 np0005466031 NetworkManager[44907]: <info>  [1759409107.1622] manager: (tap5e2a83a5-11): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Oct  2 08:45:07 np0005466031 systemd-udevd[287628]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:45:07 np0005466031 kernel: tap5e2a83a5-11: entered promiscuous mode
Oct  2 08:45:07 np0005466031 NetworkManager[44907]: <info>  [1759409107.1757] device (tap5e2a83a5-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:45:07 np0005466031 NetworkManager[44907]: <info>  [1759409107.1771] device (tap5e2a83a5-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:07Z|00452|binding|INFO|Claiming lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe for this chassis.
Oct  2 08:45:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:07Z|00453|binding|INFO|5e2a83a5-11e1-45b1-82ce-5fee577f67fe: Claiming fa:16:3e:34:09:61 10.100.0.6
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.196 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:09:61 10.100.0.6'], port_security=['fa:16:3e:34:09:61 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '17266fac-3772-4df3-b4d7-c47d8292f6d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7035a43e-de6a-4b86-a3b2-d2e40c9755d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=5e2a83a5-11e1-45b1-82ce-5fee577f67fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:07 np0005466031 systemd-machined[192227]: New machine qemu-51-instance-00000075.
Oct  2 08:45:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:07Z|00454|binding|INFO|Setting lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe ovn-installed in OVS
Oct  2 08:45:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:07Z|00455|binding|INFO|Setting lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe up in Southbound
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:07 np0005466031 systemd[1]: Started Virtual Machine qemu-51-instance-00000075.
Oct  2 08:45:07 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6-userdata-shm.mount: Deactivated successfully.
Oct  2 08:45:07 np0005466031 systemd[1]: var-lib-containers-storage-overlay-512dd9dbb44075dd628d6eb4b2c61f7a61902294fee05f2fa12dbc9d10acc518-merged.mount: Deactivated successfully.
Oct  2 08:45:07 np0005466031 podman[287648]: 2025-10-02 12:45:07.302070178 +0000 UTC m=+0.360076834 container cleanup 4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:45:07 np0005466031 systemd[1]: libpod-conmon-4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6.scope: Deactivated successfully.
Oct  2 08:45:07 np0005466031 podman[287705]: 2025-10-02 12:45:07.423840836 +0000 UTC m=+0.069546055 container remove 4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.434 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[20fb8faa-e2ad-448b-83e5-8fcdf4973127]: (4, ('Thu Oct  2 12:45:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6)\n4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6\nThu Oct  2 12:45:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6)\n4d62ed6db68ff59f7e89bd6f907aec6396cac5776561b50e7a68229292a9f7c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.435 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e087011b-ac3e-4446-b704-2995ef742cdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.436 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:07 np0005466031 kernel: tapf3934261-b0: left promiscuous mode
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.459 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2685efca-d35f-48d4-8de5-837dd4c33028]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.482 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9b89461b-1873-4a66-898c-4b7ee9df9924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.484 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bde342e5-2a4b-4e6d-a0c0-5f7b0c3cb057]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.509 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e81a06-4a84-49d4-90c9-732113a662b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695605, 'reachable_time': 41451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287720, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.513 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:45:07 np0005466031 systemd[1]: run-netns-ovnmeta\x2df3934261\x2dba19\x2d494f\x2d8d9f\x2d23360c5b30b9.mount: Deactivated successfully.
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.514 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[a87c2836-6a43-4b53-be72-eefc30e7381a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.515 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 5e2a83a5-11e1-45b1-82ce-5fee577f67fe in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.517 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3934261-ba19-494f-8d9f-23360c5b30b9#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.533 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[695b6992-2b7a-4895-ae7d-28b07c60abbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.535 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3934261-b1 in ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.544 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3934261-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.544 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3dadef7e-25b7-40de-9f62-a43d40ce4e51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.546 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[45f29b2a-46ae-4a04-bd0f-8e7f68ac3c0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.569 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6e0222-5b68-491b-ab4d-6126b201ad36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.587 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1777a368-c3a6-4af7-b5a9-a18e18e89662]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.615 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[2d81caca-3e50-42bf-a5b2-17d274c16c90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.620 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[45e2a708-7e22-44b0-b101-b6773fd61fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 NetworkManager[44907]: <info>  [1759409107.6220] manager: (tapf3934261-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/215)
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.653 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ce201d-2fde-4509-a6a1-9829373159ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.656 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[732dbcae-692e-4cdc-b67e-0f8fa876b47a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.669 2 DEBUG nova.compute.manager [req-77d399fd-f528-49a8-9724-f9cdd1f6d267 req-b0cc9e49-28e2-4802-bf0f-95212b70dac8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received event network-vif-unplugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.670 2 DEBUG oslo_concurrency.lockutils [req-77d399fd-f528-49a8-9724-f9cdd1f6d267 req-b0cc9e49-28e2-4802-bf0f-95212b70dac8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.670 2 DEBUG oslo_concurrency.lockutils [req-77d399fd-f528-49a8-9724-f9cdd1f6d267 req-b0cc9e49-28e2-4802-bf0f-95212b70dac8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.670 2 DEBUG oslo_concurrency.lockutils [req-77d399fd-f528-49a8-9724-f9cdd1f6d267 req-b0cc9e49-28e2-4802-bf0f-95212b70dac8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.671 2 DEBUG nova.compute.manager [req-77d399fd-f528-49a8-9724-f9cdd1f6d267 req-b0cc9e49-28e2-4802-bf0f-95212b70dac8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] No waiting events found dispatching network-vif-unplugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.671 2 WARNING nova.compute.manager [req-77d399fd-f528-49a8-9724-f9cdd1f6d267 req-b0cc9e49-28e2-4802-bf0f-95212b70dac8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received unexpected event network-vif-unplugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:45:07 np0005466031 NetworkManager[44907]: <info>  [1759409107.6819] device (tapf3934261-b0): carrier: link connected
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.689 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[83eb04c5-5f4d-421b-a816-5c66caa86a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.706 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2d32e526-fc2e-453e-a65b-06282c7165f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696327, 'reachable_time': 44819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287752, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.720 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[78ff1c65-790d-40d9-9a7a-c2017b34db41]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:9fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696327, 'tstamp': 696327}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287762, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.741 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[893b083d-dcc0-44df-be3a-cfd966501cb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696327, 'reachable_time': 44819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287765, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.774 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9c0ff3-caff-4c7c-b27a-229107d7923c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.843 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[51906e01-230c-4faf-8aed-d488584f96d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.844 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.845 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.845 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3934261-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:07 np0005466031 kernel: tapf3934261-b0: entered promiscuous mode
Oct  2 08:45:07 np0005466031 NetworkManager[44907]: <info>  [1759409107.8488] manager: (tapf3934261-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.851 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3934261-b0, col_values=(('external_ids', {'iface-id': '3890f7a6-6cc9-4237-a2a2-3c43818c1748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:07Z|00456|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.877 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:45:07 np0005466031 nova_compute[235803]: 2025-10-02 12:45:07.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.878 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e4fd65-b135-49b5-b430-8a47081f8d73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.879 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-f3934261-ba19-494f-8d9f-23360c5b30b9
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID f3934261-ba19-494f-8d9f-23360c5b30b9
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:45:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:07.881 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'env', 'PROCESS_TAG=haproxy-f3934261-ba19-494f-8d9f-23360c5b30b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3934261-ba19-494f-8d9f-23360c5b30b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:45:08 np0005466031 podman[287822]: 2025-10-02 12:45:08.270620638 +0000 UTC m=+0.054798910 container create 82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.278 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for 17266fac-3772-4df3-b4d7-c47d8292f6d6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.279 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409108.2782798, 17266fac-3772-4df3-b4d7-c47d8292f6d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.279 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:45:08 np0005466031 systemd[1]: Started libpod-conmon-82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b.scope.
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.313 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.326 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:45:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:08.335 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:08 np0005466031 podman[287822]: 2025-10-02 12:45:08.242808077 +0000 UTC m=+0.026986369 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:08 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.349 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.350 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409108.2799783, 17266fac-3772-4df3-b4d7-c47d8292f6d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.350 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] VM Started (Lifecycle Event)
Oct  2 08:45:08 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86684a76647ad8405887794029db9bc3ddbaad68195de573c487271235bab0c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:45:08 np0005466031 podman[287822]: 2025-10-02 12:45:08.37242676 +0000 UTC m=+0.156605042 container init 82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:45:08 np0005466031 podman[287822]: 2025-10-02 12:45:08.38144787 +0000 UTC m=+0.165626142 container start 82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.398 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.408 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:45:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:08.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:08 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287852]: [NOTICE]   (287859) : New worker (287861) forked
Oct  2 08:45:08 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287852]: [NOTICE]   (287859) : Loading success.
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.446 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct  2 08:45:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:08.457 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:45:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:08.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.904 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Updating instance_info_cache with network_info: [{"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.977 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.978 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.979 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.979 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.979 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:45:08 np0005466031 nova_compute[235803]: 2025-10-02 12:45:08.980 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.033 2 DEBUG nova.compute.manager [None req-4eee851c-a2b0-444d-bd33-b78e21a4a3fb b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.904 2 DEBUG nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.904 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.905 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.905 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.906 2 DEBUG nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] No waiting events found dispatching network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.906 2 WARNING nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received unexpected event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe for instance with vm_state active and task_state None.
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.907 2 DEBUG nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.907 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.907 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.908 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.908 2 DEBUG nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] No waiting events found dispatching network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.909 2 WARNING nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received unexpected event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe for instance with vm_state active and task_state None.
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.909 2 DEBUG nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.909 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.910 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.910 2 DEBUG oslo_concurrency.lockutils [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.911 2 DEBUG nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] No waiting events found dispatching network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:45:09 np0005466031 nova_compute[235803]: 2025-10-02 12:45:09.911 2 WARNING nova.compute.manager [req-2ec85507-95d1-41af-9a18-77c48b73e03e req-3f6ba36d-27a7-487c-8ff5-f0ee5904b86e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received unexpected event network-vif-plugged-5e2a83a5-11e1-45b1-82ce-5fee577f67fe for instance with vm_state active and task_state None.
Oct  2 08:45:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:10.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:10.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e290 e290: 3 total, 3 up, 3 in
Oct  2 08:45:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:12.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:12 np0005466031 podman[287874]: 2025-10-02 12:45:12.649512789 +0000 UTC m=+0.073806167 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:45:12 np0005466031 podman[287873]: 2025-10-02 12:45:12.66169557 +0000 UTC m=+0.090312173 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 08:45:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:12.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:14 np0005466031 nova_compute[235803]: 2025-10-02 12:45:14.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:45:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:14.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:14.460 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:45:14 np0005466031 nova_compute[235803]: 2025-10-02 12:45:14.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:45:14 np0005466031 nova_compute[235803]: 2025-10-02 12:45:14.733 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:45:14 np0005466031 nova_compute[235803]: 2025-10-02 12:45:14.734 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:45:14 np0005466031 nova_compute[235803]: 2025-10-02 12:45:14.734 2 INFO nova.compute.manager [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Unshelving
Oct  2 08:45:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:14.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:15 np0005466031 nova_compute[235803]: 2025-10-02 12:45:15.013 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:45:15 np0005466031 nova_compute[235803]: 2025-10-02 12:45:15.013 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:45:15 np0005466031 nova_compute[235803]: 2025-10-02 12:45:15.055 2 DEBUG nova.objects.instance [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'pci_requests' on Instance uuid 9aff2d67-195f-4081-9a1c-ba173a39af9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:45:15 np0005466031 nova_compute[235803]: 2025-10-02 12:45:15.142 2 DEBUG nova.objects.instance [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'numa_topology' on Instance uuid 9aff2d67-195f-4081-9a1c-ba173a39af9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:45:15 np0005466031 nova_compute[235803]: 2025-10-02 12:45:15.158 2 DEBUG nova.virt.hardware [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:45:15 np0005466031 nova_compute[235803]: 2025-10-02 12:45:15.158 2 INFO nova.compute.claims [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Claim successful on node compute-2.ctlplane.example.com
Oct  2 08:45:15 np0005466031 nova_compute[235803]: 2025-10-02 12:45:15.331 2 DEBUG oslo_concurrency.processutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:45:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:15 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3501430886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:15 np0005466031 nova_compute[235803]: 2025-10-02 12:45:15.809 2 DEBUG oslo_concurrency.processutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:45:15 np0005466031 nova_compute[235803]: 2025-10-02 12:45:15.816 2 DEBUG nova.compute.provider_tree [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:45:15 np0005466031 nova_compute[235803]: 2025-10-02 12:45:15.847 2 DEBUG nova.scheduler.client.report [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:45:15 np0005466031 nova_compute[235803]: 2025-10-02 12:45:15.874 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:45:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:16 np0005466031 nova_compute[235803]: 2025-10-02 12:45:16.093 2 INFO nova.network.neutron [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Updating port fbc8e36a-6d1e-4928-ae02-cc1c07215c0c with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 08:45:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:16.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:16.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:16 np0005466031 nova_compute[235803]: 2025-10-02 12:45:16.937 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:45:16 np0005466031 nova_compute[235803]: 2025-10-02 12:45:16.937 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquired lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:45:16 np0005466031 nova_compute[235803]: 2025-10-02 12:45:16.938 2 DEBUG nova.network.neutron [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:45:17 np0005466031 nova_compute[235803]: 2025-10-02 12:45:17.070 2 DEBUG nova.compute.manager [req-49d676a1-3e5e-421f-89ec-dddc90b2d7ea req-ad7ba0e5-7d2e-4910-9c52-746fb4660fae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-changed-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:17 np0005466031 nova_compute[235803]: 2025-10-02 12:45:17.070 2 DEBUG nova.compute.manager [req-49d676a1-3e5e-421f-89ec-dddc90b2d7ea req-ad7ba0e5-7d2e-4910-9c52-746fb4660fae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Refreshing instance network info cache due to event network-changed-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:45:17 np0005466031 nova_compute[235803]: 2025-10-02 12:45:17.071 2 DEBUG oslo_concurrency.lockutils [req-49d676a1-3e5e-421f-89ec-dddc90b2d7ea req-ad7ba0e5-7d2e-4910-9c52-746fb4660fae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:45:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:45:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/855172294' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:45:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:45:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/855172294' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:45:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:45:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:18.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.032 2 DEBUG nova.network.neutron [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Updating instance_info_cache with network_info: [{"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.057 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Releasing lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.061 2 DEBUG nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.061 2 INFO nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Creating image(s)#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.092 2 DEBUG nova.storage.rbd_utils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.096 2 DEBUG nova.objects.instance [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9aff2d67-195f-4081-9a1c-ba173a39af9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.099 2 DEBUG oslo_concurrency.lockutils [req-49d676a1-3e5e-421f-89ec-dddc90b2d7ea req-ad7ba0e5-7d2e-4910-9c52-746fb4660fae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.099 2 DEBUG nova.network.neutron [req-49d676a1-3e5e-421f-89ec-dddc90b2d7ea req-ad7ba0e5-7d2e-4910-9c52-746fb4660fae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Refreshing network info cache for port fbc8e36a-6d1e-4928-ae02-cc1c07215c0c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.138 2 DEBUG nova.storage.rbd_utils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.166 2 DEBUG nova.storage.rbd_utils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.171 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "432dee9de4f0ceba7bed4d3350a374380aaf4256" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.172 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "432dee9de4f0ceba7bed4d3350a374380aaf4256" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.404 2 DEBUG nova.virt.libvirt.imagebackend [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/3866598c-0b46-42ff-ba05-28dab62cd167/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/3866598c-0b46-42ff-ba05-28dab62cd167/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.477 2 DEBUG nova.virt.libvirt.imagebackend [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/3866598c-0b46-42ff-ba05-28dab62cd167/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.478 2 DEBUG nova.storage.rbd_utils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] cloning images/3866598c-0b46-42ff-ba05-28dab62cd167@snap to None/9aff2d67-195f-4081-9a1c-ba173a39af9d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.738 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "432dee9de4f0ceba7bed4d3350a374380aaf4256" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:19 np0005466031 nova_compute[235803]: 2025-10-02 12:45:19.910 2 DEBUG nova.objects.instance [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'migration_context' on Instance uuid 9aff2d67-195f-4081-9a1c-ba173a39af9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:20 np0005466031 nova_compute[235803]: 2025-10-02 12:45:20.009 2 DEBUG nova.storage.rbd_utils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] flattening vms/9aff2d67-195f-4081-9a1c-ba173a39af9d_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:45:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:20.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:20.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.218 2 DEBUG nova.network.neutron [req-49d676a1-3e5e-421f-89ec-dddc90b2d7ea req-ad7ba0e5-7d2e-4910-9c52-746fb4660fae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Updated VIF entry in instance network info cache for port fbc8e36a-6d1e-4928-ae02-cc1c07215c0c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.219 2 DEBUG nova.network.neutron [req-49d676a1-3e5e-421f-89ec-dddc90b2d7ea req-ad7ba0e5-7d2e-4910-9c52-746fb4660fae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Updating instance_info_cache with network_info: [{"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.237 2 DEBUG oslo_concurrency.lockutils [req-49d676a1-3e5e-421f-89ec-dddc90b2d7ea req-ad7ba0e5-7d2e-4910-9c52-746fb4660fae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-9aff2d67-195f-4081-9a1c-ba173a39af9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.724 2 DEBUG nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Image rbd:vms/9aff2d67-195f-4081-9a1c-ba173a39af9d_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.725 2 DEBUG nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.725 2 DEBUG nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Ensure instance console log exists: /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.725 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.726 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.726 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.728 2 DEBUG nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Start _get_guest_xml network_info=[{"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:44:41Z,direct_url=<?>,disk_format='raw',id=3866598c-0b46-42ff-ba05-28dab62cd167,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-1355261775-shelved',owner='f7e2edef094b4ba5a56a5ec5ffce911e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:45:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.733 2 WARNING nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.737 2 DEBUG nova.virt.libvirt.host [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.738 2 DEBUG nova.virt.libvirt.host [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.742 2 DEBUG nova.virt.libvirt.host [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.742 2 DEBUG nova.virt.libvirt.host [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.743 2 DEBUG nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.744 2 DEBUG nova.virt.hardware [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:44:41Z,direct_url=<?>,disk_format='raw',id=3866598c-0b46-42ff-ba05-28dab62cd167,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-1355261775-shelved',owner='f7e2edef094b4ba5a56a5ec5ffce911e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:45:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.744 2 DEBUG nova.virt.hardware [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.744 2 DEBUG nova.virt.hardware [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.745 2 DEBUG nova.virt.hardware [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.745 2 DEBUG nova.virt.hardware [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.745 2 DEBUG nova.virt.hardware [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.745 2 DEBUG nova.virt.hardware [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.745 2 DEBUG nova.virt.hardware [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.746 2 DEBUG nova.virt.hardware [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.746 2 DEBUG nova.virt.hardware [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.746 2 DEBUG nova.virt.hardware [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.746 2 DEBUG nova.objects.instance [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9aff2d67-195f-4081-9a1c-ba173a39af9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:21 np0005466031 nova_compute[235803]: 2025-10-02 12:45:21.768 2 DEBUG oslo_concurrency.processutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:45:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2690838680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.257 2 DEBUG oslo_concurrency.processutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.288 2 DEBUG nova.storage.rbd_utils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.293 2 DEBUG oslo_concurrency.processutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:22.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:22 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:22Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:34:09:61 10.100.0.6
Oct  2 08:45:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:45:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/96683948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.762 2 DEBUG oslo_concurrency.processutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.765 2 DEBUG nova.virt.libvirt.vif [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:43:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1355261775',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1355261775',id=121,image_ref='3866598c-0b46-42ff-ba05-28dab62cd167',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1498791303',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:44:09Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wmoytp73',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member',shelved_at='2025-10-02T12:45:01.521455',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3866598c-0b46-42ff-ba05-28dab62cd167'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=9aff2d67-195f-4081-9a1c-ba173a39af9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.765 2 DEBUG nova.network.os_vif_util [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.766 2 DEBUG nova.network.os_vif_util [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.768 2 DEBUG nova.objects.instance [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'pci_devices' on Instance uuid 9aff2d67-195f-4081-9a1c-ba173a39af9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.791 2 DEBUG nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  <uuid>9aff2d67-195f-4081-9a1c-ba173a39af9d</uuid>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  <name>instance-00000079</name>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <nova:name>tempest-AttachVolumeShelveTestJSON-server-1355261775</nova:name>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:45:21</nova:creationTime>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <nova:user uuid="3151966e941f4652ba984616bfa760c7">tempest-AttachVolumeShelveTestJSON-1943710095-project-member</nova:user>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <nova:project uuid="f7e2edef094b4ba5a56a5ec5ffce911e">tempest-AttachVolumeShelveTestJSON-1943710095</nova:project>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="3866598c-0b46-42ff-ba05-28dab62cd167"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <nova:port uuid="fbc8e36a-6d1e-4928-ae02-cc1c07215c0c">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <entry name="serial">9aff2d67-195f-4081-9a1c-ba173a39af9d</entry>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <entry name="uuid">9aff2d67-195f-4081-9a1c-ba173a39af9d</entry>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/9aff2d67-195f-4081-9a1c-ba173a39af9d_disk">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/9aff2d67-195f-4081-9a1c-ba173a39af9d_disk.config">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:9c:a6:e5"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <target dev="tapfbc8e36a-6d"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/console.log" append="off"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <input type="keyboard" bus="usb"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:45:22 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:45:22 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:45:22 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:45:22 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.793 2 DEBUG nova.compute.manager [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Preparing to wait for external event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.794 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.794 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.794 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.795 2 DEBUG nova.virt.libvirt.vif [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:43:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1355261775',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1355261775',id=121,image_ref='3866598c-0b46-42ff-ba05-28dab62cd167',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1498791303',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:44:09Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wmoytp73',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member',shelved_at='2025-10-02T12:45:01.521455',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3866598c-0b46-42ff-ba05-28dab62cd167'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=9aff2d67-195f-4081-9a1c-ba173a39af9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.795 2 DEBUG nova.network.os_vif_util [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.796 2 DEBUG nova.network.os_vif_util [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.796 2 DEBUG os_vif [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.798 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.798 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.802 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbc8e36a-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfbc8e36a-6d, col_values=(('external_ids', {'iface-id': 'fbc8e36a-6d1e-4928-ae02-cc1c07215c0c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:a6:e5', 'vm-uuid': '9aff2d67-195f-4081-9a1c-ba173a39af9d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:22 np0005466031 NetworkManager[44907]: <info>  [1759409122.8057] manager: (tapfbc8e36a-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.814 2 INFO os_vif [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d')#033[00m
Oct  2 08:45:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:22.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.899 2 DEBUG nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.899 2 DEBUG nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.899 2 DEBUG nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No VIF found with MAC fa:16:3e:9c:a6:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.900 2 INFO nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Using config drive#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.927 2 DEBUG nova.storage.rbd_utils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.948 2 DEBUG nova.objects.instance [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9aff2d67-195f-4081-9a1c-ba173a39af9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:22 np0005466031 nova_compute[235803]: 2025-10-02 12:45:22.997 2 DEBUG nova.objects.instance [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'keypairs' on Instance uuid 9aff2d67-195f-4081-9a1c-ba173a39af9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:23 np0005466031 nova_compute[235803]: 2025-10-02 12:45:23.240 2 INFO nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Creating config drive at /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/disk.config#033[00m
Oct  2 08:45:23 np0005466031 nova_compute[235803]: 2025-10-02 12:45:23.252 2 DEBUG oslo_concurrency.processutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4kgtc5dz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:23 np0005466031 nova_compute[235803]: 2025-10-02 12:45:23.386 2 DEBUG oslo_concurrency.processutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4kgtc5dz" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:23 np0005466031 nova_compute[235803]: 2025-10-02 12:45:23.421 2 DEBUG nova.storage.rbd_utils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:23 np0005466031 nova_compute[235803]: 2025-10-02 12:45:23.425 2 DEBUG oslo_concurrency.processutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/disk.config 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:23 np0005466031 nova_compute[235803]: 2025-10-02 12:45:23.614 2 DEBUG oslo_concurrency.processutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/disk.config 9aff2d67-195f-4081-9a1c-ba173a39af9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:23 np0005466031 nova_compute[235803]: 2025-10-02 12:45:23.615 2 INFO nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Deleting local config drive /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d/disk.config because it was imported into RBD.#033[00m
Oct  2 08:45:23 np0005466031 NetworkManager[44907]: <info>  [1759409123.6681] manager: (tapfbc8e36a-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/218)
Oct  2 08:45:23 np0005466031 kernel: tapfbc8e36a-6d: entered promiscuous mode
Oct  2 08:45:23 np0005466031 nova_compute[235803]: 2025-10-02 12:45:23.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:23Z|00457|binding|INFO|Claiming lport fbc8e36a-6d1e-4928-ae02-cc1c07215c0c for this chassis.
Oct  2 08:45:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:23Z|00458|binding|INFO|fbc8e36a-6d1e-4928-ae02-cc1c07215c0c: Claiming fa:16:3e:9c:a6:e5 10.100.0.12
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.682 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:a6:e5 10.100.0.12'], port_security=['fa:16:3e:9c:a6:e5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9aff2d67-195f-4081-9a1c-ba173a39af9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385a384c-5df0-4b04-b928-517a46df04f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7e2edef094b4ba5a56a5ec5ffce911e', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e5d4b7f8-4549-4722-8356-487047feb0fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5110437-1084-431d-86cb-6ad2d219bdc1, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.683 141898 INFO neutron.agent.ovn.metadata.agent [-] Port fbc8e36a-6d1e-4928-ae02-cc1c07215c0c in datapath 385a384c-5df0-4b04-b928-517a46df04f4 bound to our chassis#033[00m
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.684 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 385a384c-5df0-4b04-b928-517a46df04f4#033[00m
Oct  2 08:45:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:23Z|00459|binding|INFO|Setting lport fbc8e36a-6d1e-4928-ae02-cc1c07215c0c ovn-installed in OVS
Oct  2 08:45:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:23Z|00460|binding|INFO|Setting lport fbc8e36a-6d1e-4928-ae02-cc1c07215c0c up in Southbound
Oct  2 08:45:23 np0005466031 nova_compute[235803]: 2025-10-02 12:45:23.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.703 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[90bf9b9c-ce0f-4cd9-a254-6e149f9fc173]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.704 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap385a384c-51 in ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:45:23 np0005466031 nova_compute[235803]: 2025-10-02 12:45:23.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.707 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap385a384c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.707 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c5d8ff-c2cc-4757-9590-f81ca534f2c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.708 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[03adead3-2107-4fb6-bcff-d032875e1975]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:23 np0005466031 systemd-machined[192227]: New machine qemu-52-instance-00000079.
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.723 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[530505fe-81c7-4f32-8438-15226eca4494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:23 np0005466031 systemd[1]: Started Virtual Machine qemu-52-instance-00000079.
Oct  2 08:45:23 np0005466031 systemd-udevd[288340]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.738 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c53c4d85-d69a-4871-b2cc-a869a34daa69]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:23 np0005466031 NetworkManager[44907]: <info>  [1759409123.7509] device (tapfbc8e36a-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:45:23 np0005466031 NetworkManager[44907]: <info>  [1759409123.7517] device (tapfbc8e36a-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.772 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[7d34dbdc-8754-4069-9ea4-b3c0ebc830f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.784 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0596b82a-f324-49f0-bafc-069241ed90b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:23 np0005466031 NetworkManager[44907]: <info>  [1759409123.7863] manager: (tap385a384c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Oct  2 08:45:23 np0005466031 systemd-udevd[288346]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.827 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[aa52974c-fb6d-4bbb-afcf-f4bf8c07535b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.830 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ac28c78f-1f5b-43f0-8027-758a4352075e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:23 np0005466031 NetworkManager[44907]: <info>  [1759409123.8648] device (tap385a384c-50): carrier: link connected
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.871 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[615e81a8-f6c4-4a95-bed7-42ea3f5d6a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.889 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2caca252-962e-448a-a42a-fb127361af26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385a384c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:d4:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697945, 'reachable_time': 17573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288371, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.911 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1813b0-1b7d-4259-9efa-11b526f3f467]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:d461'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697945, 'tstamp': 697945}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288372, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.930 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[597c60f2-deea-40e8-b1a7-405d3738c934]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385a384c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:d4:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697945, 'reachable_time': 17573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288376, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:23.971 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3561ed82-961a-498d-82c9-a8e96df4e8b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:24.032 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[faa8a2bb-0c69-48d8-b5ac-0bb141212162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:24.033 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385a384c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:24.033 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:24.034 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap385a384c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:24 np0005466031 kernel: tap385a384c-50: entered promiscuous mode
Oct  2 08:45:24 np0005466031 NetworkManager[44907]: <info>  [1759409124.0824] manager: (tap385a384c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:24.089 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap385a384c-50, col_values=(('external_ids', {'iface-id': '12496c3c-f50d-4104-bfb7-81f1aa24617e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:24 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:24Z|00461|binding|INFO|Releasing lport 12496c3c-f50d-4104-bfb7-81f1aa24617e from this chassis (sb_readonly=0)
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:24.094 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:24.095 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c62597-bbba-4644-b798-3b83a41ca548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:24.096 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-385a384c-5df0-4b04-b928-517a46df04f4
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 385a384c-5df0-4b04-b928-517a46df04f4
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:45:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:24.097 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'env', 'PROCESS_TAG=haproxy-385a384c-5df0-4b04-b928-517a46df04f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/385a384c-5df0-4b04-b928-517a46df04f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:24.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:24 np0005466031 podman[288447]: 2025-10-02 12:45:24.535256456 +0000 UTC m=+0.071408478 container create df85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.570 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409124.5695148, 9aff2d67-195f-4081-9a1c-ba173a39af9d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.571 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] VM Started (Lifecycle Event)#033[00m
Oct  2 08:45:24 np0005466031 podman[288447]: 2025-10-02 12:45:24.495769638 +0000 UTC m=+0.031921750 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.590 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:24 np0005466031 systemd[1]: Started libpod-conmon-df85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106.scope.
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.597 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409124.5697112, 9aff2d67-195f-4081-9a1c-ba173a39af9d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.597 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.616 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:24 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.622 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:45:24 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ef377adac24efdd2602ee2fb7aabef0f9958557f44783bfed139f64f63c4b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.636 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.637 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.638 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.638 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.638 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.638 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.643 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:45:24 np0005466031 podman[288447]: 2025-10-02 12:45:24.650146755 +0000 UTC m=+0.186298787 container init df85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 08:45:24 np0005466031 podman[288447]: 2025-10-02 12:45:24.658430724 +0000 UTC m=+0.194582746 container start df85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.671 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.671 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Image id 3866598c-0b46-42ff-ba05-28dab62cd167 yields fingerprint 432dee9de4f0ceba7bed4d3350a374380aaf4256 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.672 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.672 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Image id 423b8b5f-aab8-418b-8fad-d82c90818bdd yields fingerprint 472c3cad2e339908bc4a8cea12fc22c04fcd93b6 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.672 2 INFO nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] image 423b8b5f-aab8-418b-8fad-d82c90818bdd at (/var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6): checking#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.672 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] image 423b8b5f-aab8-418b-8fad-d82c90818bdd at (/var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.674 2 INFO oslo.privsep.daemon [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpyvpekicq/privsep.sock']#033[00m
Oct  2 08:45:24 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[288462]: [NOTICE]   (288466) : New worker (288468) forked
Oct  2 08:45:24 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[288462]: [NOTICE]   (288466) : Loading success.
Oct  2 08:45:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:24.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.954 2 DEBUG nova.compute.manager [req-ced91fb0-356b-48b9-818c-5b5019a16b3e req-9d70220f-1e8e-4bc5-89ab-984fd35126d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.954 2 DEBUG oslo_concurrency.lockutils [req-ced91fb0-356b-48b9-818c-5b5019a16b3e req-9d70220f-1e8e-4bc5-89ab-984fd35126d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.955 2 DEBUG oslo_concurrency.lockutils [req-ced91fb0-356b-48b9-818c-5b5019a16b3e req-9d70220f-1e8e-4bc5-89ab-984fd35126d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.955 2 DEBUG oslo_concurrency.lockutils [req-ced91fb0-356b-48b9-818c-5b5019a16b3e req-9d70220f-1e8e-4bc5-89ab-984fd35126d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.955 2 DEBUG nova.compute.manager [req-ced91fb0-356b-48b9-818c-5b5019a16b3e req-9d70220f-1e8e-4bc5-89ab-984fd35126d7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Processing event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.956 2 DEBUG nova.compute.manager [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.958 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409124.9585786, 9aff2d67-195f-4081-9a1c-ba173a39af9d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.959 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.962 2 DEBUG nova.virt.libvirt.driver [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.965 2 INFO nova.virt.libvirt.driver [-] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Instance spawned successfully.#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.980 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:24 np0005466031 nova_compute[235803]: 2025-10-02 12:45:24.988 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.008 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.435 2 INFO oslo.privsep.daemon [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.306 18228 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.310 18228 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.312 18228 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.312 18228 INFO oslo.privsep.daemon [-] privsep daemon running as pid 18228#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.538 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] 17266fac-3772-4df3-b4d7-c47d8292f6d6 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.539 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] 9aff2d67-195f-4081-9a1c-ba173a39af9d is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.539 2 WARNING nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.539 2 WARNING nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.539 2 INFO nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Active base files: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.539 2 INFO nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Removable base files: /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829 /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.539 2 INFO nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.540 2 INFO nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.540 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.540 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct  2 08:45:25 np0005466031 nova_compute[235803]: 2025-10-02 12:45:25.540 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct  2 08:45:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:25.853 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:25.854 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:25.854 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:45:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:45:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:26.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:27 np0005466031 nova_compute[235803]: 2025-10-02 12:45:27.123 2 DEBUG nova.compute.manager [req-1858c5ef-9c89-431a-b81d-cd735f9612de req-46df63d4-ea69-46d3-bba4-ff20d1843629 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:27 np0005466031 nova_compute[235803]: 2025-10-02 12:45:27.124 2 DEBUG oslo_concurrency.lockutils [req-1858c5ef-9c89-431a-b81d-cd735f9612de req-46df63d4-ea69-46d3-bba4-ff20d1843629 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:27 np0005466031 nova_compute[235803]: 2025-10-02 12:45:27.125 2 DEBUG oslo_concurrency.lockutils [req-1858c5ef-9c89-431a-b81d-cd735f9612de req-46df63d4-ea69-46d3-bba4-ff20d1843629 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:27 np0005466031 nova_compute[235803]: 2025-10-02 12:45:27.125 2 DEBUG oslo_concurrency.lockutils [req-1858c5ef-9c89-431a-b81d-cd735f9612de req-46df63d4-ea69-46d3-bba4-ff20d1843629 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:27 np0005466031 nova_compute[235803]: 2025-10-02 12:45:27.125 2 DEBUG nova.compute.manager [req-1858c5ef-9c89-431a-b81d-cd735f9612de req-46df63d4-ea69-46d3-bba4-ff20d1843629 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] No waiting events found dispatching network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:45:27 np0005466031 nova_compute[235803]: 2025-10-02 12:45:27.126 2 WARNING nova.compute.manager [req-1858c5ef-9c89-431a-b81d-cd735f9612de req-46df63d4-ea69-46d3-bba4-ff20d1843629 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received unexpected event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Oct  2 08:45:27 np0005466031 nova_compute[235803]: 2025-10-02 12:45:27.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e291 e291: 3 total, 3 up, 3 in
Oct  2 08:45:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:28.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:28.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:29 np0005466031 nova_compute[235803]: 2025-10-02 12:45:29.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:29 np0005466031 nova_compute[235803]: 2025-10-02 12:45:29.815 2 DEBUG nova.compute.manager [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:29 np0005466031 nova_compute[235803]: 2025-10-02 12:45:29.907 2 DEBUG oslo_concurrency.lockutils [None req-fe737fd3-35f6-4645-a563-edcb8d8f5b69 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 15.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:30.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:45:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:30.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:45:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:32.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:32 np0005466031 nova_compute[235803]: 2025-10-02 12:45:32.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 08:45:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:32.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 08:45:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:34.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:34 np0005466031 nova_compute[235803]: 2025-10-02 12:45:34.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:34 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:34Z|00462|binding|INFO|Releasing lport 12496c3c-f50d-4104-bfb7-81f1aa24617e from this chassis (sb_readonly=0)
Oct  2 08:45:34 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:34Z|00463|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct  2 08:45:34 np0005466031 nova_compute[235803]: 2025-10-02 12:45:34.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:45:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:34.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:45:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:36.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:36 np0005466031 podman[288489]: 2025-10-02 12:45:36.627736847 +0000 UTC m=+0.054051298 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  2 08:45:36 np0005466031 podman[288490]: 2025-10-02 12:45:36.668343317 +0000 UTC m=+0.092212247 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:45:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:36.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:37 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:37Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:a6:e5 10.100.0.12
Oct  2 08:45:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e292 e292: 3 total, 3 up, 3 in
Oct  2 08:45:37 np0005466031 nova_compute[235803]: 2025-10-02 12:45:37.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:38.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:38.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:39 np0005466031 nova_compute[235803]: 2025-10-02 12:45:39.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:40.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:40.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:45:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:42.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:45:42 np0005466031 nova_compute[235803]: 2025-10-02 12:45:42.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:42.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:43 np0005466031 nova_compute[235803]: 2025-10-02 12:45:43.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:43 np0005466031 podman[288585]: 2025-10-02 12:45:43.626918937 +0000 UTC m=+0.060504594 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:45:43 np0005466031 podman[288586]: 2025-10-02 12:45:43.627649148 +0000 UTC m=+0.059247598 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:45:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:44.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:44 np0005466031 nova_compute[235803]: 2025-10-02 12:45:44.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:44.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:46.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.682 2 DEBUG oslo_concurrency.lockutils [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.683 2 DEBUG oslo_concurrency.lockutils [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.683 2 DEBUG oslo_concurrency.lockutils [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.683 2 DEBUG oslo_concurrency.lockutils [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.684 2 DEBUG oslo_concurrency.lockutils [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.684 2 INFO nova.compute.manager [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Terminating instance#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.685 2 DEBUG nova.compute.manager [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:45:46 np0005466031 kernel: tapfbc8e36a-6d (unregistering): left promiscuous mode
Oct  2 08:45:46 np0005466031 NetworkManager[44907]: <info>  [1759409146.7328] device (tapfbc8e36a-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:45:46 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:46Z|00464|binding|INFO|Releasing lport fbc8e36a-6d1e-4928-ae02-cc1c07215c0c from this chassis (sb_readonly=0)
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:46 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:46Z|00465|binding|INFO|Setting lport fbc8e36a-6d1e-4928-ae02-cc1c07215c0c down in Southbound
Oct  2 08:45:46 np0005466031 ovn_controller[132413]: 2025-10-02T12:45:46Z|00466|binding|INFO|Removing iface tapfbc8e36a-6d ovn-installed in OVS
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:46.757 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:a6:e5 10.100.0.12'], port_security=['fa:16:3e:9c:a6:e5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9aff2d67-195f-4081-9a1c-ba173a39af9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385a384c-5df0-4b04-b928-517a46df04f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7e2edef094b4ba5a56a5ec5ffce911e', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'e5d4b7f8-4549-4722-8356-487047feb0fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.232', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5110437-1084-431d-86cb-6ad2d219bdc1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:46.759 141898 INFO neutron.agent.ovn.metadata.agent [-] Port fbc8e36a-6d1e-4928-ae02-cc1c07215c0c in datapath 385a384c-5df0-4b04-b928-517a46df04f4 unbound from our chassis#033[00m
Oct  2 08:45:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:46.761 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 385a384c-5df0-4b04-b928-517a46df04f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:45:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:46.763 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[53838e6b-e067-4fb0-ae24-58e052738587]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:46.763 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 namespace which is not needed anymore#033[00m
Oct  2 08:45:46 np0005466031 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000079.scope: Deactivated successfully.
Oct  2 08:45:46 np0005466031 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000079.scope: Consumed 13.864s CPU time.
Oct  2 08:45:46 np0005466031 systemd-machined[192227]: Machine qemu-52-instance-00000079 terminated.
Oct  2 08:45:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:46.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:46 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[288462]: [NOTICE]   (288466) : haproxy version is 2.8.14-c23fe91
Oct  2 08:45:46 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[288462]: [NOTICE]   (288466) : path to executable is /usr/sbin/haproxy
Oct  2 08:45:46 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[288462]: [WARNING]  (288466) : Exiting Master process...
Oct  2 08:45:46 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[288462]: [ALERT]    (288466) : Current worker (288468) exited with code 143 (Terminated)
Oct  2 08:45:46 np0005466031 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[288462]: [WARNING]  (288466) : All workers exited. Exiting... (0)
Oct  2 08:45:46 np0005466031 systemd[1]: libpod-df85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106.scope: Deactivated successfully.
Oct  2 08:45:46 np0005466031 podman[288651]: 2025-10-02 12:45:46.89571875 +0000 UTC m=+0.049203938 container died df85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.923 2 INFO nova.virt.libvirt.driver [-] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Instance destroyed successfully.#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.924 2 DEBUG nova.objects.instance [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'resources' on Instance uuid 9aff2d67-195f-4081-9a1c-ba173a39af9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:46 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106-userdata-shm.mount: Deactivated successfully.
Oct  2 08:45:46 np0005466031 systemd[1]: var-lib-containers-storage-overlay-52ef377adac24efdd2602ee2fb7aabef0f9958557f44783bfed139f64f63c4b0-merged.mount: Deactivated successfully.
Oct  2 08:45:46 np0005466031 podman[288651]: 2025-10-02 12:45:46.942843068 +0000 UTC m=+0.096328256 container cleanup df85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.943 2 DEBUG nova.virt.libvirt.vif [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:43:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1355261775',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1355261775',id=121,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJqnSC5dEFNVxJNe6UAIIaljTk9QXiRqWs9XkOwP1Uo3z0m7kLVKnpN3LhUWVriRnpghb9/lFHsZ1jstgRcNV8lxwQ1W9dxXw6nQRynMb+rfh9iIE+CgS9POWn5d32lCvQ==',key_name='tempest-keypair-1498791303',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:45:29Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wmoytp73',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:45:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=9aff2d67-195f-4081-9a1c-ba173a39af9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.944 2 DEBUG nova.network.os_vif_util [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "address": "fa:16:3e:9c:a6:e5", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc8e36a-6d", "ovs_interfaceid": "fbc8e36a-6d1e-4928-ae02-cc1c07215c0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.944 2 DEBUG nova.network.os_vif_util [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.945 2 DEBUG os_vif [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.947 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbc8e36a-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:45:46 np0005466031 nova_compute[235803]: 2025-10-02 12:45:46.955 2 INFO os_vif [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:a6:e5,bridge_name='br-int',has_traffic_filtering=True,id=fbc8e36a-6d1e-4928-ae02-cc1c07215c0c,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc8e36a-6d')#033[00m
Oct  2 08:45:46 np0005466031 systemd[1]: libpod-conmon-df85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106.scope: Deactivated successfully.
Oct  2 08:45:47 np0005466031 podman[288691]: 2025-10-02 12:45:47.012072252 +0000 UTC m=+0.041802485 container remove df85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:45:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:47.018 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3f4dce-e361-41a9-a338-86554355f105]: (4, ('Thu Oct  2 12:45:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 (df85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106)\ndf85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106\nThu Oct  2 12:45:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 (df85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106)\ndf85fe37fcefad5f15214e98bf6992df44931431e6e80ae1e4618158d1cb0106\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:47.020 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[54d8e255-50a8-42dc-93b4-9d625a94ad12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:47.021 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385a384c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:47 np0005466031 nova_compute[235803]: 2025-10-02 12:45:47.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:47 np0005466031 kernel: tap385a384c-50: left promiscuous mode
Oct  2 08:45:47 np0005466031 nova_compute[235803]: 2025-10-02 12:45:47.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:47.074 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9348cdc0-ea02-43fe-8a74-cfdf3b456d75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:47.100 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[af4ba052-bd61-4d5a-bee0-7b11cf5824bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:47.101 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3faca316-5d49-464e-b221-3b963e6791a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:47.117 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e8314fee-71af-48ab-a40c-5c35960de3e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697935, 'reachable_time': 28889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288725, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:47.119 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:45:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:45:47.120 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[df6b5ecd-ba4b-463b-8065-f59a465bd2ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:47 np0005466031 systemd[1]: run-netns-ovnmeta\x2d385a384c\x2d5df0\x2d4b04\x2db928\x2d517a46df04f4.mount: Deactivated successfully.
Oct  2 08:45:47 np0005466031 nova_compute[235803]: 2025-10-02 12:45:47.556 2 DEBUG nova.compute.manager [req-6abe8ebf-c70b-497e-bc0f-2086658182b3 req-dd48a98e-0f15-4ecd-9b26-a3b45c243898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-vif-unplugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:47 np0005466031 nova_compute[235803]: 2025-10-02 12:45:47.556 2 DEBUG oslo_concurrency.lockutils [req-6abe8ebf-c70b-497e-bc0f-2086658182b3 req-dd48a98e-0f15-4ecd-9b26-a3b45c243898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:47 np0005466031 nova_compute[235803]: 2025-10-02 12:45:47.556 2 DEBUG oslo_concurrency.lockutils [req-6abe8ebf-c70b-497e-bc0f-2086658182b3 req-dd48a98e-0f15-4ecd-9b26-a3b45c243898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:47 np0005466031 nova_compute[235803]: 2025-10-02 12:45:47.557 2 DEBUG oslo_concurrency.lockutils [req-6abe8ebf-c70b-497e-bc0f-2086658182b3 req-dd48a98e-0f15-4ecd-9b26-a3b45c243898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:47 np0005466031 nova_compute[235803]: 2025-10-02 12:45:47.557 2 DEBUG nova.compute.manager [req-6abe8ebf-c70b-497e-bc0f-2086658182b3 req-dd48a98e-0f15-4ecd-9b26-a3b45c243898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] No waiting events found dispatching network-vif-unplugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:45:47 np0005466031 nova_compute[235803]: 2025-10-02 12:45:47.557 2 DEBUG nova.compute.manager [req-6abe8ebf-c70b-497e-bc0f-2086658182b3 req-dd48a98e-0f15-4ecd-9b26-a3b45c243898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-vif-unplugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:45:48 np0005466031 nova_compute[235803]: 2025-10-02 12:45:48.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:48.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:48 np0005466031 nova_compute[235803]: 2025-10-02 12:45:48.514 2 INFO nova.virt.libvirt.driver [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Deleting instance files /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d_del#033[00m
Oct  2 08:45:48 np0005466031 nova_compute[235803]: 2025-10-02 12:45:48.515 2 INFO nova.virt.libvirt.driver [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Deletion of /var/lib/nova/instances/9aff2d67-195f-4081-9a1c-ba173a39af9d_del complete#033[00m
Oct  2 08:45:48 np0005466031 nova_compute[235803]: 2025-10-02 12:45:48.594 2 INFO nova.compute.manager [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Took 1.91 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:45:48 np0005466031 nova_compute[235803]: 2025-10-02 12:45:48.595 2 DEBUG oslo.service.loopingcall [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:45:48 np0005466031 nova_compute[235803]: 2025-10-02 12:45:48.595 2 DEBUG nova.compute.manager [-] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:45:48 np0005466031 nova_compute[235803]: 2025-10-02 12:45:48.595 2 DEBUG nova.network.neutron [-] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:45:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:48.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:49 np0005466031 nova_compute[235803]: 2025-10-02 12:45:49.553 2 DEBUG nova.network.neutron [-] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:45:49 np0005466031 nova_compute[235803]: 2025-10-02 12:45:49.578 2 INFO nova.compute.manager [-] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Took 0.98 seconds to deallocate network for instance.#033[00m
Oct  2 08:45:49 np0005466031 nova_compute[235803]: 2025-10-02 12:45:49.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:49 np0005466031 nova_compute[235803]: 2025-10-02 12:45:49.633 2 DEBUG oslo_concurrency.lockutils [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:49 np0005466031 nova_compute[235803]: 2025-10-02 12:45:49.633 2 DEBUG oslo_concurrency.lockutils [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:49 np0005466031 nova_compute[235803]: 2025-10-02 12:45:49.671 2 DEBUG nova.compute.manager [req-ae64c07a-1481-4f42-b2d8-4ec747db432e req-02a1be6a-8bba-48dc-8770-67a92c5644c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-vif-deleted-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:49 np0005466031 nova_compute[235803]: 2025-10-02 12:45:49.683 2 DEBUG nova.compute.manager [req-735007b2-92d0-4d30-9119-e3d40ed1fd5e req-be98cf3f-c1b4-40e7-8ffd-396cb232e535 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:49 np0005466031 nova_compute[235803]: 2025-10-02 12:45:49.684 2 DEBUG oslo_concurrency.lockutils [req-735007b2-92d0-4d30-9119-e3d40ed1fd5e req-be98cf3f-c1b4-40e7-8ffd-396cb232e535 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:49 np0005466031 nova_compute[235803]: 2025-10-02 12:45:49.684 2 DEBUG oslo_concurrency.lockutils [req-735007b2-92d0-4d30-9119-e3d40ed1fd5e req-be98cf3f-c1b4-40e7-8ffd-396cb232e535 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:49 np0005466031 nova_compute[235803]: 2025-10-02 12:45:49.685 2 DEBUG oslo_concurrency.lockutils [req-735007b2-92d0-4d30-9119-e3d40ed1fd5e req-be98cf3f-c1b4-40e7-8ffd-396cb232e535 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:49 np0005466031 nova_compute[235803]: 2025-10-02 12:45:49.685 2 DEBUG nova.compute.manager [req-735007b2-92d0-4d30-9119-e3d40ed1fd5e req-be98cf3f-c1b4-40e7-8ffd-396cb232e535 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] No waiting events found dispatching network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:45:49 np0005466031 nova_compute[235803]: 2025-10-02 12:45:49.685 2 WARNING nova.compute.manager [req-735007b2-92d0-4d30-9119-e3d40ed1fd5e req-be98cf3f-c1b4-40e7-8ffd-396cb232e535 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Received unexpected event network-vif-plugged-fbc8e36a-6d1e-4928-ae02-cc1c07215c0c for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:45:49 np0005466031 nova_compute[235803]: 2025-10-02 12:45:49.706 2 DEBUG oslo_concurrency.processutils [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3365978155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:50 np0005466031 nova_compute[235803]: 2025-10-02 12:45:50.134 2 DEBUG oslo_concurrency.processutils [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:50 np0005466031 nova_compute[235803]: 2025-10-02 12:45:50.141 2 DEBUG nova.compute.provider_tree [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:45:50 np0005466031 nova_compute[235803]: 2025-10-02 12:45:50.160 2 DEBUG nova.scheduler.client.report [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:45:50 np0005466031 nova_compute[235803]: 2025-10-02 12:45:50.184 2 DEBUG oslo_concurrency.lockutils [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:50 np0005466031 nova_compute[235803]: 2025-10-02 12:45:50.209 2 INFO nova.scheduler.client.report [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Deleted allocations for instance 9aff2d67-195f-4081-9a1c-ba173a39af9d#033[00m
Oct  2 08:45:50 np0005466031 nova_compute[235803]: 2025-10-02 12:45:50.269 2 DEBUG oslo_concurrency.lockutils [None req-38b04f53-6866-471e-8953-72600bc3cb11 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "9aff2d67-195f-4081-9a1c-ba173a39af9d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:50.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:50.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:51 np0005466031 nova_compute[235803]: 2025-10-02 12:45:51.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:45:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2094384174' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:45:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:45:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2094384174' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:45:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:52.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:52.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:54.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:54 np0005466031 nova_compute[235803]: 2025-10-02 12:45:54.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:54 np0005466031 nova_compute[235803]: 2025-10-02 12:45:54.675 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:54 np0005466031 nova_compute[235803]: 2025-10-02 12:45:54.675 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:54 np0005466031 nova_compute[235803]: 2025-10-02 12:45:54.689 2 DEBUG nova.compute.manager [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:45:54 np0005466031 nova_compute[235803]: 2025-10-02 12:45:54.751 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:54 np0005466031 nova_compute[235803]: 2025-10-02 12:45:54.752 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:54 np0005466031 nova_compute[235803]: 2025-10-02 12:45:54.757 2 DEBUG nova.virt.hardware [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:45:54 np0005466031 nova_compute[235803]: 2025-10-02 12:45:54.758 2 INFO nova.compute.claims [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:45:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:54.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:54 np0005466031 nova_compute[235803]: 2025-10-02 12:45:54.886 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:55 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/328033079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.349 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.356 2 DEBUG nova.compute.provider_tree [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.371 2 DEBUG nova.scheduler.client.report [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.391 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.392 2 DEBUG nova.compute.manager [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.437 2 DEBUG nova.compute.manager [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.438 2 DEBUG nova.network.neutron [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.464 2 INFO nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.484 2 DEBUG nova.compute.manager [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.596 2 DEBUG nova.compute.manager [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.598 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.598 2 INFO nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Creating image(s)#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.622 2 DEBUG nova.storage.rbd_utils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.650 2 DEBUG nova.storage.rbd_utils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.677 2 DEBUG nova.storage.rbd_utils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.681 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.719 2 DEBUG nova.policy [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b5104e5372994cd19b720862cf1ca2ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.766 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.767 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.767 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.767 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.787 2 DEBUG nova.storage.rbd_utils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:55 np0005466031 nova_compute[235803]: 2025-10-02 12:45:55.790 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:56.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:56.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:56 np0005466031 nova_compute[235803]: 2025-10-02 12:45:56.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:57 np0005466031 nova_compute[235803]: 2025-10-02 12:45:57.324 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:57 np0005466031 nova_compute[235803]: 2025-10-02 12:45:57.387 2 DEBUG nova.storage.rbd_utils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] resizing rbd image 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:45:57 np0005466031 nova_compute[235803]: 2025-10-02 12:45:57.547 2 DEBUG nova.network.neutron [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Successfully created port: ca09038c-def5-41c9-a98a-c7837558526f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:45:57 np0005466031 nova_compute[235803]: 2025-10-02 12:45:57.549 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:57 np0005466031 nova_compute[235803]: 2025-10-02 12:45:57.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:45:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:58.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:45:58 np0005466031 nova_compute[235803]: 2025-10-02 12:45:58.649 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:45:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:58.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:58 np0005466031 nova_compute[235803]: 2025-10-02 12:45:58.906 2 DEBUG nova.objects.instance [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:58 np0005466031 nova_compute[235803]: 2025-10-02 12:45:58.921 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:45:58 np0005466031 nova_compute[235803]: 2025-10-02 12:45:58.921 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Ensure instance console log exists: /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:45:58 np0005466031 nova_compute[235803]: 2025-10-02 12:45:58.922 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:58 np0005466031 nova_compute[235803]: 2025-10-02 12:45:58.922 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:58 np0005466031 nova_compute[235803]: 2025-10-02 12:45:58.922 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:59 np0005466031 nova_compute[235803]: 2025-10-02 12:45:59.525 2 DEBUG nova.network.neutron [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Successfully updated port: ca09038c-def5-41c9-a98a-c7837558526f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:45:59 np0005466031 nova_compute[235803]: 2025-10-02 12:45:59.537 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:45:59 np0005466031 nova_compute[235803]: 2025-10-02 12:45:59.537 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:45:59 np0005466031 nova_compute[235803]: 2025-10-02 12:45:59.537 2 DEBUG nova.network.neutron [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:45:59 np0005466031 nova_compute[235803]: 2025-10-02 12:45:59.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:59 np0005466031 nova_compute[235803]: 2025-10-02 12:45:59.675 2 DEBUG nova.compute.manager [req-4f88afd4-109e-4029-85d0-3c31b35d4abd req-c26b7283-dc4f-4cc5-affd-dd3cecaf7a74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-changed-ca09038c-def5-41c9-a98a-c7837558526f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:59 np0005466031 nova_compute[235803]: 2025-10-02 12:45:59.676 2 DEBUG nova.compute.manager [req-4f88afd4-109e-4029-85d0-3c31b35d4abd req-c26b7283-dc4f-4cc5-affd-dd3cecaf7a74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Refreshing instance network info cache due to event network-changed-ca09038c-def5-41c9-a98a-c7837558526f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:45:59 np0005466031 nova_compute[235803]: 2025-10-02 12:45:59.676 2 DEBUG oslo_concurrency.lockutils [req-4f88afd4-109e-4029-85d0-3c31b35d4abd req-c26b7283-dc4f-4cc5-affd-dd3cecaf7a74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:00 np0005466031 nova_compute[235803]: 2025-10-02 12:46:00.363 2 DEBUG nova.network.neutron [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:46:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:00.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:00 np0005466031 nova_compute[235803]: 2025-10-02 12:46:00.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:00 np0005466031 nova_compute[235803]: 2025-10-02 12:46:00.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:00.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.416 2 DEBUG nova.network.neutron [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating instance_info_cache with network_info: [{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.435 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.436 2 DEBUG nova.compute.manager [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance network_info: |[{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.436 2 DEBUG oslo_concurrency.lockutils [req-4f88afd4-109e-4029-85d0-3c31b35d4abd req-c26b7283-dc4f-4cc5-affd-dd3cecaf7a74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.436 2 DEBUG nova.network.neutron [req-4f88afd4-109e-4029-85d0-3c31b35d4abd req-c26b7283-dc4f-4cc5-affd-dd3cecaf7a74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Refreshing network info cache for port ca09038c-def5-41c9-a98a-c7837558526f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.438 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Start _get_guest_xml network_info=[{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.443 2 WARNING nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.447 2 DEBUG nova.virt.libvirt.host [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.448 2 DEBUG nova.virt.libvirt.host [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.455 2 DEBUG nova.virt.libvirt.host [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.455 2 DEBUG nova.virt.libvirt.host [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.457 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.457 2 DEBUG nova.virt.hardware [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.457 2 DEBUG nova.virt.hardware [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.458 2 DEBUG nova.virt.hardware [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.458 2 DEBUG nova.virt.hardware [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.458 2 DEBUG nova.virt.hardware [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.458 2 DEBUG nova.virt.hardware [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.459 2 DEBUG nova.virt.hardware [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.459 2 DEBUG nova.virt.hardware [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.459 2 DEBUG nova.virt.hardware [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.459 2 DEBUG nova.virt.hardware [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.460 2 DEBUG nova.virt.hardware [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.463 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:46:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3153811059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.919 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409146.91861, 9aff2d67-195f-4081-9a1c-ba173a39af9d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.920 2 INFO nova.compute.manager [-] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.937 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.970 2 DEBUG nova.storage.rbd_utils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:01 np0005466031 nova_compute[235803]: 2025-10-02 12:46:01.976 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.020 2 DEBUG nova.compute.manager [None req-239ebf3c-55ad-4a75-a0e3-d43f0c881039 - - - - - -] [instance: 9aff2d67-195f-4081-9a1c-ba173a39af9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:46:02 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2267448461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.447 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.448 2 DEBUG nova.virt.libvirt.vif [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:45:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2011106745',display_name='tempest-ServerActionsTestOtherB-server-2011106745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2011106745',id=124,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-q33tewf6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.449 2 DEBUG nova.network.os_vif_util [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.449 2 DEBUG nova.network.os_vif_util [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.452 2 DEBUG nova.objects.instance [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.468 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  <uuid>2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4</uuid>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  <name>instance-0000007c</name>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerActionsTestOtherB-server-2011106745</nova:name>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:46:01</nova:creationTime>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <nova:user uuid="b5104e5372994cd19b720862cf1ca2ce">tempest-ServerActionsTestOtherB-858400398-project-member</nova:user>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <nova:project uuid="dbd0afdfb05849f9abfe4cd4454f6a13">tempest-ServerActionsTestOtherB-858400398</nova:project>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <nova:port uuid="ca09038c-def5-41c9-a98a-c7837558526f">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <entry name="serial">2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4</entry>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <entry name="uuid">2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4</entry>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk.config">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:21:32:15"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <target dev="tapca09038c-de"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/console.log" append="off"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:46:02 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:46:02 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:46:02 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:46:02 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.470 2 DEBUG nova.compute.manager [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Preparing to wait for external event network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.470 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.471 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.471 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.472 2 DEBUG nova.virt.libvirt.vif [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:45:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2011106745',display_name='tempest-ServerActionsTestOtherB-server-2011106745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2011106745',id=124,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-q33tewf6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.473 2 DEBUG nova.network.os_vif_util [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.474 2 DEBUG nova.network.os_vif_util [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.474 2 DEBUG os_vif [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.481 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca09038c-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.482 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca09038c-de, col_values=(('external_ids', {'iface-id': 'ca09038c-def5-41c9-a98a-c7837558526f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:32:15', 'vm-uuid': '2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:46:02 np0005466031 NetworkManager[44907]: <info>  [1759409162.4866] manager: (tapca09038c-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.495 2 INFO os_vif [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de')#033[00m
Oct  2 08:46:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:02.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.580 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.581 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.581 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No VIF found with MAC fa:16:3e:21:32:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.582 2 INFO nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Using config drive#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.604 2 DEBUG nova.storage.rbd_utils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.638 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.639 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.639 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.662 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:46:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:02.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.887 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.888 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.888 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:46:02 np0005466031 nova_compute[235803]: 2025-10-02 12:46:02.889 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:03 np0005466031 ovn_controller[132413]: 2025-10-02T12:46:03Z|00467|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.046 2 INFO nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Creating config drive at /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/disk.config#033[00m
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.052 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp19uyd1qs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.189 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp19uyd1qs" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.218 2 DEBUG nova.storage.rbd_utils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.222 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/disk.config 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.511 2 DEBUG nova.network.neutron [req-4f88afd4-109e-4029-85d0-3c31b35d4abd req-c26b7283-dc4f-4cc5-affd-dd3cecaf7a74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updated VIF entry in instance network info cache for port ca09038c-def5-41c9-a98a-c7837558526f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.512 2 DEBUG nova.network.neutron [req-4f88afd4-109e-4029-85d0-3c31b35d4abd req-c26b7283-dc4f-4cc5-affd-dd3cecaf7a74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating instance_info_cache with network_info: [{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.524 2 DEBUG oslo_concurrency.lockutils [req-4f88afd4-109e-4029-85d0-3c31b35d4abd req-c26b7283-dc4f-4cc5-affd-dd3cecaf7a74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.861 2 DEBUG oslo_concurrency.processutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/disk.config 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.862 2 INFO nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Deleting local config drive /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/disk.config because it was imported into RBD.#033[00m
Oct  2 08:46:03 np0005466031 kernel: tapca09038c-de: entered promiscuous mode
Oct  2 08:46:03 np0005466031 NetworkManager[44907]: <info>  [1759409163.9063] manager: (tapca09038c-de): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Oct  2 08:46:03 np0005466031 ovn_controller[132413]: 2025-10-02T12:46:03Z|00468|binding|INFO|Claiming lport ca09038c-def5-41c9-a98a-c7837558526f for this chassis.
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:03 np0005466031 ovn_controller[132413]: 2025-10-02T12:46:03Z|00469|binding|INFO|ca09038c-def5-41c9-a98a-c7837558526f: Claiming fa:16:3e:21:32:15 10.100.0.13
Oct  2 08:46:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:03.951 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:32:15 10.100.0.13'], port_security=['fa:16:3e:21:32:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=ca09038c-def5-41c9-a98a-c7837558526f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:46:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:03.952 141898 INFO neutron.agent.ovn.metadata.agent [-] Port ca09038c-def5-41c9-a98a-c7837558526f in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 bound to our chassis#033[00m
Oct  2 08:46:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:03.953 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4#033[00m
Oct  2 08:46:03 np0005466031 ovn_controller[132413]: 2025-10-02T12:46:03Z|00470|binding|INFO|Setting lport ca09038c-def5-41c9-a98a-c7837558526f up in Southbound
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:03.965 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb76609-874c-4163-97c6-5ab0301f6a02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:03.966 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9266ebd7-31 in ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:46:03 np0005466031 ovn_controller[132413]: 2025-10-02T12:46:03Z|00471|binding|INFO|Setting lport ca09038c-def5-41c9-a98a-c7837558526f ovn-installed in OVS
Oct  2 08:46:03 np0005466031 nova_compute[235803]: 2025-10-02 12:46:03.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:03.969 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9266ebd7-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:46:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:03.969 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[12d4509c-d6d1-4b9c-88e8-46923370315f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:03.970 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b33847-0415-4ae4-ace9-47f1d34a5829]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:03 np0005466031 systemd-udevd[289132]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:46:03 np0005466031 NetworkManager[44907]: <info>  [1759409163.9880] device (tapca09038c-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:46:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:03.987 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[7d37121a-7a9d-441e-9a91-7afe04a92b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:03 np0005466031 systemd-machined[192227]: New machine qemu-53-instance-0000007c.
Oct  2 08:46:03 np0005466031 NetworkManager[44907]: <info>  [1759409163.9901] device (tapca09038c-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:46:04 np0005466031 systemd[1]: Started Virtual Machine qemu-53-instance-0000007c.
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.001 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2f2ef5-66ed-4dfa-b00e-8279d7656eb4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.032 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0d5bb9-bdaa-4033-9f47-0e752c7cfa51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.037 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7edc18c8-57ce-41c4-a850-159a44318b6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466031 NetworkManager[44907]: <info>  [1759409164.0388] manager: (tap9266ebd7-30): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.069 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb64647-3d94-428c-aed7-edeb2b94bbc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.072 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d28f6957-f510-4085-8cad-06d36d646b16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466031 NetworkManager[44907]: <info>  [1759409164.0965] device (tap9266ebd7-30): carrier: link connected
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.101 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f7db7076-ea5a-4bcf-8a58-0085a7b571df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.121 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[acfa9d52-5fd2-488a-b4e0-5ad113775311]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701968, 'reachable_time': 16099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289165, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.143 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[dae27f12-a3f4-4d1e-bdb5-69f70c7190d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:6593'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701968, 'tstamp': 701968}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289166, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.160 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ec66874f-ba51-4ff7-bce7-702df1b9835a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701968, 'reachable_time': 16099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289167, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.190 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ea024579-5bc5-4d1b-90f7-08e875c43e8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.248 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[073a784d-b3c5-4ffd-be86-a5fb5e023caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.250 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.251 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.251 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9266ebd7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:04 np0005466031 kernel: tap9266ebd7-30: entered promiscuous mode
Oct  2 08:46:04 np0005466031 NetworkManager[44907]: <info>  [1759409164.2545] manager: (tap9266ebd7-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.259 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9266ebd7-30, col_values=(('external_ids', {'iface-id': '9fee59c9-e25a-4600-b33b-de655b7e8c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:04 np0005466031 ovn_controller[132413]: 2025-10-02T12:46:04Z|00472|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.263 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.266 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[557f0e96-c0ba-43ef-8677-d745db080d03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.267 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:46:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:04.267 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'env', 'PROCESS_TAG=haproxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.395 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Updating instance_info_cache with network_info: [{"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.421 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-17266fac-3772-4df3-b4d7-c47d8292f6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.421 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.422 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.451 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.451 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.452 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.452 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.452 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:04.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.518 2 DEBUG nova.compute.manager [req-cf697a2c-5003-4498-b5d5-0344d07f4f62 req-45e85ca4-9ca8-4a9f-8482-e618cda8db34 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.519 2 DEBUG oslo_concurrency.lockutils [req-cf697a2c-5003-4498-b5d5-0344d07f4f62 req-45e85ca4-9ca8-4a9f-8482-e618cda8db34 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.520 2 DEBUG oslo_concurrency.lockutils [req-cf697a2c-5003-4498-b5d5-0344d07f4f62 req-45e85ca4-9ca8-4a9f-8482-e618cda8db34 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.520 2 DEBUG oslo_concurrency.lockutils [req-cf697a2c-5003-4498-b5d5-0344d07f4f62 req-45e85ca4-9ca8-4a9f-8482-e618cda8db34 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.520 2 DEBUG nova.compute.manager [req-cf697a2c-5003-4498-b5d5-0344d07f4f62 req-45e85ca4-9ca8-4a9f-8482-e618cda8db34 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Processing event network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466031 podman[289243]: 2025-10-02 12:46:04.672212731 +0000 UTC m=+0.083519657 container create 97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:46:04 np0005466031 podman[289243]: 2025-10-02 12:46:04.615394914 +0000 UTC m=+0.026701860 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:46:04 np0005466031 systemd[1]: Started libpod-conmon-97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62.scope.
Oct  2 08:46:04 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:46:04 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8285ef4d4a4c90555f83bec9ecd25c0dc17f1bcb9685e07e079368d14f519e56/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:46:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:04.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:04 np0005466031 podman[289243]: 2025-10-02 12:46:04.869650258 +0000 UTC m=+0.280957204 container init 97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 08:46:04 np0005466031 podman[289243]: 2025-10-02 12:46:04.87594811 +0000 UTC m=+0.287255056 container start 97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:46:04 np0005466031 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[289273]: [NOTICE]   (289277) : New worker (289279) forked
Oct  2 08:46:04 np0005466031 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[289273]: [NOTICE]   (289277) : Loading success.
Oct  2 08:46:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:46:04 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1696941197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:46:04 np0005466031 nova_compute[235803]: 2025-10-02 12:46:04.997 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.065 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.065 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.068 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.068 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.090 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409165.0895905, 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.090 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] VM Started (Lifecycle Event)#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.092 2 DEBUG nova.compute.manager [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.095 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.097 2 INFO nova.virt.libvirt.driver [-] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance spawned successfully.#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.097 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.110 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.116 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.121 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.121 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.122 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.122 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.123 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.123 2 DEBUG nova.virt.libvirt.driver [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.135 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.136 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409165.0898066, 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.136 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.165 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.168 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409165.094776, 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.168 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.181 2 INFO nova.compute.manager [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Took 9.58 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.181 2 DEBUG nova.compute.manager [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.192 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.196 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.218 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.242 2 INFO nova.compute.manager [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Took 10.51 seconds to build instance.#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.257 2 DEBUG oslo_concurrency.lockutils [None req-4686c02d-88d2-4e81-9b2b-ff47ce3153c3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.322 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.323 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4065MB free_disk=20.743602752685547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.324 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.324 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.556 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 17266fac-3772-4df3-b4d7-c47d8292f6d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.557 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.557 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.558 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:46:05 np0005466031 nova_compute[235803]: 2025-10-02 12:46:05.907 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:46:06 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/221783826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:46:06 np0005466031 nova_compute[235803]: 2025-10-02 12:46:06.399 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:06 np0005466031 nova_compute[235803]: 2025-10-02 12:46:06.406 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:46:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:06.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:06 np0005466031 nova_compute[235803]: 2025-10-02 12:46:06.516 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:46:06 np0005466031 nova_compute[235803]: 2025-10-02 12:46:06.751 2 DEBUG nova.compute.manager [req-6a61a39a-127d-4f84-af4e-e60713e8c851 req-b20f22fb-6d3c-492c-b996-3ac4f5104eaa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:06 np0005466031 nova_compute[235803]: 2025-10-02 12:46:06.752 2 DEBUG oslo_concurrency.lockutils [req-6a61a39a-127d-4f84-af4e-e60713e8c851 req-b20f22fb-6d3c-492c-b996-3ac4f5104eaa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:06 np0005466031 nova_compute[235803]: 2025-10-02 12:46:06.752 2 DEBUG oslo_concurrency.lockutils [req-6a61a39a-127d-4f84-af4e-e60713e8c851 req-b20f22fb-6d3c-492c-b996-3ac4f5104eaa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:06 np0005466031 nova_compute[235803]: 2025-10-02 12:46:06.752 2 DEBUG oslo_concurrency.lockutils [req-6a61a39a-127d-4f84-af4e-e60713e8c851 req-b20f22fb-6d3c-492c-b996-3ac4f5104eaa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:06 np0005466031 nova_compute[235803]: 2025-10-02 12:46:06.753 2 DEBUG nova.compute.manager [req-6a61a39a-127d-4f84-af4e-e60713e8c851 req-b20f22fb-6d3c-492c-b996-3ac4f5104eaa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] No waiting events found dispatching network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:46:06 np0005466031 nova_compute[235803]: 2025-10-02 12:46:06.753 2 WARNING nova.compute.manager [req-6a61a39a-127d-4f84-af4e-e60713e8c851 req-b20f22fb-6d3c-492c-b996-3ac4f5104eaa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received unexpected event network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f for instance with vm_state active and task_state None.#033[00m
Oct  2 08:46:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:46:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:06.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:46:06 np0005466031 nova_compute[235803]: 2025-10-02 12:46:06.859 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:46:06 np0005466031 nova_compute[235803]: 2025-10-02 12:46:06.860 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:07 np0005466031 nova_compute[235803]: 2025-10-02 12:46:07.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:07 np0005466031 podman[289445]: 2025-10-02 12:46:07.617201266 +0000 UTC m=+0.046675776 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:46:07 np0005466031 podman[289446]: 2025-10-02 12:46:07.658563528 +0000 UTC m=+0.086654428 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:46:08 np0005466031 nova_compute[235803]: 2025-10-02 12:46:08.075 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:08 np0005466031 nova_compute[235803]: 2025-10-02 12:46:08.075 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:08 np0005466031 nova_compute[235803]: 2025-10-02 12:46:08.076 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:46:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:08.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 10K writes, 51K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1682 writes, 8320 keys, 1682 commit groups, 1.0 writes per commit group, ingest: 16.55 MB, 0.03 MB/s#012Interval WAL: 1682 writes, 1682 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     94.1      0.65              0.17        30    0.022       0      0       0.0       0.0#012  L6      1/0    8.83 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4    136.1    114.1      2.38              0.80        29    0.082    172K    16K       0.0       0.0#012 Sum      1/0    8.83 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4    106.9    109.8      3.03              0.97        59    0.051    172K    16K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7    124.6    124.6      0.58              0.22        12    0.049     45K   3138       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    136.1    114.1      2.38              0.80        29    0.082    172K    16K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     94.4      0.65              0.17        29    0.022       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.060, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.33 GB write, 0.09 MB/s write, 0.32 GB read, 0.09 MB/s read, 3.0 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5570d4fad1f0#2 capacity: 304.00 MB usage: 35.93 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000265 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2096,34.62 MB,11.3893%) FilterBlock(59,493.48 KB,0.158526%) IndexBlock(59,849.36 KB,0.272846%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 08:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:46:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:08.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:08 np0005466031 nova_compute[235803]: 2025-10-02 12:46:08.926 2 DEBUG nova.compute.manager [req-e342f1a9-6f7b-4720-9da1-df97b46beaa2 req-dab590a0-edda-4a1c-a959-48af8a3047db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-changed-ca09038c-def5-41c9-a98a-c7837558526f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:08 np0005466031 nova_compute[235803]: 2025-10-02 12:46:08.926 2 DEBUG nova.compute.manager [req-e342f1a9-6f7b-4720-9da1-df97b46beaa2 req-dab590a0-edda-4a1c-a959-48af8a3047db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Refreshing instance network info cache due to event network-changed-ca09038c-def5-41c9-a98a-c7837558526f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:46:08 np0005466031 nova_compute[235803]: 2025-10-02 12:46:08.927 2 DEBUG oslo_concurrency.lockutils [req-e342f1a9-6f7b-4720-9da1-df97b46beaa2 req-dab590a0-edda-4a1c-a959-48af8a3047db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:08 np0005466031 nova_compute[235803]: 2025-10-02 12:46:08.927 2 DEBUG oslo_concurrency.lockutils [req-e342f1a9-6f7b-4720-9da1-df97b46beaa2 req-dab590a0-edda-4a1c-a959-48af8a3047db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:08 np0005466031 nova_compute[235803]: 2025-10-02 12:46:08.927 2 DEBUG nova.network.neutron [req-e342f1a9-6f7b-4720-9da1-df97b46beaa2 req-dab590a0-edda-4a1c-a959-48af8a3047db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Refreshing network info cache for port ca09038c-def5-41c9-a98a-c7837558526f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:46:09 np0005466031 nova_compute[235803]: 2025-10-02 12:46:09.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:09 np0005466031 nova_compute[235803]: 2025-10-02 12:46:09.930 2 DEBUG nova.network.neutron [req-e342f1a9-6f7b-4720-9da1-df97b46beaa2 req-dab590a0-edda-4a1c-a959-48af8a3047db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updated VIF entry in instance network info cache for port ca09038c-def5-41c9-a98a-c7837558526f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:46:09 np0005466031 nova_compute[235803]: 2025-10-02 12:46:09.930 2 DEBUG nova.network.neutron [req-e342f1a9-6f7b-4720-9da1-df97b46beaa2 req-dab590a0-edda-4a1c-a959-48af8a3047db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating instance_info_cache with network_info: [{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:09 np0005466031 nova_compute[235803]: 2025-10-02 12:46:09.955 2 DEBUG oslo_concurrency.lockutils [req-e342f1a9-6f7b-4720-9da1-df97b46beaa2 req-dab590a0-edda-4a1c-a959-48af8a3047db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:10.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:10.650 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:46:10 np0005466031 nova_compute[235803]: 2025-10-02 12:46:10.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:10.652 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:46:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:10.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:12 np0005466031 nova_compute[235803]: 2025-10-02 12:46:12.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:12.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:12.654 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:12.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:46:14 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2788448323' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:46:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:46:14 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2788448323' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:46:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:14.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:14 np0005466031 podman[289495]: 2025-10-02 12:46:14.626953712 +0000 UTC m=+0.056599301 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:46:14 np0005466031 nova_compute[235803]: 2025-10-02 12:46:14.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:14 np0005466031 podman[289494]: 2025-10-02 12:46:14.655481424 +0000 UTC m=+0.085559926 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:46:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:46:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:14.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:46:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:16.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:16.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:16 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct  2 08:46:17 np0005466031 nova_compute[235803]: 2025-10-02 12:46:17.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:17 np0005466031 nova_compute[235803]: 2025-10-02 12:46:17.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:17 np0005466031 nova_compute[235803]: 2025-10-02 12:46:17.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:46:17 np0005466031 nova_compute[235803]: 2025-10-02 12:46:17.660 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:46:17 np0005466031 ovn_controller[132413]: 2025-10-02T12:46:17Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:21:32:15 10.100.0.13
Oct  2 08:46:17 np0005466031 ovn_controller[132413]: 2025-10-02T12:46:17Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:21:32:15 10.100.0.13
Oct  2 08:46:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:18.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:18.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:19 np0005466031 nova_compute[235803]: 2025-10-02 12:46:19.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:20.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:20.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:22 np0005466031 nova_compute[235803]: 2025-10-02 12:46:22.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:22.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:22 np0005466031 nova_compute[235803]: 2025-10-02 12:46:22.657 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:22.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:24.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:24 np0005466031 nova_compute[235803]: 2025-10-02 12:46:24.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:24.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:25 np0005466031 nova_compute[235803]: 2025-10-02 12:46:25.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:25 np0005466031 nova_compute[235803]: 2025-10-02 12:46:25.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:46:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:25.854 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:25.855 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:25.855 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:26.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:26.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:27 np0005466031 nova_compute[235803]: 2025-10-02 12:46:27.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:27 np0005466031 nova_compute[235803]: 2025-10-02 12:46:27.920 2 DEBUG nova.compute.manager [None req-62f75c94-e081-40ff-8997-295259c5e55f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:27 np0005466031 nova_compute[235803]: 2025-10-02 12:46:27.951 2 INFO nova.compute.manager [None req-62f75c94-e081-40ff-8997-295259c5e55f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] instance snapshotting#033[00m
Oct  2 08:46:27 np0005466031 nova_compute[235803]: 2025-10-02 12:46:27.952 2 DEBUG nova.objects.instance [None req-62f75c94-e081-40ff-8997-295259c5e55f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'flavor' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:28 np0005466031 nova_compute[235803]: 2025-10-02 12:46:28.282 2 INFO nova.virt.libvirt.driver [None req-62f75c94-e081-40ff-8997-295259c5e55f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Beginning live snapshot process#033[00m
Oct  2 08:46:28 np0005466031 nova_compute[235803]: 2025-10-02 12:46:28.436 2 DEBUG nova.virt.libvirt.imagebackend [None req-62f75c94-e081-40ff-8997-295259c5e55f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:46:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:28.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:28 np0005466031 nova_compute[235803]: 2025-10-02 12:46:28.670 2 DEBUG nova.storage.rbd_utils [None req-62f75c94-e081-40ff-8997-295259c5e55f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] creating snapshot(4e31e66e31e2471191f8b31a81f4b776) on rbd image(2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:46:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:28.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e293 e293: 3 total, 3 up, 3 in
Oct  2 08:46:29 np0005466031 nova_compute[235803]: 2025-10-02 12:46:29.449 2 DEBUG nova.storage.rbd_utils [None req-62f75c94-e081-40ff-8997-295259c5e55f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] cloning vms/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk@4e31e66e31e2471191f8b31a81f4b776 to images/732edfd9-86ec-4c72-a1b2-37aec374791c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:46:29 np0005466031 nova_compute[235803]: 2025-10-02 12:46:29.579 2 DEBUG nova.storage.rbd_utils [None req-62f75c94-e081-40ff-8997-295259c5e55f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] flattening images/732edfd9-86ec-4c72-a1b2-37aec374791c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:46:29 np0005466031 nova_compute[235803]: 2025-10-02 12:46:29.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:30 np0005466031 nova_compute[235803]: 2025-10-02 12:46:30.033 2 DEBUG nova.storage.rbd_utils [None req-62f75c94-e081-40ff-8997-295259c5e55f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] removing snapshot(4e31e66e31e2471191f8b31a81f4b776) on rbd image(2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:46:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e294 e294: 3 total, 3 up, 3 in
Oct  2 08:46:30 np0005466031 nova_compute[235803]: 2025-10-02 12:46:30.468 2 DEBUG nova.storage.rbd_utils [None req-62f75c94-e081-40ff-8997-295259c5e55f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] creating snapshot(snap) on rbd image(732edfd9-86ec-4c72-a1b2-37aec374791c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:46:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:30.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:30.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:31 np0005466031 nova_compute[235803]: 2025-10-02 12:46:31.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e295 e295: 3 total, 3 up, 3 in
Oct  2 08:46:32 np0005466031 nova_compute[235803]: 2025-10-02 12:46:32.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:32.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:32.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:33 np0005466031 nova_compute[235803]: 2025-10-02 12:46:33.304 2 INFO nova.virt.libvirt.driver [None req-62f75c94-e081-40ff-8997-295259c5e55f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Snapshot image upload complete#033[00m
Oct  2 08:46:33 np0005466031 nova_compute[235803]: 2025-10-02 12:46:33.305 2 INFO nova.compute.manager [None req-62f75c94-e081-40ff-8997-295259c5e55f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Took 5.34 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:46:33 np0005466031 nova_compute[235803]: 2025-10-02 12:46:33.609 2 DEBUG nova.compute.manager [None req-62f75c94-e081-40ff-8997-295259c5e55f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Oct  2 08:46:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:34.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:34 np0005466031 nova_compute[235803]: 2025-10-02 12:46:34.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:34.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:35 np0005466031 nova_compute[235803]: 2025-10-02 12:46:35.666 2 DEBUG nova.compute.manager [None req-7cf5dabf-bf4e-4076-b12f-57c25bedf7cb b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:35 np0005466031 nova_compute[235803]: 2025-10-02 12:46:35.725 2 INFO nova.compute.manager [None req-7cf5dabf-bf4e-4076-b12f-57c25bedf7cb b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] instance snapshotting#033[00m
Oct  2 08:46:35 np0005466031 nova_compute[235803]: 2025-10-02 12:46:35.727 2 DEBUG nova.objects.instance [None req-7cf5dabf-bf4e-4076-b12f-57c25bedf7cb b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'flavor' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:36 np0005466031 nova_compute[235803]: 2025-10-02 12:46:36.138 2 INFO nova.virt.libvirt.driver [None req-7cf5dabf-bf4e-4076-b12f-57c25bedf7cb b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Beginning live snapshot process#033[00m
Oct  2 08:46:36 np0005466031 nova_compute[235803]: 2025-10-02 12:46:36.322 2 DEBUG nova.virt.libvirt.imagebackend [None req-7cf5dabf-bf4e-4076-b12f-57c25bedf7cb b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:46:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:36.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:36 np0005466031 nova_compute[235803]: 2025-10-02 12:46:36.718 2 DEBUG nova.storage.rbd_utils [None req-7cf5dabf-bf4e-4076-b12f-57c25bedf7cb b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] creating snapshot(112fd2b456394aa1b1d9148cdc269da0) on rbd image(2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:46:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:36.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e296 e296: 3 total, 3 up, 3 in
Oct  2 08:46:36 np0005466031 nova_compute[235803]: 2025-10-02 12:46:36.948 2 DEBUG nova.storage.rbd_utils [None req-7cf5dabf-bf4e-4076-b12f-57c25bedf7cb b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] cloning vms/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk@112fd2b456394aa1b1d9148cdc269da0 to images/ab4ee30b-1b9c-4a28-9b76-c933be04775b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:46:37 np0005466031 nova_compute[235803]: 2025-10-02 12:46:37.068 2 DEBUG nova.storage.rbd_utils [None req-7cf5dabf-bf4e-4076-b12f-57c25bedf7cb b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] flattening images/ab4ee30b-1b9c-4a28-9b76-c933be04775b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:46:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e297 e297: 3 total, 3 up, 3 in
Oct  2 08:46:37 np0005466031 nova_compute[235803]: 2025-10-02 12:46:37.467 2 DEBUG nova.storage.rbd_utils [None req-7cf5dabf-bf4e-4076-b12f-57c25bedf7cb b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] removing snapshot(112fd2b456394aa1b1d9148cdc269da0) on rbd image(2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:46:37 np0005466031 nova_compute[235803]: 2025-10-02 12:46:37.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e298 e298: 3 total, 3 up, 3 in
Oct  2 08:46:38 np0005466031 nova_compute[235803]: 2025-10-02 12:46:38.297 2 DEBUG nova.storage.rbd_utils [None req-7cf5dabf-bf4e-4076-b12f-57c25bedf7cb b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] creating snapshot(snap) on rbd image(ab4ee30b-1b9c-4a28-9b76-c933be04775b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:46:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:38.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:38 np0005466031 podman[289930]: 2025-10-02 12:46:38.632388165 +0000 UTC m=+0.055392547 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:46:38 np0005466031 podman[289931]: 2025-10-02 12:46:38.681920132 +0000 UTC m=+0.101606118 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:46:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:38.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e299 e299: 3 total, 3 up, 3 in
Oct  2 08:46:39 np0005466031 nova_compute[235803]: 2025-10-02 12:46:39.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:40.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:40.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:41 np0005466031 nova_compute[235803]: 2025-10-02 12:46:41.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 44K writes, 177K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.05 MB/s#012Cumulative WAL: 44K writes, 16K syncs, 2.75 writes per sync, written: 0.18 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 43.22 MB, 0.07 MB/s#012Interval WAL: 10K writes, 3912 syncs, 2.60 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:46:42 np0005466031 nova_compute[235803]: 2025-10-02 12:46:42.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:42.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:42.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:43 np0005466031 nova_compute[235803]: 2025-10-02 12:46:43.408 2 INFO nova.virt.libvirt.driver [None req-7cf5dabf-bf4e-4076-b12f-57c25bedf7cb b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Snapshot image upload complete#033[00m
Oct  2 08:46:43 np0005466031 nova_compute[235803]: 2025-10-02 12:46:43.409 2 INFO nova.compute.manager [None req-7cf5dabf-bf4e-4076-b12f-57c25bedf7cb b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Took 7.63 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:46:43 np0005466031 ceph-mgr[76697]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 08:46:43 np0005466031 nova_compute[235803]: 2025-10-02 12:46:43.949 2 DEBUG nova.compute.manager [None req-7cf5dabf-bf4e-4076-b12f-57c25bedf7cb b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Oct  2 08:46:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:44.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:44 np0005466031 nova_compute[235803]: 2025-10-02 12:46:44.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:44.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:45 np0005466031 podman[290027]: 2025-10-02 12:46:45.624600724 +0000 UTC m=+0.052827943 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2)
Oct  2 08:46:45 np0005466031 podman[290026]: 2025-10-02 12:46:45.62482145 +0000 UTC m=+0.055451738 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Oct  2 08:46:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:46 np0005466031 nova_compute[235803]: 2025-10-02 12:46:46.282 2 DEBUG nova.compute.manager [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:46 np0005466031 nova_compute[235803]: 2025-10-02 12:46:46.338 2 INFO nova.compute.manager [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] instance snapshotting#033[00m
Oct  2 08:46:46 np0005466031 nova_compute[235803]: 2025-10-02 12:46:46.340 2 DEBUG nova.objects.instance [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'flavor' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:46.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:46.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:46 np0005466031 nova_compute[235803]: 2025-10-02 12:46:46.926 2 INFO nova.virt.libvirt.driver [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Beginning live snapshot process#033[00m
Oct  2 08:46:47 np0005466031 nova_compute[235803]: 2025-10-02 12:46:47.181 2 DEBUG nova.virt.libvirt.imagebackend [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:46:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e300 e300: 3 total, 3 up, 3 in
Oct  2 08:46:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:46:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1020510815' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:46:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:46:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1020510815' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:46:47 np0005466031 nova_compute[235803]: 2025-10-02 12:46:47.498 2 DEBUG nova.storage.rbd_utils [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] creating snapshot(eb78a6ab351a4497a6305023684e4541) on rbd image(2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:46:47 np0005466031 nova_compute[235803]: 2025-10-02 12:46:47.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:48 np0005466031 nova_compute[235803]: 2025-10-02 12:46:48.108 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:48 np0005466031 nova_compute[235803]: 2025-10-02 12:46:48.175 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Triggering sync for uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:46:48 np0005466031 nova_compute[235803]: 2025-10-02 12:46:48.176 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Triggering sync for uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:46:48 np0005466031 nova_compute[235803]: 2025-10-02 12:46:48.177 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:48 np0005466031 nova_compute[235803]: 2025-10-02 12:46:48.178 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:48 np0005466031 nova_compute[235803]: 2025-10-02 12:46:48.178 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:48 np0005466031 nova_compute[235803]: 2025-10-02 12:46:48.179 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:48 np0005466031 nova_compute[235803]: 2025-10-02 12:46:48.179 2 INFO nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] During sync_power_state the instance has a pending task (image_uploading). Skip.#033[00m
Oct  2 08:46:48 np0005466031 nova_compute[235803]: 2025-10-02 12:46:48.179 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:48 np0005466031 nova_compute[235803]: 2025-10-02 12:46:48.203 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e301 e301: 3 total, 3 up, 3 in
Oct  2 08:46:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:48.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:48.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:48 np0005466031 nova_compute[235803]: 2025-10-02 12:46:48.935 2 DEBUG nova.storage.rbd_utils [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] cloning vms/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk@eb78a6ab351a4497a6305023684e4541 to images/849a855e-6143-42a6-8855-a1bf327357a7 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:46:49 np0005466031 nova_compute[235803]: 2025-10-02 12:46:49.142 2 DEBUG nova.storage.rbd_utils [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] flattening images/849a855e-6143-42a6-8855-a1bf327357a7 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:46:49 np0005466031 nova_compute[235803]: 2025-10-02 12:46:49.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:49 np0005466031 nova_compute[235803]: 2025-10-02 12:46:49.652 2 DEBUG nova.storage.rbd_utils [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] removing snapshot(eb78a6ab351a4497a6305023684e4541) on rbd image(2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:46:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:46:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:50.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:46:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e302 e302: 3 total, 3 up, 3 in
Oct  2 08:46:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:50.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:51 np0005466031 nova_compute[235803]: 2025-10-02 12:46:51.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:51 np0005466031 nova_compute[235803]: 2025-10-02 12:46:51.225 2 DEBUG nova.storage.rbd_utils [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] creating snapshot(snap) on rbd image(849a855e-6143-42a6-8855-a1bf327357a7) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:46:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e303 e303: 3 total, 3 up, 3 in
Oct  2 08:46:52 np0005466031 nova_compute[235803]: 2025-10-02 12:46:52.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:52.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:52.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:54.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:54 np0005466031 nova_compute[235803]: 2025-10-02 12:46:54.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:54 np0005466031 nova_compute[235803]: 2025-10-02 12:46:54.658 2 INFO nova.virt.libvirt.driver [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Snapshot image upload complete#033[00m
Oct  2 08:46:54 np0005466031 nova_compute[235803]: 2025-10-02 12:46:54.659 2 INFO nova.compute.manager [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Took 8.30 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:46:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:54.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:55 np0005466031 nova_compute[235803]: 2025-10-02 12:46:55.271 2 DEBUG nova.compute.manager [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Oct  2 08:46:55 np0005466031 nova_compute[235803]: 2025-10-02 12:46:55.272 2 DEBUG nova.compute.manager [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458#033[00m
Oct  2 08:46:55 np0005466031 nova_compute[235803]: 2025-10-02 12:46:55.272 2 DEBUG nova.compute.manager [None req-4cfc4c8c-d16a-4e6e-bee0-93515838fe8f b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Deleting image 732edfd9-86ec-4c72-a1b2-37aec374791c _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463#033[00m
Oct  2 08:46:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:56.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:56 np0005466031 nova_compute[235803]: 2025-10-02 12:46:56.702 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e304 e304: 3 total, 3 up, 3 in
Oct  2 08:46:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:56.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:57 np0005466031 nova_compute[235803]: 2025-10-02 12:46:57.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:57 np0005466031 nova_compute[235803]: 2025-10-02 12:46:57.842 2 DEBUG oslo_concurrency.lockutils [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:57 np0005466031 nova_compute[235803]: 2025-10-02 12:46:57.842 2 DEBUG oslo_concurrency.lockutils [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:57 np0005466031 nova_compute[235803]: 2025-10-02 12:46:57.842 2 DEBUG oslo_concurrency.lockutils [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:57 np0005466031 nova_compute[235803]: 2025-10-02 12:46:57.843 2 DEBUG oslo_concurrency.lockutils [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:57 np0005466031 nova_compute[235803]: 2025-10-02 12:46:57.843 2 DEBUG oslo_concurrency.lockutils [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:57 np0005466031 nova_compute[235803]: 2025-10-02 12:46:57.844 2 INFO nova.compute.manager [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Terminating instance#033[00m
Oct  2 08:46:57 np0005466031 nova_compute[235803]: 2025-10-02 12:46:57.845 2 DEBUG nova.compute.manager [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:46:57 np0005466031 kernel: tap5e2a83a5-11 (unregistering): left promiscuous mode
Oct  2 08:46:57 np0005466031 NetworkManager[44907]: <info>  [1759409217.9352] device (tap5e2a83a5-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:46:57 np0005466031 nova_compute[235803]: 2025-10-02 12:46:57.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:57 np0005466031 ovn_controller[132413]: 2025-10-02T12:46:57Z|00473|binding|INFO|Releasing lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe from this chassis (sb_readonly=0)
Oct  2 08:46:57 np0005466031 ovn_controller[132413]: 2025-10-02T12:46:57Z|00474|binding|INFO|Setting lport 5e2a83a5-11e1-45b1-82ce-5fee577f67fe down in Southbound
Oct  2 08:46:57 np0005466031 ovn_controller[132413]: 2025-10-02T12:46:57Z|00475|binding|INFO|Removing iface tap5e2a83a5-11 ovn-installed in OVS
Oct  2 08:46:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:57.950 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:09:61 10.100.0.6'], port_security=['fa:16:3e:34:09:61 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '17266fac-3772-4df3-b4d7-c47d8292f6d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7035a43e-de6a-4b86-a3b2-d2e40c9755d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=5e2a83a5-11e1-45b1-82ce-5fee577f67fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:46:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:57.952 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 5e2a83a5-11e1-45b1-82ce-5fee577f67fe in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis#033[00m
Oct  2 08:46:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:57.955 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3934261-ba19-494f-8d9f-23360c5b30b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:46:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:57.957 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2134c371-f3c7-4fef-8f2f-1483efa7e0c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:57 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:57.959 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace which is not needed anymore#033[00m
Oct  2 08:46:57 np0005466031 nova_compute[235803]: 2025-10-02 12:46:57.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:57 np0005466031 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000075.scope: Deactivated successfully.
Oct  2 08:46:57 np0005466031 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000075.scope: Consumed 17.588s CPU time.
Oct  2 08:46:57 np0005466031 systemd-machined[192227]: Machine qemu-51-instance-00000075 terminated.
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.084 2 INFO nova.virt.libvirt.driver [-] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Instance destroyed successfully.#033[00m
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.084 2 DEBUG nova.objects.instance [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'resources' on Instance uuid 17266fac-3772-4df3-b4d7-c47d8292f6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.108 2 DEBUG nova.virt.libvirt.vif [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-377097009',display_name='tempest-ServerRescueNegativeTestJSON-server-377097009',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-377097009',id=117,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:45:01Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-msakr1ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:45:09Z,user_data=None,user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=17266fac-3772-4df3-b4d7-c47d8292f6d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.108 2 DEBUG nova.network.os_vif_util [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "address": "fa:16:3e:34:09:61", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e2a83a5-11", "ovs_interfaceid": "5e2a83a5-11e1-45b1-82ce-5fee577f67fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.109 2 DEBUG nova.network.os_vif_util [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:09:61,bridge_name='br-int',has_traffic_filtering=True,id=5e2a83a5-11e1-45b1-82ce-5fee577f67fe,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e2a83a5-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.109 2 DEBUG os_vif [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:09:61,bridge_name='br-int',has_traffic_filtering=True,id=5e2a83a5-11e1-45b1-82ce-5fee577f67fe,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e2a83a5-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.111 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2a83a5-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.117 2 INFO os_vif [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:09:61,bridge_name='br-int',has_traffic_filtering=True,id=5e2a83a5-11e1-45b1-82ce-5fee577f67fe,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e2a83a5-11')#033[00m
Oct  2 08:46:58 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287852]: [NOTICE]   (287859) : haproxy version is 2.8.14-c23fe91
Oct  2 08:46:58 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287852]: [NOTICE]   (287859) : path to executable is /usr/sbin/haproxy
Oct  2 08:46:58 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287852]: [WARNING]  (287859) : Exiting Master process...
Oct  2 08:46:58 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287852]: [ALERT]    (287859) : Current worker (287861) exited with code 143 (Terminated)
Oct  2 08:46:58 np0005466031 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[287852]: [WARNING]  (287859) : All workers exited. Exiting... (0)
Oct  2 08:46:58 np0005466031 systemd[1]: libpod-82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b.scope: Deactivated successfully.
Oct  2 08:46:58 np0005466031 podman[290236]: 2025-10-02 12:46:58.130563336 +0000 UTC m=+0.049638331 container died 82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:46:58 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b-userdata-shm.mount: Deactivated successfully.
Oct  2 08:46:58 np0005466031 systemd[1]: var-lib-containers-storage-overlay-c86684a76647ad8405887794029db9bc3ddbaad68195de573c487271235bab0c-merged.mount: Deactivated successfully.
Oct  2 08:46:58 np0005466031 podman[290236]: 2025-10-02 12:46:58.180321789 +0000 UTC m=+0.099396784 container cleanup 82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:46:58 np0005466031 systemd[1]: libpod-conmon-82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b.scope: Deactivated successfully.
Oct  2 08:46:58 np0005466031 podman[290290]: 2025-10-02 12:46:58.240441711 +0000 UTC m=+0.039266382 container remove 82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 08:46:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:58.246 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[eef00aa1-6945-4e5b-a0e3-79f8913ae960]: (4, ('Thu Oct  2 12:46:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b)\n82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b\nThu Oct  2 12:46:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b)\n82e7ff63ea93d94f5620fa051e2d5cbd46a6c42cdfb262795a44bade5059ed8b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:58.248 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[87dcdcf0-d4d8-43aa-98ab-a21a08a8bb2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:58.249 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005466031 kernel: tapf3934261-b0: left promiscuous mode
Oct  2 08:46:58 np0005466031 nova_compute[235803]: 2025-10-02 12:46:58.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:58.268 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[26d90afc-c9c4-4636-a7a5-4fdf7d8ad217]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:58.309 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f33e3641-5bd4-4270-a46c-8373167a157e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:58.310 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[25e37d18-3e57-4fdf-ade4-0bee6d35aab2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:58.326 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5d2e30cf-a64f-47b5-9148-4886e020bdb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696320, 'reachable_time': 25569, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290305, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005466031 systemd[1]: run-netns-ovnmeta\x2df3934261\x2dba19\x2d494f\x2d8d9f\x2d23360c5b30b9.mount: Deactivated successfully.
Oct  2 08:46:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:58.332 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:46:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:46:58.333 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[42a25c1b-cad3-40b2-bd8e-ee38f6ce525d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:58.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:46:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:46:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:58.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:46:59 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct  2 08:46:59 np0005466031 nova_compute[235803]: 2025-10-02 12:46:59.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:59 np0005466031 nova_compute[235803]: 2025-10-02 12:46:59.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:59 np0005466031 nova_compute[235803]: 2025-10-02 12:46:59.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:00 np0005466031 nova_compute[235803]: 2025-10-02 12:47:00.035 2 INFO nova.virt.libvirt.driver [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Deleting instance files /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6_del#033[00m
Oct  2 08:47:00 np0005466031 nova_compute[235803]: 2025-10-02 12:47:00.036 2 INFO nova.virt.libvirt.driver [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Deletion of /var/lib/nova/instances/17266fac-3772-4df3-b4d7-c47d8292f6d6_del complete#033[00m
Oct  2 08:47:00 np0005466031 nova_compute[235803]: 2025-10-02 12:47:00.101 2 INFO nova.compute.manager [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Took 2.26 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:47:00 np0005466031 nova_compute[235803]: 2025-10-02 12:47:00.102 2 DEBUG oslo.service.loopingcall [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:47:00 np0005466031 nova_compute[235803]: 2025-10-02 12:47:00.102 2 DEBUG nova.compute.manager [-] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:47:00 np0005466031 nova_compute[235803]: 2025-10-02 12:47:00.102 2 DEBUG nova.network.neutron [-] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:47:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:00.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:00.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:01 np0005466031 nova_compute[235803]: 2025-10-02 12:47:01.464 2 DEBUG nova.network.neutron [-] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:01 np0005466031 nova_compute[235803]: 2025-10-02 12:47:01.492 2 INFO nova.compute.manager [-] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Took 1.39 seconds to deallocate network for instance.#033[00m
Oct  2 08:47:01 np0005466031 nova_compute[235803]: 2025-10-02 12:47:01.539 2 DEBUG oslo_concurrency.lockutils [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:01 np0005466031 nova_compute[235803]: 2025-10-02 12:47:01.540 2 DEBUG oslo_concurrency.lockutils [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:01 np0005466031 nova_compute[235803]: 2025-10-02 12:47:01.590 2 DEBUG nova.compute.manager [req-3daccdb8-58be-453e-bc5c-7e9b62089b03 req-958b3e7f-a367-477a-9eb1-58d33f335061 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Received event network-vif-deleted-5e2a83a5-11e1-45b1-82ce-5fee577f67fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:01 np0005466031 nova_compute[235803]: 2025-10-02 12:47:01.634 2 DEBUG oslo_concurrency.processutils [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:01 np0005466031 nova_compute[235803]: 2025-10-02 12:47:01.667 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:01 np0005466031 nova_compute[235803]: 2025-10-02 12:47:01.669 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:02 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1797470337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:02 np0005466031 nova_compute[235803]: 2025-10-02 12:47:02.098 2 DEBUG oslo_concurrency.processutils [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:02 np0005466031 nova_compute[235803]: 2025-10-02 12:47:02.104 2 DEBUG nova.compute.provider_tree [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:47:02 np0005466031 nova_compute[235803]: 2025-10-02 12:47:02.122 2 DEBUG nova.scheduler.client.report [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:47:02 np0005466031 nova_compute[235803]: 2025-10-02 12:47:02.161 2 DEBUG oslo_concurrency.lockutils [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:02 np0005466031 nova_compute[235803]: 2025-10-02 12:47:02.220 2 INFO nova.scheduler.client.report [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Deleted allocations for instance 17266fac-3772-4df3-b4d7-c47d8292f6d6#033[00m
Oct  2 08:47:02 np0005466031 nova_compute[235803]: 2025-10-02 12:47:02.312 2 DEBUG oslo_concurrency.lockutils [None req-867da128-0365-4dd0-a770-be9dd5ff45e6 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "17266fac-3772-4df3-b4d7-c47d8292f6d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e305 e305: 3 total, 3 up, 3 in
Oct  2 08:47:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:02.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:02 np0005466031 nova_compute[235803]: 2025-10-02 12:47:02.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:02 np0005466031 nova_compute[235803]: 2025-10-02 12:47:02.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:47:02 np0005466031 nova_compute[235803]: 2025-10-02 12:47:02.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:47:02 np0005466031 nova_compute[235803]: 2025-10-02 12:47:02.856 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:02 np0005466031 nova_compute[235803]: 2025-10-02 12:47:02.856 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:02 np0005466031 nova_compute[235803]: 2025-10-02 12:47:02.856 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:47:02 np0005466031 nova_compute[235803]: 2025-10-02 12:47:02.856 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:02.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:03 np0005466031 nova_compute[235803]: 2025-10-02 12:47:03.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e306 e306: 3 total, 3 up, 3 in
Oct  2 08:47:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:04.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:04 np0005466031 nova_compute[235803]: 2025-10-02 12:47:04.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:04.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e307 e307: 3 total, 3 up, 3 in
Oct  2 08:47:05 np0005466031 nova_compute[235803]: 2025-10-02 12:47:05.772 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating instance_info_cache with network_info: [{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:05 np0005466031 nova_compute[235803]: 2025-10-02 12:47:05.960 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:05 np0005466031 nova_compute[235803]: 2025-10-02 12:47:05.961 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:47:05 np0005466031 nova_compute[235803]: 2025-10-02 12:47:05.961 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:05 np0005466031 nova_compute[235803]: 2025-10-02 12:47:05.961 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:05 np0005466031 nova_compute[235803]: 2025-10-02 12:47:05.962 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:47:05 np0005466031 nova_compute[235803]: 2025-10-02 12:47:05.962 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.040 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.042 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.042 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.042 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.043 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:06 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2285883654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.519 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.596 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.597 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:47:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:06.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.770 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.771 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4155MB free_disk=20.909137725830078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.771 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.772 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:06.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.989 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.990 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:47:06 np0005466031 nova_compute[235803]: 2025-10-02 12:47:06.990 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:47:07 np0005466031 nova_compute[235803]: 2025-10-02 12:47:07.043 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e308 e308: 3 total, 3 up, 3 in
Oct  2 08:47:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2976661906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:07 np0005466031 nova_compute[235803]: 2025-10-02 12:47:07.485 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:07 np0005466031 nova_compute[235803]: 2025-10-02 12:47:07.491 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:47:07 np0005466031 nova_compute[235803]: 2025-10-02 12:47:07.515 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:47:07 np0005466031 nova_compute[235803]: 2025-10-02 12:47:07.551 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:47:07 np0005466031 nova_compute[235803]: 2025-10-02 12:47:07.551 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:08 np0005466031 nova_compute[235803]: 2025-10-02 12:47:08.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:08 np0005466031 ovn_controller[132413]: 2025-10-02T12:47:08Z|00476|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:47:08 np0005466031 nova_compute[235803]: 2025-10-02 12:47:08.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:08.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:08.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:09 np0005466031 podman[290430]: 2025-10-02 12:47:09.636467619 +0000 UTC m=+0.060998188 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:47:09 np0005466031 nova_compute[235803]: 2025-10-02 12:47:09.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:09 np0005466031 podman[290431]: 2025-10-02 12:47:09.673941178 +0000 UTC m=+0.098615532 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:47:10 np0005466031 nova_compute[235803]: 2025-10-02 12:47:10.227 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:10.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:10.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e309 e309: 3 total, 3 up, 3 in
Oct  2 08:47:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:12.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:47:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:12.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:47:13 np0005466031 nova_compute[235803]: 2025-10-02 12:47:13.082 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409218.0809512, 17266fac-3772-4df3-b4d7-c47d8292f6d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:13 np0005466031 nova_compute[235803]: 2025-10-02 12:47:13.083 2 INFO nova.compute.manager [-] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:47:13 np0005466031 nova_compute[235803]: 2025-10-02 12:47:13.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:13 np0005466031 nova_compute[235803]: 2025-10-02 12:47:13.314 2 DEBUG nova.compute.manager [None req-61952c98-e38c-4a71-acdb-70a1c30231eb - - - - - -] [instance: 17266fac-3772-4df3-b4d7-c47d8292f6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:14.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:14 np0005466031 nova_compute[235803]: 2025-10-02 12:47:14.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:14.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:15 np0005466031 radosgw[82465]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  2 08:47:15 np0005466031 podman[290504]: 2025-10-02 12:47:15.91943241 +0000 UTC m=+0.067190367 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 08:47:15 np0005466031 radosgw[82465]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct  2 08:47:15 np0005466031 podman[290505]: 2025-10-02 12:47:15.943195564 +0000 UTC m=+0.085938816 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:47:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:16 np0005466031 radosgw[82465]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct  2 08:47:16 np0005466031 radosgw[82465]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct  2 08:47:16 np0005466031 radosgw[82465]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct  2 08:47:16 np0005466031 radosgw[82465]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct  2 08:47:16 np0005466031 radosgw[82465]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct  2 08:47:16 np0005466031 radosgw[82465]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Oct  2 08:47:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 08:47:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:16.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 08:47:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:16.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.126084) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237126116, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2554, "num_deletes": 262, "total_data_size": 5869521, "memory_usage": 5938880, "flush_reason": "Manual Compaction"}
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237153287, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 3845464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50010, "largest_seqno": 52558, "table_properties": {"data_size": 3834771, "index_size": 6931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22643, "raw_average_key_size": 21, "raw_value_size": 3813334, "raw_average_value_size": 3573, "num_data_blocks": 297, "num_entries": 1067, "num_filter_entries": 1067, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409047, "oldest_key_time": 1759409047, "file_creation_time": 1759409237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 27267 microseconds, and 7866 cpu microseconds.
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.153344) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 3845464 bytes OK
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.153367) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.156134) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.156188) EVENT_LOG_v1 {"time_micros": 1759409237156179, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.156211) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 5858101, prev total WAL file size 5879160, number of live WAL files 2.
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.157937) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(3755KB)], [99(9043KB)]
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237157973, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 13105858, "oldest_snapshot_seqno": -1}
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 7670 keys, 11159261 bytes, temperature: kUnknown
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237265112, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 11159261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11107986, "index_size": 31003, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 198358, "raw_average_key_size": 25, "raw_value_size": 10971171, "raw_average_value_size": 1430, "num_data_blocks": 1219, "num_entries": 7670, "num_filter_entries": 7670, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759409237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.265357) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11159261 bytes
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.271256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.3 rd, 104.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.8 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 8206, records dropped: 536 output_compression: NoCompression
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.271292) EVENT_LOG_v1 {"time_micros": 1759409237271278, "job": 62, "event": "compaction_finished", "compaction_time_micros": 107202, "compaction_time_cpu_micros": 25763, "output_level": 6, "num_output_files": 1, "total_output_size": 11159261, "num_input_records": 8206, "num_output_records": 7670, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237272313, "job": 62, "event": "table_file_deletion", "file_number": 101}
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237273864, "job": 62, "event": "table_file_deletion", "file_number": 99}
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.157720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.273985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.273991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.273993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.273994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:47:17.273996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:17 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:18 np0005466031 nova_compute[235803]: 2025-10-02 12:47:18.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:18.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:18.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:19 np0005466031 nova_compute[235803]: 2025-10-02 12:47:19.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:20.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:20.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:47:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:47:21 np0005466031 nova_compute[235803]: 2025-10-02 12:47:21.749 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "184f3992-03ad-4908-aeb5-b14e562fa846" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:21 np0005466031 nova_compute[235803]: 2025-10-02 12:47:21.751 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:21 np0005466031 nova_compute[235803]: 2025-10-02 12:47:21.797 2 DEBUG nova.compute.manager [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:47:21 np0005466031 nova_compute[235803]: 2025-10-02 12:47:21.952 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:21 np0005466031 nova_compute[235803]: 2025-10-02 12:47:21.953 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:21 np0005466031 nova_compute[235803]: 2025-10-02 12:47:21.965 2 DEBUG nova.virt.hardware [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:47:21 np0005466031 nova_compute[235803]: 2025-10-02 12:47:21.966 2 INFO nova.compute.claims [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.040 2 DEBUG oslo_concurrency.lockutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "2793fe28-695e-4652-b12a-bee14e192d06" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.041 2 DEBUG oslo_concurrency.lockutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "2793fe28-695e-4652-b12a-bee14e192d06" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.091 2 DEBUG nova.compute.manager [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.203 2 DEBUG oslo_concurrency.lockutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.279 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:22.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3250697552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.716 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.722 2 DEBUG nova.compute.provider_tree [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.750 2 DEBUG nova.scheduler.client.report [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.798 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.799 2 DEBUG nova.compute.manager [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.802 2 DEBUG oslo_concurrency.lockutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.818 2 DEBUG nova.virt.hardware [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.819 2 INFO nova.compute.claims [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.910 2 DEBUG nova.compute.manager [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.910 2 DEBUG nova.network.neutron [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:47:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:22.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.953 2 INFO nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:47:22 np0005466031 nova_compute[235803]: 2025-10-02 12:47:22.988 2 DEBUG nova.compute.manager [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:47:23 np0005466031 nova_compute[235803]: 2025-10-02 12:47:23.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:24.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.676 2 DEBUG nova.compute.manager [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.677 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.678 2 INFO nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Creating image(s)#033[00m
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.707 2 DEBUG nova.storage.rbd_utils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 184f3992-03ad-4908-aeb5-b14e562fa846_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.730 2 DEBUG nova.storage.rbd_utils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 184f3992-03ad-4908-aeb5-b14e562fa846_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.755 2 DEBUG nova.storage.rbd_utils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 184f3992-03ad-4908-aeb5-b14e562fa846_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.759 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.823 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.823 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.824 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.824 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.846 2 DEBUG nova.storage.rbd_utils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 184f3992-03ad-4908-aeb5-b14e562fa846_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:24 np0005466031 nova_compute[235803]: 2025-10-02 12:47:24.849 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 184f3992-03ad-4908-aeb5-b14e562fa846_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:24.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.132 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 184f3992-03ad-4908-aeb5-b14e562fa846_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.195 2 DEBUG nova.storage.rbd_utils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] resizing rbd image 184f3992-03ad-4908-aeb5-b14e562fa846_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.226 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.312 2 DEBUG nova.objects.instance [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'migration_context' on Instance uuid 184f3992-03ad-4908-aeb5-b14e562fa846 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.356 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.357 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Ensure instance console log exists: /var/lib/nova/instances/184f3992-03ad-4908-aeb5-b14e562fa846/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.358 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.359 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.359 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.646 2 DEBUG nova.policy [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b5104e5372994cd19b720862cf1ca2ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:47:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:25 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2809155184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.688 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.692 2 DEBUG nova.compute.provider_tree [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.741 2 DEBUG nova.scheduler.client.report [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.788 2 DEBUG oslo_concurrency.lockutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.788 2 DEBUG nova.compute.manager [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:47:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:25.856 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:25.856 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:25.857 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.872 2 DEBUG nova.compute.manager [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.908 2 INFO nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:47:25 np0005466031 nova_compute[235803]: 2025-10-02 12:47:25.930 2 DEBUG nova.compute.manager [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:47:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.110 2 DEBUG nova.compute.manager [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.111 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.111 2 INFO nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Creating image(s)#033[00m
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.137 2 DEBUG nova.storage.rbd_utils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image 2793fe28-695e-4652-b12a-bee14e192d06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.162 2 DEBUG nova.storage.rbd_utils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image 2793fe28-695e-4652-b12a-bee14e192d06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.186 2 DEBUG nova.storage.rbd_utils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image 2793fe28-695e-4652-b12a-bee14e192d06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.189 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.252 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.253 2 DEBUG oslo_concurrency.lockutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.254 2 DEBUG oslo_concurrency.lockutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.254 2 DEBUG oslo_concurrency.lockutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.279 2 DEBUG nova.storage.rbd_utils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image 2793fe28-695e-4652-b12a-bee14e192d06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.282 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2793fe28-695e-4652-b12a-bee14e192d06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:26.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:26.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:26 np0005466031 nova_compute[235803]: 2025-10-02 12:47:26.929 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2793fe28-695e-4652-b12a-bee14e192d06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.005 2 DEBUG nova.storage.rbd_utils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] resizing rbd image 2793fe28-695e-4652-b12a-bee14e192d06_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.607 2 DEBUG nova.objects.instance [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'migration_context' on Instance uuid 2793fe28-695e-4652-b12a-bee14e192d06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.644 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.644 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Ensure instance console log exists: /var/lib/nova/instances/2793fe28-695e-4652-b12a-bee14e192d06/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.645 2 DEBUG oslo_concurrency.lockutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.645 2 DEBUG oslo_concurrency.lockutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.645 2 DEBUG oslo_concurrency.lockutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.646 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.650 2 WARNING nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.655 2 DEBUG nova.virt.libvirt.host [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.655 2 DEBUG nova.virt.libvirt.host [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.664 2 DEBUG nova.virt.libvirt.host [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.665 2 DEBUG nova.virt.libvirt.host [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.666 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.666 2 DEBUG nova.virt.hardware [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.666 2 DEBUG nova.virt.hardware [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.667 2 DEBUG nova.virt.hardware [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.667 2 DEBUG nova.virt.hardware [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.667 2 DEBUG nova.virt.hardware [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.667 2 DEBUG nova.virt.hardware [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.668 2 DEBUG nova.virt.hardware [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.668 2 DEBUG nova.virt.hardware [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.668 2 DEBUG nova.virt.hardware [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.668 2 DEBUG nova.virt.hardware [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.668 2 DEBUG nova.virt.hardware [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.671 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:27.867 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:47:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:27.867 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:47:27 np0005466031 nova_compute[235803]: 2025-10-02 12:47:27.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:28 np0005466031 nova_compute[235803]: 2025-10-02 12:47:28.035 2 DEBUG nova.network.neutron [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Successfully created port: dc6c1baa-6ec8-4649-bfbc-c6720e954f7b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:47:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:47:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1989105194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:47:28 np0005466031 nova_compute[235803]: 2025-10-02 12:47:28.109 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:28 np0005466031 nova_compute[235803]: 2025-10-02 12:47:28.132 2 DEBUG nova.storage.rbd_utils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image 2793fe28-695e-4652-b12a-bee14e192d06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:28 np0005466031 nova_compute[235803]: 2025-10-02 12:47:28.135 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:28 np0005466031 nova_compute[235803]: 2025-10-02 12:47:28.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:47:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2696433292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:47:28 np0005466031 nova_compute[235803]: 2025-10-02 12:47:28.582 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:28 np0005466031 nova_compute[235803]: 2025-10-02 12:47:28.583 2 DEBUG nova.objects.instance [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2793fe28-695e-4652-b12a-bee14e192d06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:28 np0005466031 nova_compute[235803]: 2025-10-02 12:47:28.622 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  <uuid>2793fe28-695e-4652-b12a-bee14e192d06</uuid>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  <name>instance-00000080</name>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerShowV247Test-server-1155819512</nova:name>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:47:27</nova:creationTime>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <nova:user uuid="537a49488e284c9ab1330c64e8072747">tempest-ServerShowV247Test-568202848-project-member</nova:user>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <nova:project uuid="9768ac969bcb49a08f0cf2563ecd3980">tempest-ServerShowV247Test-568202848</nova:project>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <nova:ports/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <entry name="serial">2793fe28-695e-4652-b12a-bee14e192d06</entry>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <entry name="uuid">2793fe28-695e-4652-b12a-bee14e192d06</entry>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2793fe28-695e-4652-b12a-bee14e192d06_disk">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/2793fe28-695e-4652-b12a-bee14e192d06_disk.config">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/2793fe28-695e-4652-b12a-bee14e192d06/console.log" append="off"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:47:28 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:47:28 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:47:28 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:47:28 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:47:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:28.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:28.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:28 np0005466031 nova_compute[235803]: 2025-10-02 12:47:28.931 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:47:28 np0005466031 nova_compute[235803]: 2025-10-02 12:47:28.931 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:47:28 np0005466031 nova_compute[235803]: 2025-10-02 12:47:28.932 2 INFO nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Using config drive#033[00m
Oct  2 08:47:28 np0005466031 nova_compute[235803]: 2025-10-02 12:47:28.959 2 DEBUG nova.storage.rbd_utils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image 2793fe28-695e-4652-b12a-bee14e192d06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:29 np0005466031 nova_compute[235803]: 2025-10-02 12:47:29.435 2 INFO nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Creating config drive at /var/lib/nova/instances/2793fe28-695e-4652-b12a-bee14e192d06/disk.config#033[00m
Oct  2 08:47:29 np0005466031 nova_compute[235803]: 2025-10-02 12:47:29.439 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2793fe28-695e-4652-b12a-bee14e192d06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8x_gnwct execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:29 np0005466031 nova_compute[235803]: 2025-10-02 12:47:29.578 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2793fe28-695e-4652-b12a-bee14e192d06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8x_gnwct" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:29 np0005466031 nova_compute[235803]: 2025-10-02 12:47:29.615 2 DEBUG nova.storage.rbd_utils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] rbd image 2793fe28-695e-4652-b12a-bee14e192d06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:29 np0005466031 nova_compute[235803]: 2025-10-02 12:47:29.620 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2793fe28-695e-4652-b12a-bee14e192d06/disk.config 2793fe28-695e-4652-b12a-bee14e192d06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:29 np0005466031 nova_compute[235803]: 2025-10-02 12:47:29.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:29 np0005466031 nova_compute[235803]: 2025-10-02 12:47:29.958 2 DEBUG oslo_concurrency.processutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2793fe28-695e-4652-b12a-bee14e192d06/disk.config 2793fe28-695e-4652-b12a-bee14e192d06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:29 np0005466031 nova_compute[235803]: 2025-10-02 12:47:29.959 2 INFO nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Deleting local config drive /var/lib/nova/instances/2793fe28-695e-4652-b12a-bee14e192d06/disk.config because it was imported into RBD.#033[00m
Oct  2 08:47:30 np0005466031 systemd-machined[192227]: New machine qemu-54-instance-00000080.
Oct  2 08:47:30 np0005466031 nova_compute[235803]: 2025-10-02 12:47:30.071 2 DEBUG nova.network.neutron [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Successfully updated port: dc6c1baa-6ec8-4649-bfbc-c6720e954f7b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:47:30 np0005466031 systemd[1]: Started Virtual Machine qemu-54-instance-00000080.
Oct  2 08:47:30 np0005466031 nova_compute[235803]: 2025-10-02 12:47:30.104 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:30 np0005466031 nova_compute[235803]: 2025-10-02 12:47:30.104 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:30 np0005466031 nova_compute[235803]: 2025-10-02 12:47:30.105 2 DEBUG nova.network.neutron [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:47:30 np0005466031 nova_compute[235803]: 2025-10-02 12:47:30.439 2 DEBUG nova.network.neutron [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:47:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:30.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:30.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:30 np0005466031 nova_compute[235803]: 2025-10-02 12:47:30.959 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409250.9593022, 2793fe28-695e-4652-b12a-bee14e192d06 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:30 np0005466031 nova_compute[235803]: 2025-10-02 12:47:30.961 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:47:30 np0005466031 nova_compute[235803]: 2025-10-02 12:47:30.963 2 DEBUG nova.compute.manager [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:47:30 np0005466031 nova_compute[235803]: 2025-10-02 12:47:30.963 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:47:30 np0005466031 nova_compute[235803]: 2025-10-02 12:47:30.967 2 INFO nova.virt.libvirt.driver [-] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Instance spawned successfully.#033[00m
Oct  2 08:47:30 np0005466031 nova_compute[235803]: 2025-10-02 12:47:30.967 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:47:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.354 2 DEBUG nova.compute.manager [req-10e75872-3066-4291-930d-cb4f79a26512 req-841aaf63-9658-42a6-a6dc-2b4e3fe63f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Received event network-changed-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.355 2 DEBUG nova.compute.manager [req-10e75872-3066-4291-930d-cb4f79a26512 req-841aaf63-9658-42a6-a6dc-2b4e3fe63f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Refreshing instance network info cache due to event network-changed-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.355 2 DEBUG oslo_concurrency.lockutils [req-10e75872-3066-4291-930d-cb4f79a26512 req-841aaf63-9658-42a6-a6dc-2b4e3fe63f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.386 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.390 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.391 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.391 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.391 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.392 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.392 2 DEBUG nova.virt.libvirt.driver [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.397 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.794 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.794 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409250.9600313, 2793fe28-695e-4652-b12a-bee14e192d06 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.794 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] VM Started (Lifecycle Event)#033[00m
Oct  2 08:47:31 np0005466031 nova_compute[235803]: 2025-10-02 12:47:31.842 2 DEBUG nova.network.neutron [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Updating instance_info_cache with network_info: [{"id": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "address": "fa:16:3e:1e:bf:36", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc6c1baa-6e", "ovs_interfaceid": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.423 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.427 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.450 2 INFO nova.compute.manager [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Took 6.34 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.450 2 DEBUG nova.compute.manager [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.624 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.625 2 DEBUG nova.compute.manager [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Instance network_info: |[{"id": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "address": "fa:16:3e:1e:bf:36", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc6c1baa-6e", "ovs_interfaceid": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.625 2 DEBUG oslo_concurrency.lockutils [req-10e75872-3066-4291-930d-cb4f79a26512 req-841aaf63-9658-42a6-a6dc-2b4e3fe63f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.626 2 DEBUG nova.network.neutron [req-10e75872-3066-4291-930d-cb4f79a26512 req-841aaf63-9658-42a6-a6dc-2b4e3fe63f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Refreshing network info cache for port dc6c1baa-6ec8-4649-bfbc-c6720e954f7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.628 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Start _get_guest_xml network_info=[{"id": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "address": "fa:16:3e:1e:bf:36", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc6c1baa-6e", "ovs_interfaceid": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.631 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.634 2 WARNING nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.640 2 DEBUG nova.virt.libvirt.host [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.640 2 DEBUG nova.virt.libvirt.host [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.644 2 DEBUG nova.virt.libvirt.host [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.645 2 DEBUG nova.virt.libvirt.host [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.647 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.647 2 DEBUG nova.virt.hardware [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.649 2 DEBUG nova.virt.hardware [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:47:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:32.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.649 2 DEBUG nova.virt.hardware [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.650 2 DEBUG nova.virt.hardware [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.651 2 DEBUG nova.virt.hardware [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.651 2 DEBUG nova.virt.hardware [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.652 2 DEBUG nova.virt.hardware [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.652 2 DEBUG nova.virt.hardware [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.652 2 DEBUG nova.virt.hardware [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.653 2 DEBUG nova.virt.hardware [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.653 2 DEBUG nova.virt.hardware [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.658 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.740 2 INFO nova.compute.manager [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Took 10.58 seconds to build instance.#033[00m
Oct  2 08:47:32 np0005466031 nova_compute[235803]: 2025-10-02 12:47:32.866 2 DEBUG oslo_concurrency.lockutils [None req-78fd24f9-06ca-4f63-9f2a-a94d929d6278 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "2793fe28-695e-4652-b12a-bee14e192d06" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:32.870 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:32.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:47:33 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2906456310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.099 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.129 2 DEBUG nova.storage.rbd_utils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 184f3992-03ad-4908-aeb5-b14e562fa846_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.133 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:47:33 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1188429841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.604 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.605 2 DEBUG nova.virt.libvirt.vif [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1510354407',display_name='tempest-ServerActionsTestOtherB-server-1510354407',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1510354407',id=127,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-qwaoankd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTes
tOtherB-858400398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:23Z,user_data=None,user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=184f3992-03ad-4908-aeb5-b14e562fa846,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "address": "fa:16:3e:1e:bf:36", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc6c1baa-6e", "ovs_interfaceid": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.606 2 DEBUG nova.network.os_vif_util [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "address": "fa:16:3e:1e:bf:36", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc6c1baa-6e", "ovs_interfaceid": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.607 2 DEBUG nova.network.os_vif_util [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:bf:36,bridge_name='br-int',has_traffic_filtering=True,id=dc6c1baa-6ec8-4649-bfbc-c6720e954f7b,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc6c1baa-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.608 2 DEBUG nova.objects.instance [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_devices' on Instance uuid 184f3992-03ad-4908-aeb5-b14e562fa846 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.640 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  <uuid>184f3992-03ad-4908-aeb5-b14e562fa846</uuid>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  <name>instance-0000007f</name>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerActionsTestOtherB-server-1510354407</nova:name>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:47:32</nova:creationTime>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <nova:user uuid="b5104e5372994cd19b720862cf1ca2ce">tempest-ServerActionsTestOtherB-858400398-project-member</nova:user>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <nova:project uuid="dbd0afdfb05849f9abfe4cd4454f6a13">tempest-ServerActionsTestOtherB-858400398</nova:project>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <nova:port uuid="dc6c1baa-6ec8-4649-bfbc-c6720e954f7b">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <entry name="serial">184f3992-03ad-4908-aeb5-b14e562fa846</entry>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <entry name="uuid">184f3992-03ad-4908-aeb5-b14e562fa846</entry>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/184f3992-03ad-4908-aeb5-b14e562fa846_disk">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/184f3992-03ad-4908-aeb5-b14e562fa846_disk.config">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:1e:bf:36"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <target dev="tapdc6c1baa-6e"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/184f3992-03ad-4908-aeb5-b14e562fa846/console.log" append="off"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:47:33 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:47:33 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:47:33 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:47:33 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.646 2 DEBUG nova.compute.manager [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Preparing to wait for external event network-vif-plugged-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.646 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.647 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.647 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.647 2 DEBUG nova.virt.libvirt.vif [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1510354407',display_name='tempest-ServerActionsTestOtherB-server-1510354407',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1510354407',id=127,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-qwaoankd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-Server
ActionsTestOtherB-858400398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:23Z,user_data=None,user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=184f3992-03ad-4908-aeb5-b14e562fa846,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "address": "fa:16:3e:1e:bf:36", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc6c1baa-6e", "ovs_interfaceid": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.648 2 DEBUG nova.network.os_vif_util [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "address": "fa:16:3e:1e:bf:36", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc6c1baa-6e", "ovs_interfaceid": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.649 2 DEBUG nova.network.os_vif_util [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:bf:36,bridge_name='br-int',has_traffic_filtering=True,id=dc6c1baa-6ec8-4649-bfbc-c6720e954f7b,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc6c1baa-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.649 2 DEBUG os_vif [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:bf:36,bridge_name='br-int',has_traffic_filtering=True,id=dc6c1baa-6ec8-4649-bfbc-c6720e954f7b,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc6c1baa-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.653 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc6c1baa-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.654 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc6c1baa-6e, col_values=(('external_ids', {'iface-id': 'dc6c1baa-6ec8-4649-bfbc-c6720e954f7b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:bf:36', 'vm-uuid': '184f3992-03ad-4908-aeb5-b14e562fa846'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:33 np0005466031 NetworkManager[44907]: <info>  [1759409253.6560] manager: (tapdc6c1baa-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.662 2 INFO os_vif [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:bf:36,bridge_name='br-int',has_traffic_filtering=True,id=dc6c1baa-6ec8-4649-bfbc-c6720e954f7b,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc6c1baa-6e')#033[00m
Oct  2 08:47:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e310 e310: 3 total, 3 up, 3 in
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.742 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.743 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.743 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No VIF found with MAC fa:16:3e:1e:bf:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.744 2 INFO nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Using config drive#033[00m
Oct  2 08:47:33 np0005466031 nova_compute[235803]: 2025-10-02 12:47:33.808 2 DEBUG nova.storage.rbd_utils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 184f3992-03ad-4908-aeb5-b14e562fa846_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:34 np0005466031 nova_compute[235803]: 2025-10-02 12:47:34.503 2 INFO nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Creating config drive at /var/lib/nova/instances/184f3992-03ad-4908-aeb5-b14e562fa846/disk.config#033[00m
Oct  2 08:47:34 np0005466031 nova_compute[235803]: 2025-10-02 12:47:34.511 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/184f3992-03ad-4908-aeb5-b14e562fa846/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46hy28v8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:34 np0005466031 nova_compute[235803]: 2025-10-02 12:47:34.648 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/184f3992-03ad-4908-aeb5-b14e562fa846/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46hy28v8" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:34.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:34 np0005466031 nova_compute[235803]: 2025-10-02 12:47:34.685 2 DEBUG nova.storage.rbd_utils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image 184f3992-03ad-4908-aeb5-b14e562fa846_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:34 np0005466031 nova_compute[235803]: 2025-10-02 12:47:34.689 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/184f3992-03ad-4908-aeb5-b14e562fa846/disk.config 184f3992-03ad-4908-aeb5-b14e562fa846_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:34 np0005466031 nova_compute[235803]: 2025-10-02 12:47:34.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:34 np0005466031 nova_compute[235803]: 2025-10-02 12:47:34.778 2 DEBUG nova.network.neutron [req-10e75872-3066-4291-930d-cb4f79a26512 req-841aaf63-9658-42a6-a6dc-2b4e3fe63f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Updated VIF entry in instance network info cache for port dc6c1baa-6ec8-4649-bfbc-c6720e954f7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:47:34 np0005466031 nova_compute[235803]: 2025-10-02 12:47:34.779 2 DEBUG nova.network.neutron [req-10e75872-3066-4291-930d-cb4f79a26512 req-841aaf63-9658-42a6-a6dc-2b4e3fe63f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Updating instance_info_cache with network_info: [{"id": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "address": "fa:16:3e:1e:bf:36", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc6c1baa-6e", "ovs_interfaceid": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:34 np0005466031 nova_compute[235803]: 2025-10-02 12:47:34.858 2 DEBUG oslo_concurrency.processutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/184f3992-03ad-4908-aeb5-b14e562fa846/disk.config 184f3992-03ad-4908-aeb5-b14e562fa846_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:34 np0005466031 nova_compute[235803]: 2025-10-02 12:47:34.859 2 INFO nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Deleting local config drive /var/lib/nova/instances/184f3992-03ad-4908-aeb5-b14e562fa846/disk.config because it was imported into RBD.#033[00m
Oct  2 08:47:34 np0005466031 kernel: tapdc6c1baa-6e: entered promiscuous mode
Oct  2 08:47:34 np0005466031 ovn_controller[132413]: 2025-10-02T12:47:34Z|00477|binding|INFO|Claiming lport dc6c1baa-6ec8-4649-bfbc-c6720e954f7b for this chassis.
Oct  2 08:47:34 np0005466031 ovn_controller[132413]: 2025-10-02T12:47:34Z|00478|binding|INFO|dc6c1baa-6ec8-4649-bfbc-c6720e954f7b: Claiming fa:16:3e:1e:bf:36 10.100.0.5
Oct  2 08:47:34 np0005466031 NetworkManager[44907]: <info>  [1759409254.9067] manager: (tapdc6c1baa-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Oct  2 08:47:34 np0005466031 nova_compute[235803]: 2025-10-02 12:47:34.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:34 np0005466031 ovn_controller[132413]: 2025-10-02T12:47:34Z|00479|binding|INFO|Setting lport dc6c1baa-6ec8-4649-bfbc-c6720e954f7b ovn-installed in OVS
Oct  2 08:47:34 np0005466031 nova_compute[235803]: 2025-10-02 12:47:34.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:34 np0005466031 nova_compute[235803]: 2025-10-02 12:47:34.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:34.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:34 np0005466031 systemd-udevd[291568]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:47:34 np0005466031 systemd-machined[192227]: New machine qemu-55-instance-0000007f.
Oct  2 08:47:34 np0005466031 NetworkManager[44907]: <info>  [1759409254.9661] device (tapdc6c1baa-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:47:34 np0005466031 NetworkManager[44907]: <info>  [1759409254.9672] device (tapdc6c1baa-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:47:34 np0005466031 systemd[1]: Started Virtual Machine qemu-55-instance-0000007f.
Oct  2 08:47:35 np0005466031 ovn_controller[132413]: 2025-10-02T12:47:35Z|00480|binding|INFO|Setting lport dc6c1baa-6ec8-4649-bfbc-c6720e954f7b up in Southbound
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.378 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:bf:36 10.100.0.5'], port_security=['fa:16:3e:1e:bf:36 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '184f3992-03ad-4908-aeb5-b14e562fa846', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7c38185a-c389-4d04-8fc6-53a62e6c5352', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=dc6c1baa-6ec8-4649-bfbc-c6720e954f7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.380 141898 INFO neutron.agent.ovn.metadata.agent [-] Port dc6c1baa-6ec8-4649-bfbc-c6720e954f7b in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 bound to our chassis#033[00m
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.382 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4#033[00m
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.400 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ec31db80-a783-4b35-9ba3-c6e9e4a0e225]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.432 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a23fa103-a0b5-4d4e-8a70-804ff6cb6359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.434 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[62c08061-efa6-41f3-a120-9858544bc55d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.460 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[bc46cc84-11f8-4d5d-bef2-b8a78a990d01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.478 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e7733a-8649-44cd-842c-c2aa99432b89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701968, 'reachable_time': 16099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291624, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.495 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ba5e3547-17e2-455c-ae94-9bd74ebba988]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9266ebd7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701980, 'tstamp': 701980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291625, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9266ebd7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701983, 'tstamp': 701983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291625, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.497 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:35 np0005466031 nova_compute[235803]: 2025-10-02 12:47:35.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.500 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9266ebd7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.500 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.501 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9266ebd7-30, col_values=(('external_ids', {'iface-id': '9fee59c9-e25a-4600-b33b-de655b7e8c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:47:35.501 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:47:35 np0005466031 nova_compute[235803]: 2025-10-02 12:47:35.901 2 DEBUG oslo_concurrency.lockutils [req-10e75872-3066-4291-930d-cb4f79a26512 req-841aaf63-9658-42a6-a6dc-2b4e3fe63f7f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:35 np0005466031 nova_compute[235803]: 2025-10-02 12:47:35.903 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409255.90282, 184f3992-03ad-4908-aeb5-b14e562fa846 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:35 np0005466031 nova_compute[235803]: 2025-10-02 12:47:35.903 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] VM Started (Lifecycle Event)#033[00m
Oct  2 08:47:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:36.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:36.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:38 np0005466031 nova_compute[235803]: 2025-10-02 12:47:38.379 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:38 np0005466031 nova_compute[235803]: 2025-10-02 12:47:38.385 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409255.903587, 184f3992-03ad-4908-aeb5-b14e562fa846 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:38 np0005466031 nova_compute[235803]: 2025-10-02 12:47:38.386 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:47:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:38 np0005466031 nova_compute[235803]: 2025-10-02 12:47:38.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:38.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:38.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:39 np0005466031 nova_compute[235803]: 2025-10-02 12:47:39.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:39 np0005466031 podman[291628]: 2025-10-02 12:47:39.835861317 +0000 UTC m=+0.061980817 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:47:39 np0005466031 podman[291629]: 2025-10-02 12:47:39.866302934 +0000 UTC m=+0.095049359 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:47:40 np0005466031 nova_compute[235803]: 2025-10-02 12:47:40.095 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:40 np0005466031 nova_compute[235803]: 2025-10-02 12:47:40.099 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:47:40 np0005466031 nova_compute[235803]: 2025-10-02 12:47:40.232 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:47:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:47:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:40.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:47:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:40.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.827 2 DEBUG nova.compute.manager [req-40612402-84be-42d1-a247-77c6aebf905f req-b86e0692-886e-4dd2-bb28-fab4a346ffd3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Received event network-vif-plugged-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.828 2 DEBUG oslo_concurrency.lockutils [req-40612402-84be-42d1-a247-77c6aebf905f req-b86e0692-886e-4dd2-bb28-fab4a346ffd3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.828 2 DEBUG oslo_concurrency.lockutils [req-40612402-84be-42d1-a247-77c6aebf905f req-b86e0692-886e-4dd2-bb28-fab4a346ffd3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.829 2 DEBUG oslo_concurrency.lockutils [req-40612402-84be-42d1-a247-77c6aebf905f req-b86e0692-886e-4dd2-bb28-fab4a346ffd3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.829 2 DEBUG nova.compute.manager [req-40612402-84be-42d1-a247-77c6aebf905f req-b86e0692-886e-4dd2-bb28-fab4a346ffd3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Processing event network-vif-plugged-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.830 2 DEBUG nova.compute.manager [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.833 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409261.8328793, 184f3992-03ad-4908-aeb5-b14e562fa846 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.833 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.834 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.837 2 INFO nova.virt.libvirt.driver [-] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Instance spawned successfully.#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.838 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.927 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.931 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.932 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.932 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.933 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.933 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.934 2 DEBUG nova.virt.libvirt.driver [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:41 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.938 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:47:42 np0005466031 nova_compute[235803]: 2025-10-02 12:47:41.999 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:47:42 np0005466031 nova_compute[235803]: 2025-10-02 12:47:42.121 2 INFO nova.compute.manager [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Took 17.44 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:47:42 np0005466031 nova_compute[235803]: 2025-10-02 12:47:42.121 2 DEBUG nova.compute.manager [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:42 np0005466031 nova_compute[235803]: 2025-10-02 12:47:42.349 2 INFO nova.compute.manager [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Took 20.46 seconds to build instance.#033[00m
Oct  2 08:47:42 np0005466031 nova_compute[235803]: 2025-10-02 12:47:42.416 2 DEBUG oslo_concurrency.lockutils [None req-85ef79f1-a34c-4243-9bf6-2cda2d9202b0 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:42.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:42.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:43 np0005466031 nova_compute[235803]: 2025-10-02 12:47:43.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:44.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:44 np0005466031 nova_compute[235803]: 2025-10-02 12:47:44.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:44.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:46 np0005466031 podman[291725]: 2025-10-02 12:47:46.633077731 +0000 UTC m=+0.061044309 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:47:46 np0005466031 podman[291726]: 2025-10-02 12:47:46.655495237 +0000 UTC m=+0.082889519 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct  2 08:47:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:46.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:46.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:48 np0005466031 nova_compute[235803]: 2025-10-02 12:47:48.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:48.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:49 np0005466031 nova_compute[235803]: 2025-10-02 12:47:49.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:49 np0005466031 nova_compute[235803]: 2025-10-02 12:47:49.799 2 DEBUG nova.compute.manager [req-9afc219f-95d0-4247-8f16-9c2ca3704cbe req-0ddb1ffe-a9a8-4b6e-943a-b76487ca4bc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Received event network-vif-plugged-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:49 np0005466031 nova_compute[235803]: 2025-10-02 12:47:49.799 2 DEBUG oslo_concurrency.lockutils [req-9afc219f-95d0-4247-8f16-9c2ca3704cbe req-0ddb1ffe-a9a8-4b6e-943a-b76487ca4bc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:49 np0005466031 nova_compute[235803]: 2025-10-02 12:47:49.800 2 DEBUG oslo_concurrency.lockutils [req-9afc219f-95d0-4247-8f16-9c2ca3704cbe req-0ddb1ffe-a9a8-4b6e-943a-b76487ca4bc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:49 np0005466031 nova_compute[235803]: 2025-10-02 12:47:49.800 2 DEBUG oslo_concurrency.lockutils [req-9afc219f-95d0-4247-8f16-9c2ca3704cbe req-0ddb1ffe-a9a8-4b6e-943a-b76487ca4bc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:49 np0005466031 nova_compute[235803]: 2025-10-02 12:47:49.800 2 DEBUG nova.compute.manager [req-9afc219f-95d0-4247-8f16-9c2ca3704cbe req-0ddb1ffe-a9a8-4b6e-943a-b76487ca4bc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] No waiting events found dispatching network-vif-plugged-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:49 np0005466031 nova_compute[235803]: 2025-10-02 12:47:49.800 2 WARNING nova.compute.manager [req-9afc219f-95d0-4247-8f16-9c2ca3704cbe req-0ddb1ffe-a9a8-4b6e-943a-b76487ca4bc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Received unexpected event network-vif-plugged-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b for instance with vm_state active and task_state None.#033[00m
Oct  2 08:47:50 np0005466031 nova_compute[235803]: 2025-10-02 12:47:50.044 2 INFO nova.compute.manager [None req-db862039-38a1-4190-8e7b-a8982a85f187 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Get console output#033[00m
Oct  2 08:47:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e311 e311: 3 total, 3 up, 3 in
Oct  2 08:47:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:50.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:50.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:52.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:52.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:53 np0005466031 nova_compute[235803]: 2025-10-02 12:47:53.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:54.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:54 np0005466031 nova_compute[235803]: 2025-10-02 12:47:54.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:54.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:56 np0005466031 nova_compute[235803]: 2025-10-02 12:47:56.254 2 DEBUG nova.compute.manager [None req-3d4b9379-2daa-411a-bafb-77ec15d4fa3d b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196#033[00m
Oct  2 08:47:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:56.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:56.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e312 e312: 3 total, 3 up, 3 in
Oct  2 08:47:57 np0005466031 ovn_controller[132413]: 2025-10-02T12:47:57Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:bf:36 10.100.0.5
Oct  2 08:47:57 np0005466031 ovn_controller[132413]: 2025-10-02T12:47:57Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:bf:36 10.100.0.5
Oct  2 08:47:57 np0005466031 nova_compute[235803]: 2025-10-02 12:47:57.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:58 np0005466031 nova_compute[235803]: 2025-10-02 12:47:58.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:58.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:47:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:47:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:58.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:47:59 np0005466031 nova_compute[235803]: 2025-10-02 12:47:59.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:00 np0005466031 nova_compute[235803]: 2025-10-02 12:48:00.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:00.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:00.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:02 np0005466031 nova_compute[235803]: 2025-10-02 12:48:02.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:02 np0005466031 nova_compute[235803]: 2025-10-02 12:48:02.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:02.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:02.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:03 np0005466031 nova_compute[235803]: 2025-10-02 12:48:03.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:03 np0005466031 nova_compute[235803]: 2025-10-02 12:48:03.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:03 np0005466031 nova_compute[235803]: 2025-10-02 12:48:03.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:03 np0005466031 nova_compute[235803]: 2025-10-02 12:48:03.799 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:03 np0005466031 nova_compute[235803]: 2025-10-02 12:48:03.800 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:03 np0005466031 nova_compute[235803]: 2025-10-02 12:48:03.801 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:03 np0005466031 nova_compute[235803]: 2025-10-02 12:48:03.801 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:48:03 np0005466031 nova_compute[235803]: 2025-10-02 12:48:03.802 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:48:04 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2977171546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:48:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:48:04 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2977171546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:48:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:48:04 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1389528664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:48:04 np0005466031 nova_compute[235803]: 2025-10-02 12:48:04.267 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:04 np0005466031 nova_compute[235803]: 2025-10-02 12:48:04.434 2 DEBUG oslo_concurrency.lockutils [None req-e461f3b1-8147-4a0d-94d0-222ac03e557e b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:04 np0005466031 nova_compute[235803]: 2025-10-02 12:48:04.435 2 DEBUG oslo_concurrency.lockutils [None req-e461f3b1-8147-4a0d-94d0-222ac03e557e b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:04 np0005466031 nova_compute[235803]: 2025-10-02 12:48:04.435 2 DEBUG nova.compute.manager [None req-e461f3b1-8147-4a0d-94d0-222ac03e557e b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:04 np0005466031 nova_compute[235803]: 2025-10-02 12:48:04.440 2 DEBUG nova.compute.manager [None req-e461f3b1-8147-4a0d-94d0-222ac03e557e b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 08:48:04 np0005466031 nova_compute[235803]: 2025-10-02 12:48:04.441 2 DEBUG nova.objects.instance [None req-e461f3b1-8147-4a0d-94d0-222ac03e557e b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'flavor' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:04.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:04 np0005466031 nova_compute[235803]: 2025-10-02 12:48:04.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:04.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.155 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.155 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.156 2 DEBUG nova.virt.libvirt.driver [None req-e461f3b1-8147-4a0d-94d0-222ac03e557e b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.161 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.162 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.167 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.167 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.318 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.319 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3731MB free_disk=20.83069610595703GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.319 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.319 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.629 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.630 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 184f3992-03ad-4908-aeb5-b14e562fa846 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.630 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 2793fe28-695e-4652-b12a-bee14e192d06 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.630 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.630 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:48:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:06.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:06 np0005466031 nova_compute[235803]: 2025-10-02 12:48:06.723 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:06.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:48:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/659319244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:48:07 np0005466031 nova_compute[235803]: 2025-10-02 12:48:07.145 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:07 np0005466031 nova_compute[235803]: 2025-10-02 12:48:07.151 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:48:07 np0005466031 nova_compute[235803]: 2025-10-02 12:48:07.177 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:48:07 np0005466031 nova_compute[235803]: 2025-10-02 12:48:07.222 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:48:07 np0005466031 nova_compute[235803]: 2025-10-02 12:48:07.223 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:48:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2255234982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:48:08 np0005466031 nova_compute[235803]: 2025-10-02 12:48:08.224 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:08 np0005466031 nova_compute[235803]: 2025-10-02 12:48:08.224 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:48:08 np0005466031 nova_compute[235803]: 2025-10-02 12:48:08.224 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:48:08 np0005466031 nova_compute[235803]: 2025-10-02 12:48:08.264 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:08 np0005466031 nova_compute[235803]: 2025-10-02 12:48:08.264 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:08 np0005466031 nova_compute[235803]: 2025-10-02 12:48:08.265 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:48:08 np0005466031 nova_compute[235803]: 2025-10-02 12:48:08.265 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:08 np0005466031 nova_compute[235803]: 2025-10-02 12:48:08.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:08.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:08.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:09 np0005466031 nova_compute[235803]: 2025-10-02 12:48:09.176 2 INFO nova.virt.libvirt.driver [None req-e461f3b1-8147-4a0d-94d0-222ac03e557e b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:48:09 np0005466031 nova_compute[235803]: 2025-10-02 12:48:09.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:09 np0005466031 kernel: tapca09038c-de (unregistering): left promiscuous mode
Oct  2 08:48:09 np0005466031 NetworkManager[44907]: <info>  [1759409289.8261] device (tapca09038c-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:48:09 np0005466031 ovn_controller[132413]: 2025-10-02T12:48:09Z|00481|binding|INFO|Releasing lport ca09038c-def5-41c9-a98a-c7837558526f from this chassis (sb_readonly=0)
Oct  2 08:48:09 np0005466031 ovn_controller[132413]: 2025-10-02T12:48:09Z|00482|binding|INFO|Setting lport ca09038c-def5-41c9-a98a-c7837558526f down in Southbound
Oct  2 08:48:09 np0005466031 ovn_controller[132413]: 2025-10-02T12:48:09Z|00483|binding|INFO|Removing iface tapca09038c-de ovn-installed in OVS
Oct  2 08:48:09 np0005466031 nova_compute[235803]: 2025-10-02 12:48:09.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:09 np0005466031 nova_compute[235803]: 2025-10-02 12:48:09.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:09 np0005466031 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Oct  2 08:48:09 np0005466031 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000007c.scope: Consumed 18.134s CPU time.
Oct  2 08:48:09 np0005466031 systemd-machined[192227]: Machine qemu-53-instance-0000007c terminated.
Oct  2 08:48:09 np0005466031 podman[291872]: 2025-10-02 12:48:09.925671259 +0000 UTC m=+0.053284846 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:48:09 np0005466031 podman[291885]: 2025-10-02 12:48:09.981823466 +0000 UTC m=+0.071728737 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.013 2 INFO nova.virt.libvirt.driver [-] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance destroyed successfully.#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.014 2 DEBUG nova.objects.instance [None req-e461f3b1-8147-4a0d-94d0-222ac03e557e b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.022 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:32:15 10.100.0.13'], port_security=['fa:16:3e:21:32:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=ca09038c-def5-41c9-a98a-c7837558526f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.023 141898 INFO neutron.agent.ovn.metadata.agent [-] Port ca09038c-def5-41c9-a98a-c7837558526f in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 unbound from our chassis#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.025 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.041 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d49e87-37f8-4eca-9545-9f8b51d07e91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.044 2 DEBUG nova.compute.manager [None req-e461f3b1-8147-4a0d-94d0-222ac03e557e b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.071 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[11bc2f7f-13fc-4170-803c-00eb6ca2e7b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.074 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[20c44cff-7a19-4778-964c-7423713c14eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.100 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[44a6a134-9f51-4780-976a-6d2401c56963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.114 2 DEBUG oslo_concurrency.lockutils [None req-e461f3b1-8147-4a0d-94d0-222ac03e557e b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 5.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.117 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e0831c-910c-4932-9824-378881813a49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701968, 'reachable_time': 16099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291937, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.132 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bc5641c3-40cf-4eb4-b5c7-68345858971b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9266ebd7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701980, 'tstamp': 701980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291938, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9266ebd7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701983, 'tstamp': 701983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291938, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.133 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.179 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9266ebd7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.179 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.179 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9266ebd7-30, col_values=(('external_ids', {'iface-id': '9fee59c9-e25a-4600-b33b-de655b7e8c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:10.180 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.336 2 DEBUG nova.compute.manager [req-3d9559d6-9951-4b54-8dfe-3a43fa75de14 req-d5e5beda-2183-44d6-a2ac-3512a05ed8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-vif-unplugged-ca09038c-def5-41c9-a98a-c7837558526f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.336 2 DEBUG oslo_concurrency.lockutils [req-3d9559d6-9951-4b54-8dfe-3a43fa75de14 req-d5e5beda-2183-44d6-a2ac-3512a05ed8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.337 2 DEBUG oslo_concurrency.lockutils [req-3d9559d6-9951-4b54-8dfe-3a43fa75de14 req-d5e5beda-2183-44d6-a2ac-3512a05ed8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.337 2 DEBUG oslo_concurrency.lockutils [req-3d9559d6-9951-4b54-8dfe-3a43fa75de14 req-d5e5beda-2183-44d6-a2ac-3512a05ed8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.337 2 DEBUG nova.compute.manager [req-3d9559d6-9951-4b54-8dfe-3a43fa75de14 req-d5e5beda-2183-44d6-a2ac-3512a05ed8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] No waiting events found dispatching network-vif-unplugged-ca09038c-def5-41c9-a98a-c7837558526f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.337 2 WARNING nova.compute.manager [req-3d9559d6-9951-4b54-8dfe-3a43fa75de14 req-d5e5beda-2183-44d6-a2ac-3512a05ed8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received unexpected event network-vif-unplugged-ca09038c-def5-41c9-a98a-c7837558526f for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:48:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:48:10Z|00484|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.549 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating instance_info_cache with network_info: [{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.575 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.575 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.575 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.576 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:10 np0005466031 nova_compute[235803]: 2025-10-02 12:48:10.576 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:48:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:10.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:10.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:48:11 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/867284691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:48:12 np0005466031 nova_compute[235803]: 2025-10-02 12:48:12.494 2 DEBUG nova.compute.manager [req-c1d8cc26-ed7f-4c6a-84d9-28a3fdef159d req-5af1f056-51da-49a4-9b06-d27a9fc31ad6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:12 np0005466031 nova_compute[235803]: 2025-10-02 12:48:12.494 2 DEBUG oslo_concurrency.lockutils [req-c1d8cc26-ed7f-4c6a-84d9-28a3fdef159d req-5af1f056-51da-49a4-9b06-d27a9fc31ad6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:12 np0005466031 nova_compute[235803]: 2025-10-02 12:48:12.494 2 DEBUG oslo_concurrency.lockutils [req-c1d8cc26-ed7f-4c6a-84d9-28a3fdef159d req-5af1f056-51da-49a4-9b06-d27a9fc31ad6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:12 np0005466031 nova_compute[235803]: 2025-10-02 12:48:12.495 2 DEBUG oslo_concurrency.lockutils [req-c1d8cc26-ed7f-4c6a-84d9-28a3fdef159d req-5af1f056-51da-49a4-9b06-d27a9fc31ad6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:12 np0005466031 nova_compute[235803]: 2025-10-02 12:48:12.495 2 DEBUG nova.compute.manager [req-c1d8cc26-ed7f-4c6a-84d9-28a3fdef159d req-5af1f056-51da-49a4-9b06-d27a9fc31ad6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] No waiting events found dispatching network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:48:12 np0005466031 nova_compute[235803]: 2025-10-02 12:48:12.495 2 WARNING nova.compute.manager [req-c1d8cc26-ed7f-4c6a-84d9-28a3fdef159d req-5af1f056-51da-49a4-9b06-d27a9fc31ad6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received unexpected event network-vif-plugged-ca09038c-def5-41c9-a98a-c7837558526f for instance with vm_state stopped and task_state resize_prep.#033[00m
Oct  2 08:48:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:12.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:12.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:13 np0005466031 nova_compute[235803]: 2025-10-02 12:48:13.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:14.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:14 np0005466031 nova_compute[235803]: 2025-10-02 12:48:14.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:14.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:15 np0005466031 nova_compute[235803]: 2025-10-02 12:48:15.031 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:15 np0005466031 nova_compute[235803]: 2025-10-02 12:48:15.032 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:15 np0005466031 nova_compute[235803]: 2025-10-02 12:48:15.032 2 DEBUG nova.network.neutron [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:48:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:16 np0005466031 nova_compute[235803]: 2025-10-02 12:48:16.512 2 DEBUG nova.network.neutron [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating instance_info_cache with network_info: [{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:16.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:16 np0005466031 nova_compute[235803]: 2025-10-02 12:48:16.715 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:16.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:17 np0005466031 podman[291944]: 2025-10-02 12:48:17.621834866 +0000 UTC m=+0.050959199 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:48:17 np0005466031 podman[291943]: 2025-10-02 12:48:17.623765982 +0000 UTC m=+0.054885812 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.450 2 DEBUG oslo_concurrency.lockutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "2793fe28-695e-4652-b12a-bee14e192d06" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.451 2 DEBUG oslo_concurrency.lockutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "2793fe28-695e-4652-b12a-bee14e192d06" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.452 2 DEBUG oslo_concurrency.lockutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "2793fe28-695e-4652-b12a-bee14e192d06-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.452 2 DEBUG oslo_concurrency.lockutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "2793fe28-695e-4652-b12a-bee14e192d06-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.452 2 DEBUG oslo_concurrency.lockutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "2793fe28-695e-4652-b12a-bee14e192d06-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.453 2 INFO nova.compute.manager [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Terminating instance#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.454 2 DEBUG oslo_concurrency.lockutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "refresh_cache-2793fe28-695e-4652-b12a-bee14e192d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.454 2 DEBUG oslo_concurrency.lockutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquired lock "refresh_cache-2793fe28-695e-4652-b12a-bee14e192d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.454 2 DEBUG nova.network.neutron [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.490 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.491 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Creating file /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/c5a517a6760e4b368e0bff3b4ae77a07.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.491 2 DEBUG oslo_concurrency.processutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/c5a517a6760e4b368e0bff3b4ae77a07.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:18.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.868 2 DEBUG nova.network.neutron [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.941 2 DEBUG oslo_concurrency.processutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/c5a517a6760e4b368e0bff3b4ae77a07.tmp" returned: 1 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.942 2 DEBUG oslo_concurrency.processutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4/c5a517a6760e4b368e0bff3b4ae77a07.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.942 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Creating directory /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Oct  2 08:48:18 np0005466031 nova_compute[235803]: 2025-10-02 12:48:18.942 2 DEBUG oslo_concurrency.processutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:48:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:18.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.150 2 DEBUG oslo_concurrency.processutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.154 2 INFO nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance already shutdown.#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.161 2 INFO nova.virt.libvirt.driver [-] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Instance destroyed successfully.#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.162 2 DEBUG nova.virt.libvirt.vif [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2011106745',display_name='tempest-ServerActionsTestOtherB-server-2011106745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2011106745',id=124,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:46:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-q33tewf6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:21:32:15"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.163 2 DEBUG nova.network.os_vif_util [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:21:32:15"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.163 2 DEBUG nova.network.os_vif_util [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.164 2 DEBUG os_vif [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.166 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca09038c-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.174 2 INFO os_vif [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de')#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.181 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.182 2 DEBUG nova.virt.libvirt.driver [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.384 2 DEBUG nova.network.neutron [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.406 2 DEBUG oslo_concurrency.lockutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Releasing lock "refresh_cache-2793fe28-695e-4652-b12a-bee14e192d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.407 2 DEBUG nova.compute.manager [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.474 2 DEBUG neutronclient.v2_0.client [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ca09038c-def5-41c9-a98a-c7837558526f for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:48:19 np0005466031 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000080.scope: Deactivated successfully.
Oct  2 08:48:19 np0005466031 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000080.scope: Consumed 13.996s CPU time.
Oct  2 08:48:19 np0005466031 systemd-machined[192227]: Machine qemu-54-instance-00000080 terminated.
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.586 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.586 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.586 2 DEBUG oslo_concurrency.lockutils [None req-e053652d-6167-4195-bc42-ab60bc501c99 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.630 2 INFO nova.virt.libvirt.driver [-] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Instance destroyed successfully.#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.631 2 DEBUG nova.objects.instance [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lazy-loading 'resources' on Instance uuid 2793fe28-695e-4652-b12a-bee14e192d06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:19 np0005466031 nova_compute[235803]: 2025-10-02 12:48:19.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:20.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:20.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:21 np0005466031 nova_compute[235803]: 2025-10-02 12:48:21.268 2 DEBUG nova.compute.manager [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Received event network-changed-ca09038c-def5-41c9-a98a-c7837558526f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:21 np0005466031 nova_compute[235803]: 2025-10-02 12:48:21.269 2 DEBUG nova.compute.manager [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Refreshing instance network info cache due to event network-changed-ca09038c-def5-41c9-a98a-c7837558526f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:48:21 np0005466031 nova_compute[235803]: 2025-10-02 12:48:21.269 2 DEBUG oslo_concurrency.lockutils [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:21 np0005466031 nova_compute[235803]: 2025-10-02 12:48:21.269 2 DEBUG oslo_concurrency.lockutils [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:21 np0005466031 nova_compute[235803]: 2025-10-02 12:48:21.269 2 DEBUG nova.network.neutron [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Refreshing network info cache for port ca09038c-def5-41c9-a98a-c7837558526f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:48:22 np0005466031 nova_compute[235803]: 2025-10-02 12:48:22.337 2 INFO nova.virt.libvirt.driver [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Deleting instance files /var/lib/nova/instances/2793fe28-695e-4652-b12a-bee14e192d06_del#033[00m
Oct  2 08:48:22 np0005466031 nova_compute[235803]: 2025-10-02 12:48:22.338 2 INFO nova.virt.libvirt.driver [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Deletion of /var/lib/nova/instances/2793fe28-695e-4652-b12a-bee14e192d06_del complete#033[00m
Oct  2 08:48:22 np0005466031 nova_compute[235803]: 2025-10-02 12:48:22.406 2 INFO nova.compute.manager [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Took 3.00 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:48:22 np0005466031 nova_compute[235803]: 2025-10-02 12:48:22.406 2 DEBUG oslo.service.loopingcall [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:48:22 np0005466031 nova_compute[235803]: 2025-10-02 12:48:22.407 2 DEBUG nova.compute.manager [-] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:48:22 np0005466031 nova_compute[235803]: 2025-10-02 12:48:22.407 2 DEBUG nova.network.neutron [-] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:48:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:22.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:22.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:23 np0005466031 nova_compute[235803]: 2025-10-02 12:48:23.438 2 DEBUG nova.network.neutron [-] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:48:24 np0005466031 nova_compute[235803]: 2025-10-02 12:48:24.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:24 np0005466031 nova_compute[235803]: 2025-10-02 12:48:24.261 2 DEBUG nova.network.neutron [-] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:24 np0005466031 nova_compute[235803]: 2025-10-02 12:48:24.295 2 INFO nova.compute.manager [-] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Took 1.89 seconds to deallocate network for instance.#033[00m
Oct  2 08:48:24 np0005466031 nova_compute[235803]: 2025-10-02 12:48:24.381 2 DEBUG oslo_concurrency.lockutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:24 np0005466031 nova_compute[235803]: 2025-10-02 12:48:24.381 2 DEBUG oslo_concurrency.lockutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:24 np0005466031 nova_compute[235803]: 2025-10-02 12:48:24.547 2 DEBUG oslo_concurrency.processutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:24.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:24 np0005466031 nova_compute[235803]: 2025-10-02 12:48:24.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:24 np0005466031 nova_compute[235803]: 2025-10-02 12:48:24.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:48:24 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4231190045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:48:24 np0005466031 nova_compute[235803]: 2025-10-02 12:48:24.975 2 DEBUG oslo_concurrency.processutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:24 np0005466031 nova_compute[235803]: 2025-10-02 12:48:24.980 2 DEBUG nova.compute.provider_tree [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:48:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:24.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:25 np0005466031 nova_compute[235803]: 2025-10-02 12:48:25.007 2 DEBUG nova.scheduler.client.report [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:48:25 np0005466031 nova_compute[235803]: 2025-10-02 12:48:25.012 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409290.0113478, 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:25 np0005466031 nova_compute[235803]: 2025-10-02 12:48:25.012 2 INFO nova.compute.manager [-] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:48:25 np0005466031 nova_compute[235803]: 2025-10-02 12:48:25.033 2 DEBUG nova.compute.manager [None req-4cd1d8b3-baba-4b06-a498-09a33fdeca50 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:25 np0005466031 nova_compute[235803]: 2025-10-02 12:48:25.035 2 DEBUG oslo_concurrency.lockutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:25 np0005466031 nova_compute[235803]: 2025-10-02 12:48:25.040 2 DEBUG nova.compute.manager [None req-4cd1d8b3-baba-4b06-a498-09a33fdeca50 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: resize_migrated, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:48:25 np0005466031 nova_compute[235803]: 2025-10-02 12:48:25.079 2 INFO nova.compute.manager [None req-4cd1d8b3-baba-4b06-a498-09a33fdeca50 - - - - - -] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Oct  2 08:48:25 np0005466031 nova_compute[235803]: 2025-10-02 12:48:25.110 2 INFO nova.scheduler.client.report [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Deleted allocations for instance 2793fe28-695e-4652-b12a-bee14e192d06#033[00m
Oct  2 08:48:25 np0005466031 nova_compute[235803]: 2025-10-02 12:48:25.387 2 DEBUG oslo_concurrency.lockutils [None req-e83317f7-42ca-4cab-8cf6-6964d246bf03 537a49488e284c9ab1330c64e8072747 9768ac969bcb49a08f0cf2563ecd3980 - - default default] Lock "2793fe28-695e-4652-b12a-bee14e192d06" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:25.857 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:25.858 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:25.858 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:25 np0005466031 nova_compute[235803]: 2025-10-02 12:48:25.983 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:26 np0005466031 nova_compute[235803]: 2025-10-02 12:48:26.397 2 DEBUG nova.network.neutron [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updated VIF entry in instance network info cache for port ca09038c-def5-41c9-a98a-c7837558526f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:48:26 np0005466031 nova_compute[235803]: 2025-10-02 12:48:26.398 2 DEBUG nova.network.neutron [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating instance_info_cache with network_info: [{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:26 np0005466031 nova_compute[235803]: 2025-10-02 12:48:26.655 2 DEBUG oslo_concurrency.lockutils [req-4a7856d5-2eb1-446c-ba99-8f5c9696a585 req-483374cd-f0bd-4bf5-84f0-775c120b07be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:26.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:26.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e313 e313: 3 total, 3 up, 3 in
Oct  2 08:48:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:28.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:28.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:28 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:29 np0005466031 podman[292356]: 2025-10-02 12:48:29.045281477 +0000 UTC m=+0.022246392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 08:48:29 np0005466031 nova_compute[235803]: 2025-10-02 12:48:29.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:48:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2481788006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:48:29 np0005466031 podman[292356]: 2025-10-02 12:48:29.194724572 +0000 UTC m=+0.171689467 container create 9b6d32bbc8698c854e7336831c5339fc910ff52591a91429d4199fe702bc7365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bardeen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 08:48:29 np0005466031 systemd[1]: Started libpod-conmon-9b6d32bbc8698c854e7336831c5339fc910ff52591a91429d4199fe702bc7365.scope.
Oct  2 08:48:29 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:48:29 np0005466031 podman[292356]: 2025-10-02 12:48:29.427963331 +0000 UTC m=+0.404928246 container init 9b6d32bbc8698c854e7336831c5339fc910ff52591a91429d4199fe702bc7365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:48:29 np0005466031 podman[292356]: 2025-10-02 12:48:29.436324472 +0000 UTC m=+0.413289367 container start 9b6d32bbc8698c854e7336831c5339fc910ff52591a91429d4199fe702bc7365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bardeen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 08:48:29 np0005466031 silly_bardeen[292373]: 167 167
Oct  2 08:48:29 np0005466031 systemd[1]: libpod-9b6d32bbc8698c854e7336831c5339fc910ff52591a91429d4199fe702bc7365.scope: Deactivated successfully.
Oct  2 08:48:29 np0005466031 conmon[292373]: conmon 9b6d32bbc8698c854e73 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b6d32bbc8698c854e7336831c5339fc910ff52591a91429d4199fe702bc7365.scope/container/memory.events
Oct  2 08:48:29 np0005466031 podman[292356]: 2025-10-02 12:48:29.547726941 +0000 UTC m=+0.524691866 container attach 9b6d32bbc8698c854e7336831c5339fc910ff52591a91429d4199fe702bc7365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bardeen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 08:48:29 np0005466031 podman[292356]: 2025-10-02 12:48:29.548133443 +0000 UTC m=+0.525098338 container died 9b6d32bbc8698c854e7336831c5339fc910ff52591a91429d4199fe702bc7365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 08:48:29 np0005466031 systemd[1]: var-lib-containers-storage-overlay-b490e34402852756ea081410e48624975ad1bb0eb698c8efb09abc4d8a9891ba-merged.mount: Deactivated successfully.
Oct  2 08:48:29 np0005466031 nova_compute[235803]: 2025-10-02 12:48:29.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:29 np0005466031 podman[292356]: 2025-10-02 12:48:29.997067915 +0000 UTC m=+0.974032810 container remove 9b6d32bbc8698c854e7336831c5339fc910ff52591a91429d4199fe702bc7365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 08:48:30 np0005466031 systemd[1]: libpod-conmon-9b6d32bbc8698c854e7336831c5339fc910ff52591a91429d4199fe702bc7365.scope: Deactivated successfully.
Oct  2 08:48:30 np0005466031 podman[292399]: 2025-10-02 12:48:30.138632853 +0000 UTC m=+0.023726334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 08:48:30 np0005466031 podman[292399]: 2025-10-02 12:48:30.281686634 +0000 UTC m=+0.166780095 container create 2cb0ab1ae112c65eaa004b6c5cba2a447c4e91b06acac5731dcc76cadc37deb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_napier, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 08:48:30 np0005466031 systemd[1]: Started libpod-conmon-2cb0ab1ae112c65eaa004b6c5cba2a447c4e91b06acac5731dcc76cadc37deb9.scope.
Oct  2 08:48:30 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:48:30 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d2da1498582da1e26912e2c0ba90682dc44d58fcc8c54c56cdc0347feb21da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 08:48:30 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d2da1498582da1e26912e2c0ba90682dc44d58fcc8c54c56cdc0347feb21da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 08:48:30 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d2da1498582da1e26912e2c0ba90682dc44d58fcc8c54c56cdc0347feb21da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 08:48:30 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d2da1498582da1e26912e2c0ba90682dc44d58fcc8c54c56cdc0347feb21da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 08:48:30 np0005466031 podman[292399]: 2025-10-02 12:48:30.557785137 +0000 UTC m=+0.442878618 container init 2cb0ab1ae112c65eaa004b6c5cba2a447c4e91b06acac5731dcc76cadc37deb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_napier, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 08:48:30 np0005466031 podman[292399]: 2025-10-02 12:48:30.564695036 +0000 UTC m=+0.449788497 container start 2cb0ab1ae112c65eaa004b6c5cba2a447c4e91b06acac5731dcc76cadc37deb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 08:48:30 np0005466031 podman[292399]: 2025-10-02 12:48:30.739834792 +0000 UTC m=+0.624928273 container attach 2cb0ab1ae112c65eaa004b6c5cba2a447c4e91b06acac5731dcc76cadc37deb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_napier, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 08:48:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:30.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:30.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:31 np0005466031 strange_napier[292415]: [
Oct  2 08:48:31 np0005466031 strange_napier[292415]:    {
Oct  2 08:48:31 np0005466031 strange_napier[292415]:        "available": false,
Oct  2 08:48:31 np0005466031 strange_napier[292415]:        "ceph_device": false,
Oct  2 08:48:31 np0005466031 strange_napier[292415]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:        "lsm_data": {},
Oct  2 08:48:31 np0005466031 strange_napier[292415]:        "lvs": [],
Oct  2 08:48:31 np0005466031 strange_napier[292415]:        "path": "/dev/sr0",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:        "rejected_reasons": [
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "Has a FileSystem",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "Insufficient space (<5GB)"
Oct  2 08:48:31 np0005466031 strange_napier[292415]:        ],
Oct  2 08:48:31 np0005466031 strange_napier[292415]:        "sys_api": {
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "actuators": null,
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "device_nodes": "sr0",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "devname": "sr0",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "human_readable_size": "482.00 KB",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "id_bus": "ata",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "model": "QEMU DVD-ROM",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "nr_requests": "2",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "parent": "/dev/sr0",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "partitions": {},
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "path": "/dev/sr0",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "removable": "1",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "rev": "2.5+",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "ro": "0",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "rotational": "0",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "sas_address": "",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "sas_device_handle": "",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "scheduler_mode": "mq-deadline",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "sectors": 0,
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "sectorsize": "2048",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "size": 493568.0,
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "support_discard": "2048",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "type": "disk",
Oct  2 08:48:31 np0005466031 strange_napier[292415]:            "vendor": "QEMU"
Oct  2 08:48:31 np0005466031 strange_napier[292415]:        }
Oct  2 08:48:31 np0005466031 strange_napier[292415]:    }
Oct  2 08:48:31 np0005466031 strange_napier[292415]: ]
Oct  2 08:48:31 np0005466031 systemd[1]: libpod-2cb0ab1ae112c65eaa004b6c5cba2a447c4e91b06acac5731dcc76cadc37deb9.scope: Deactivated successfully.
Oct  2 08:48:31 np0005466031 systemd[1]: libpod-2cb0ab1ae112c65eaa004b6c5cba2a447c4e91b06acac5731dcc76cadc37deb9.scope: Consumed 1.196s CPU time.
Oct  2 08:48:31 np0005466031 podman[292399]: 2025-10-02 12:48:31.814613671 +0000 UTC m=+1.699707152 container died 2cb0ab1ae112c65eaa004b6c5cba2a447c4e91b06acac5731dcc76cadc37deb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_napier, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct  2 08:48:31 np0005466031 systemd[1]: var-lib-containers-storage-overlay-06d2da1498582da1e26912e2c0ba90682dc44d58fcc8c54c56cdc0347feb21da-merged.mount: Deactivated successfully.
Oct  2 08:48:31 np0005466031 podman[292399]: 2025-10-02 12:48:31.870999566 +0000 UTC m=+1.756093027 container remove 2cb0ab1ae112c65eaa004b6c5cba2a447c4e91b06acac5731dcc76cadc37deb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 08:48:31 np0005466031 systemd[1]: libpod-conmon-2cb0ab1ae112c65eaa004b6c5cba2a447c4e91b06acac5731dcc76cadc37deb9.scope: Deactivated successfully.
Oct  2 08:48:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:32.683 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:48:32 np0005466031 nova_compute[235803]: 2025-10-02 12:48:32.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:32.685 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:48:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:32.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:32.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:48:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:48:34 np0005466031 nova_compute[235803]: 2025-10-02 12:48:34.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:34 np0005466031 nova_compute[235803]: 2025-10-02 12:48:34.429 2 DEBUG oslo_concurrency.lockutils [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:34 np0005466031 nova_compute[235803]: 2025-10-02 12:48:34.430 2 DEBUG oslo_concurrency.lockutils [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:34 np0005466031 nova_compute[235803]: 2025-10-02 12:48:34.431 2 DEBUG nova.compute.manager [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Going to confirm migration 17 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Oct  2 08:48:34 np0005466031 nova_compute[235803]: 2025-10-02 12:48:34.630 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409299.6291592, 2793fe28-695e-4652-b12a-bee14e192d06 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:34 np0005466031 nova_compute[235803]: 2025-10-02 12:48:34.631 2 INFO nova.compute.manager [-] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:48:34 np0005466031 nova_compute[235803]: 2025-10-02 12:48:34.673 2 DEBUG nova.compute.manager [None req-dba18aa0-513b-4024-985d-5e339484f5b5 - - - - - -] [instance: 2793fe28-695e-4652-b12a-bee14e192d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:34.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:34 np0005466031 nova_compute[235803]: 2025-10-02 12:48:34.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:34.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:36 np0005466031 nova_compute[235803]: 2025-10-02 12:48:36.100 2 DEBUG neutronclient.v2_0.client [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ca09038c-def5-41c9-a98a-c7837558526f for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct  2 08:48:36 np0005466031 nova_compute[235803]: 2025-10-02 12:48:36.101 2 DEBUG oslo_concurrency.lockutils [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:48:36 np0005466031 nova_compute[235803]: 2025-10-02 12:48:36.101 2 DEBUG oslo_concurrency.lockutils [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:48:36 np0005466031 nova_compute[235803]: 2025-10-02 12:48:36.101 2 DEBUG nova.network.neutron [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:48:36 np0005466031 nova_compute[235803]: 2025-10-02 12:48:36.101 2 DEBUG nova.objects.instance [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'info_cache' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:48:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:36.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:36 np0005466031 nova_compute[235803]: 2025-10-02 12:48:36.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:36.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:37.688 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:48:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:38.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:39.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:39 np0005466031 nova_compute[235803]: 2025-10-02 12:48:39.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:39 np0005466031 nova_compute[235803]: 2025-10-02 12:48:39.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:40 np0005466031 nova_compute[235803]: 2025-10-02 12:48:40.184 2 DEBUG nova.network.neutron [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Updating instance_info_cache with network_info: [{"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:48:40 np0005466031 nova_compute[235803]: 2025-10-02 12:48:40.408 2 DEBUG oslo_concurrency.lockutils [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:48:40 np0005466031 nova_compute[235803]: 2025-10-02 12:48:40.409 2 DEBUG nova.objects.instance [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:48:40 np0005466031 podman[293726]: 2025-10-02 12:48:40.628728355 +0000 UTC m=+0.055616703 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:48:40 np0005466031 podman[293727]: 2025-10-02 12:48:40.660513611 +0000 UTC m=+0.086796592 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:48:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:40.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:41.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:41 np0005466031 nova_compute[235803]: 2025-10-02 12:48:41.694 2 DEBUG nova.storage.rbd_utils [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] removing snapshot(nova-resize) on rbd image(2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct  2 08:48:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:42.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e314 e314: 3 total, 3 up, 3 in
Oct  2 08:48:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:43.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:43 np0005466031 nova_compute[235803]: 2025-10-02 12:48:43.517 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Acquiring lock "12d6a33b-0e31-429b-8cb5-395f0571e11f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:43 np0005466031 nova_compute[235803]: 2025-10-02 12:48:43.518 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:43 np0005466031 nova_compute[235803]: 2025-10-02 12:48:43.598 2 DEBUG nova.compute.manager [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:48:43 np0005466031 nova_compute[235803]: 2025-10-02 12:48:43.714 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:43 np0005466031 nova_compute[235803]: 2025-10-02 12:48:43.715 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:43 np0005466031 nova_compute[235803]: 2025-10-02 12:48:43.727 2 DEBUG nova.virt.hardware [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:48:43 np0005466031 nova_compute[235803]: 2025-10-02 12:48:43.728 2 INFO nova.compute.claims [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Claim successful on node compute-2.ctlplane.example.com
Oct  2 08:48:43 np0005466031 nova_compute[235803]: 2025-10-02 12:48:43.900 2 DEBUG oslo_concurrency.lockutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Acquiring lock "5bc565ce-21fe-4607-b264-009e95abac90" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:43 np0005466031 nova_compute[235803]: 2025-10-02 12:48:43.900 2 DEBUG oslo_concurrency.lockutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "5bc565ce-21fe-4607-b264-009e95abac90" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:43 np0005466031 nova_compute[235803]: 2025-10-02 12:48:43.931 2 DEBUG nova.compute.manager [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:48:43 np0005466031 nova_compute[235803]: 2025-10-02 12:48:43.977 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.046 2 DEBUG oslo_concurrency.lockutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.082 2 DEBUG nova.virt.libvirt.vif [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2011106745',display_name='tempest-ServerActionsTestOtherB-server-2011106745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2011106745',id=124,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-q33tewf6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.083 2 DEBUG nova.network.os_vif_util [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "ca09038c-def5-41c9-a98a-c7837558526f", "address": "fa:16:3e:21:32:15", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca09038c-de", "ovs_interfaceid": "ca09038c-def5-41c9-a98a-c7837558526f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.084 2 DEBUG nova.network.os_vif_util [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.084 2 DEBUG os_vif [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.087 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca09038c-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.088 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.090 2 INFO os_vif [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:32:15,bridge_name='br-int',has_traffic_filtering=True,id=ca09038c-def5-41c9-a98a-c7837558526f,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca09038c-de')
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.090 2 DEBUG oslo_concurrency.lockutils [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:48:44 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4245125267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.426 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.432 2 DEBUG nova.compute.provider_tree [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.566 2 DEBUG nova.scheduler.client.report [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:48:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:44.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.825 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.826 2 DEBUG nova.compute.manager [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.829 2 DEBUG oslo_concurrency.lockutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.835 2 DEBUG nova.virt.hardware [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.836 2 INFO nova.compute.claims [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Claim successful on node compute-2.ctlplane.example.com
Oct  2 08:48:44 np0005466031 nova_compute[235803]: 2025-10-02 12:48:44.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:45.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.085 2 DEBUG nova.compute.manager [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.086 2 DEBUG nova.network.neutron [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.235 2 INFO nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.259 2 DEBUG nova.policy [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05ca431bf8724851be4667d4ba4ed232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '18d7b58e1d284072a8871e112ae7b16a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.348 2 DEBUG nova.compute.manager [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.470 2 DEBUG nova.compute.manager [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.471 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.472 2 INFO nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Creating image(s)#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.498 2 DEBUG nova.storage.rbd_utils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] rbd image 12d6a33b-0e31-429b-8cb5-395f0571e11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.523 2 DEBUG nova.storage.rbd_utils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] rbd image 12d6a33b-0e31-429b-8cb5-395f0571e11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.549 2 DEBUG nova.storage.rbd_utils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] rbd image 12d6a33b-0e31-429b-8cb5-395f0571e11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.552 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.618 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.649 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.650 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.651 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.651 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.679 2 DEBUG nova.storage.rbd_utils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] rbd image 12d6a33b-0e31-429b-8cb5-395f0571e11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:45 np0005466031 nova_compute[235803]: 2025-10-02 12:48:45.683 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 12d6a33b-0e31-429b-8cb5-395f0571e11f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.088 2 DEBUG nova.network.neutron [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Successfully created port: 2bf94cc1-652e-4ef0-812e-63d700173f4d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:48:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:48:46 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3566954741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:48:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.139 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.145 2 DEBUG nova.compute.provider_tree [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.189 2 DEBUG nova.scheduler.client.report [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.232 2 DEBUG oslo_concurrency.lockutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.233 2 DEBUG nova.compute.manager [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.235 2 DEBUG oslo_concurrency.lockutils [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 2.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.304 2 DEBUG nova.compute.manager [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.330 2 INFO nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.353 2 DEBUG nova.compute.manager [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.405 2 DEBUG oslo_concurrency.processutils [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.495 2 DEBUG nova.compute.manager [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.497 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.497 2 INFO nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Creating image(s)#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.527 2 DEBUG nova.storage.rbd_utils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.562 2 DEBUG nova.storage.rbd_utils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.591 2 DEBUG nova.storage.rbd_utils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.594 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.662 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.663 2 DEBUG oslo_concurrency.lockutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.664 2 DEBUG oslo_concurrency.lockutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.664 2 DEBUG oslo_concurrency.lockutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.687 2 DEBUG nova.storage.rbd_utils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.691 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 5bc565ce-21fe-4607-b264-009e95abac90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:46.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:48:46 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2530885159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.881 2 DEBUG oslo_concurrency.processutils [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.887 2 DEBUG nova.compute.provider_tree [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:48:46 np0005466031 nova_compute[235803]: 2025-10-02 12:48:46.934 2 DEBUG nova.scheduler.client.report [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:48:47 np0005466031 nova_compute[235803]: 2025-10-02 12:48:47.004 2 DEBUG oslo_concurrency.lockutils [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:47 np0005466031 nova_compute[235803]: 2025-10-02 12:48:47.004 2 DEBUG nova.compute.manager [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4] Resized/migrated instance is powered off. Setting vm_state to 'stopped'. _confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4805#033[00m
Oct  2 08:48:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:47.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:47 np0005466031 nova_compute[235803]: 2025-10-02 12:48:47.145 2 INFO nova.scheduler.client.report [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Deleted allocation for migration 2484406b-c08a-4a1e-845a-5539b8eb7a58#033[00m
Oct  2 08:48:47 np0005466031 nova_compute[235803]: 2025-10-02 12:48:47.224 2 DEBUG oslo_concurrency.lockutils [None req-9ce8f063-bdfd-4a75-915c-5bfc2c709db5 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "2c1bf1aa-8fcd-4688-b50c-1b331a3bc8d4" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 12.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:48 np0005466031 nova_compute[235803]: 2025-10-02 12:48:48.022 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 12d6a33b-0e31-429b-8cb5-395f0571e11f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:48 np0005466031 nova_compute[235803]: 2025-10-02 12:48:48.112 2 DEBUG nova.storage.rbd_utils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] resizing rbd image 12d6a33b-0e31-429b-8cb5-395f0571e11f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:48:48 np0005466031 nova_compute[235803]: 2025-10-02 12:48:48.414 2 DEBUG nova.network.neutron [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Successfully updated port: 2bf94cc1-652e-4ef0-812e-63d700173f4d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:48:48 np0005466031 nova_compute[235803]: 2025-10-02 12:48:48.416 2 DEBUG nova.compute.manager [req-af5dfe42-3863-4622-9ddf-8fae4495ae34 req-f23840a9-46d5-428a-8ff5-38c3234320a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Received event network-changed-2bf94cc1-652e-4ef0-812e-63d700173f4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:48 np0005466031 nova_compute[235803]: 2025-10-02 12:48:48.416 2 DEBUG nova.compute.manager [req-af5dfe42-3863-4622-9ddf-8fae4495ae34 req-f23840a9-46d5-428a-8ff5-38c3234320a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Refreshing instance network info cache due to event network-changed-2bf94cc1-652e-4ef0-812e-63d700173f4d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:48:48 np0005466031 nova_compute[235803]: 2025-10-02 12:48:48.417 2 DEBUG oslo_concurrency.lockutils [req-af5dfe42-3863-4622-9ddf-8fae4495ae34 req-f23840a9-46d5-428a-8ff5-38c3234320a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-12d6a33b-0e31-429b-8cb5-395f0571e11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:48 np0005466031 nova_compute[235803]: 2025-10-02 12:48:48.417 2 DEBUG oslo_concurrency.lockutils [req-af5dfe42-3863-4622-9ddf-8fae4495ae34 req-f23840a9-46d5-428a-8ff5-38c3234320a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-12d6a33b-0e31-429b-8cb5-395f0571e11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:48 np0005466031 nova_compute[235803]: 2025-10-02 12:48:48.417 2 DEBUG nova.network.neutron [req-af5dfe42-3863-4622-9ddf-8fae4495ae34 req-f23840a9-46d5-428a-8ff5-38c3234320a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Refreshing network info cache for port 2bf94cc1-652e-4ef0-812e-63d700173f4d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:48:48 np0005466031 nova_compute[235803]: 2025-10-02 12:48:48.449 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Acquiring lock "refresh_cache-12d6a33b-0e31-429b-8cb5-395f0571e11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:48 np0005466031 podman[294219]: 2025-10-02 12:48:48.622499676 +0000 UTC m=+0.056265982 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:48:48 np0005466031 podman[294220]: 2025-10-02 12:48:48.629875329 +0000 UTC m=+0.057548059 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct  2 08:48:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:48.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:48 np0005466031 nova_compute[235803]: 2025-10-02 12:48:48.902 2 DEBUG nova.network.neutron [req-af5dfe42-3863-4622-9ddf-8fae4495ae34 req-f23840a9-46d5-428a-8ff5-38c3234320a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:48:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:49.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:49 np0005466031 nova_compute[235803]: 2025-10-02 12:48:49.161 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 5bc565ce-21fe-4607-b264-009e95abac90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:49 np0005466031 nova_compute[235803]: 2025-10-02 12:48:49.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:49 np0005466031 nova_compute[235803]: 2025-10-02 12:48:49.327 2 DEBUG nova.storage.rbd_utils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] resizing rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:48:49 np0005466031 nova_compute[235803]: 2025-10-02 12:48:49.501 2 DEBUG nova.network.neutron [req-af5dfe42-3863-4622-9ddf-8fae4495ae34 req-f23840a9-46d5-428a-8ff5-38c3234320a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:49 np0005466031 nova_compute[235803]: 2025-10-02 12:48:49.532 2 DEBUG oslo_concurrency.lockutils [req-af5dfe42-3863-4622-9ddf-8fae4495ae34 req-f23840a9-46d5-428a-8ff5-38c3234320a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-12d6a33b-0e31-429b-8cb5-395f0571e11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:49 np0005466031 nova_compute[235803]: 2025-10-02 12:48:49.532 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Acquired lock "refresh_cache-12d6a33b-0e31-429b-8cb5-395f0571e11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:49 np0005466031 nova_compute[235803]: 2025-10-02 12:48:49.532 2 DEBUG nova.network.neutron [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:48:49 np0005466031 nova_compute[235803]: 2025-10-02 12:48:49.769 2 DEBUG nova.network.neutron [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:48:49 np0005466031 nova_compute[235803]: 2025-10-02 12:48:49.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:49 np0005466031 nova_compute[235803]: 2025-10-02 12:48:49.984 2 DEBUG nova.objects.instance [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lazy-loading 'migration_context' on Instance uuid 12d6a33b-0e31-429b-8cb5-395f0571e11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.003 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.004 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Ensure instance console log exists: /var/lib/nova/instances/12d6a33b-0e31-429b-8cb5-395f0571e11f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.004 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.005 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.005 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.109 2 DEBUG nova.objects.instance [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lazy-loading 'migration_context' on Instance uuid 5bc565ce-21fe-4607-b264-009e95abac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.127 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.127 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Ensure instance console log exists: /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.127 2 DEBUG oslo_concurrency.lockutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.128 2 DEBUG oslo_concurrency.lockutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.128 2 DEBUG oslo_concurrency.lockutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.129 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.133 2 WARNING nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.138 2 DEBUG nova.virt.libvirt.host [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.139 2 DEBUG nova.virt.libvirt.host [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.142 2 DEBUG nova.virt.libvirt.host [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.143 2 DEBUG nova.virt.libvirt.host [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.144 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.145 2 DEBUG nova.virt.hardware [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.145 2 DEBUG nova.virt.hardware [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.145 2 DEBUG nova.virt.hardware [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.146 2 DEBUG nova.virt.hardware [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.146 2 DEBUG nova.virt.hardware [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.146 2 DEBUG nova.virt.hardware [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.146 2 DEBUG nova.virt.hardware [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.147 2 DEBUG nova.virt.hardware [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.147 2 DEBUG nova.virt.hardware [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.147 2 DEBUG nova.virt.hardware [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.148 2 DEBUG nova.virt.hardware [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.151 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:48:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1118640825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.638 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.666 2 DEBUG nova.storage.rbd_utils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.670 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:50.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:50 np0005466031 nova_compute[235803]: 2025-10-02 12:48:50.910 2 DEBUG nova.network.neutron [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Updating instance_info_cache with network_info: [{"id": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "address": "fa:16:3e:df:73:05", "network": {"id": "5b8ec150-4cb3-483a-913d-587f105b83ee", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-33327562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18d7b58e1d284072a8871e112ae7b16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf94cc1-65", "ovs_interfaceid": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:51.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.030 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Releasing lock "refresh_cache-12d6a33b-0e31-429b-8cb5-395f0571e11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.030 2 DEBUG nova.compute.manager [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Instance network_info: |[{"id": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "address": "fa:16:3e:df:73:05", "network": {"id": "5b8ec150-4cb3-483a-913d-587f105b83ee", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-33327562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18d7b58e1d284072a8871e112ae7b16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf94cc1-65", "ovs_interfaceid": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.033 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Start _get_guest_xml network_info=[{"id": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "address": "fa:16:3e:df:73:05", "network": {"id": "5b8ec150-4cb3-483a-913d-587f105b83ee", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-33327562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18d7b58e1d284072a8871e112ae7b16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf94cc1-65", "ovs_interfaceid": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.038 2 WARNING nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.045 2 DEBUG nova.virt.libvirt.host [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.045 2 DEBUG nova.virt.libvirt.host [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.048 2 DEBUG nova.virt.libvirt.host [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.048 2 DEBUG nova.virt.libvirt.host [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.050 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.050 2 DEBUG nova.virt.hardware [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.050 2 DEBUG nova.virt.hardware [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.051 2 DEBUG nova.virt.hardware [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.051 2 DEBUG nova.virt.hardware [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.051 2 DEBUG nova.virt.hardware [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.051 2 DEBUG nova.virt.hardware [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.052 2 DEBUG nova.virt.hardware [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.052 2 DEBUG nova.virt.hardware [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.052 2 DEBUG nova.virt.hardware [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.052 2 DEBUG nova.virt.hardware [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.052 2 DEBUG nova.virt.hardware [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.055 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:48:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2709375719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.112 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.114 2 DEBUG nova.objects.instance [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5bc565ce-21fe-4607-b264-009e95abac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.150 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  <uuid>5bc565ce-21fe-4607-b264-009e95abac90</uuid>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  <name>instance-00000086</name>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerShowV257Test-server-1342522148</nova:name>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:48:50</nova:creationTime>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <nova:user uuid="30f1ea0145af4353ae1a243777d0e0d9">tempest-ServerShowV257Test-62462017-project-member</nova:user>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <nova:project uuid="272016746c594508b846776ac1682e86">tempest-ServerShowV257Test-62462017</nova:project>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <nova:ports/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <entry name="serial">5bc565ce-21fe-4607-b264-009e95abac90</entry>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <entry name="uuid">5bc565ce-21fe-4607-b264-009e95abac90</entry>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/5bc565ce-21fe-4607-b264-009e95abac90_disk">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/5bc565ce-21fe-4607-b264-009e95abac90_disk.config">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/console.log" append="off"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:48:51 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:48:51 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:48:51 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:48:51 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:48:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:48:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/67991890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.506 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.531 2 DEBUG nova.storage.rbd_utils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] rbd image 12d6a33b-0e31-429b-8cb5-395f0571e11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.535 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.572 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.572 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.573 2 INFO nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Using config drive#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.594 2 DEBUG nova.storage.rbd_utils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:48:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1656398606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.981 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.982 2 DEBUG nova.virt.libvirt.vif [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:48:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1002083302',display_name='tempest-NoVNCConsoleTestJSON-server-1002083302',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1002083302',id=133,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18d7b58e1d284072a8871e112ae7b16a',ramdisk_id='',reservation_id='r-j58a8on0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-NoVNCConsoleTestJSON-1394987189',owner_user_name='tempest-NoVNCConsoleTestJSON-1394987189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:48:45Z,user_data=None,user_id='05ca431bf8724851be4667d4ba4ed232',uuid=12d6a33b-0e31-429b-8cb5-395f0571e11f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "address": "fa:16:3e:df:73:05", "network": {"id": "5b8ec150-4cb3-483a-913d-587f105b83ee", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-33327562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18d7b58e1d284072a8871e112ae7b16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf94cc1-65", "ovs_interfaceid": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.982 2 DEBUG nova.network.os_vif_util [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Converting VIF {"id": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "address": "fa:16:3e:df:73:05", "network": {"id": "5b8ec150-4cb3-483a-913d-587f105b83ee", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-33327562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18d7b58e1d284072a8871e112ae7b16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf94cc1-65", "ovs_interfaceid": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.983 2 DEBUG nova.network.os_vif_util [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:73:05,bridge_name='br-int',has_traffic_filtering=True,id=2bf94cc1-652e-4ef0-812e-63d700173f4d,network=Network(5b8ec150-4cb3-483a-913d-587f105b83ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf94cc1-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:48:51 np0005466031 nova_compute[235803]: 2025-10-02 12:48:51.984 2 DEBUG nova.objects.instance [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lazy-loading 'pci_devices' on Instance uuid 12d6a33b-0e31-429b-8cb5-395f0571e11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.006 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  <uuid>12d6a33b-0e31-429b-8cb5-395f0571e11f</uuid>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  <name>instance-00000085</name>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <nova:name>tempest-NoVNCConsoleTestJSON-server-1002083302</nova:name>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:48:51</nova:creationTime>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <nova:user uuid="05ca431bf8724851be4667d4ba4ed232">tempest-NoVNCConsoleTestJSON-1394987189-project-member</nova:user>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <nova:project uuid="18d7b58e1d284072a8871e112ae7b16a">tempest-NoVNCConsoleTestJSON-1394987189</nova:project>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <nova:port uuid="2bf94cc1-652e-4ef0-812e-63d700173f4d">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <entry name="serial">12d6a33b-0e31-429b-8cb5-395f0571e11f</entry>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <entry name="uuid">12d6a33b-0e31-429b-8cb5-395f0571e11f</entry>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/12d6a33b-0e31-429b-8cb5-395f0571e11f_disk">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/12d6a33b-0e31-429b-8cb5-395f0571e11f_disk.config">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:df:73:05"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <target dev="tap2bf94cc1-65"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/12d6a33b-0e31-429b-8cb5-395f0571e11f/console.log" append="off"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:48:52 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:48:52 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:48:52 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:48:52 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.008 2 DEBUG nova.compute.manager [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Preparing to wait for external event network-vif-plugged-2bf94cc1-652e-4ef0-812e-63d700173f4d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.009 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Acquiring lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.009 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.009 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.010 2 DEBUG nova.virt.libvirt.vif [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:48:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1002083302',display_name='tempest-NoVNCConsoleTestJSON-server-1002083302',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1002083302',id=133,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18d7b58e1d284072a8871e112ae7b16a',ramdisk_id='',reservation_id='r-j58a8on0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-NoVNCConsoleTestJSON-1394987189',owner_user_name='tempest-NoVNCConsoleTestJSON-1394987189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:48:45Z,user_data=None,user_id='05ca431bf8724851be4667d4ba4ed232',uuid=12d6a33b-0e31-429b-8cb5-395f0571e11f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "address": "fa:16:3e:df:73:05", "network": {"id": "5b8ec150-4cb3-483a-913d-587f105b83ee", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-33327562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18d7b58e1d284072a8871e112ae7b16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf94cc1-65", "ovs_interfaceid": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.010 2 DEBUG nova.network.os_vif_util [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Converting VIF {"id": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "address": "fa:16:3e:df:73:05", "network": {"id": "5b8ec150-4cb3-483a-913d-587f105b83ee", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-33327562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18d7b58e1d284072a8871e112ae7b16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf94cc1-65", "ovs_interfaceid": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.011 2 DEBUG nova.network.os_vif_util [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:73:05,bridge_name='br-int',has_traffic_filtering=True,id=2bf94cc1-652e-4ef0-812e-63d700173f4d,network=Network(5b8ec150-4cb3-483a-913d-587f105b83ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf94cc1-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.011 2 DEBUG os_vif [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:73:05,bridge_name='br-int',has_traffic_filtering=True,id=2bf94cc1-652e-4ef0-812e-63d700173f4d,network=Network(5b8ec150-4cb3-483a-913d-587f105b83ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf94cc1-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.012 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.013 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.015 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bf94cc1-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.015 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2bf94cc1-65, col_values=(('external_ids', {'iface-id': '2bf94cc1-652e-4ef0-812e-63d700173f4d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:73:05', 'vm-uuid': '12d6a33b-0e31-429b-8cb5-395f0571e11f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:52 np0005466031 NetworkManager[44907]: <info>  [1759409332.0182] manager: (tap2bf94cc1-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.023 2 INFO os_vif [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:73:05,bridge_name='br-int',has_traffic_filtering=True,id=2bf94cc1-652e-4ef0-812e-63d700173f4d,network=Network(5b8ec150-4cb3-483a-913d-587f105b83ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf94cc1-65')#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.113 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.114 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.114 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] No VIF found with MAC fa:16:3e:df:73:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.115 2 INFO nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Using config drive#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.140 2 DEBUG nova.storage.rbd_utils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] rbd image 12d6a33b-0e31-429b-8cb5-395f0571e11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.536 2 INFO nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Creating config drive at /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/disk.config#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.541 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi49ki5qm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.673 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi49ki5qm" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 e315: 3 total, 3 up, 3 in
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.724 2 DEBUG nova.storage.rbd_utils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:52 np0005466031 nova_compute[235803]: 2025-10-02 12:48:52.729 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/disk.config 5bc565ce-21fe-4607-b264-009e95abac90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:48:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:52.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:48:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:53.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:53 np0005466031 nova_compute[235803]: 2025-10-02 12:48:53.465 2 DEBUG oslo_concurrency.processutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/disk.config 5bc565ce-21fe-4607-b264-009e95abac90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:53 np0005466031 nova_compute[235803]: 2025-10-02 12:48:53.466 2 INFO nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Deleting local config drive /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/disk.config because it was imported into RBD.#033[00m
Oct  2 08:48:53 np0005466031 systemd-machined[192227]: New machine qemu-56-instance-00000086.
Oct  2 08:48:53 np0005466031 systemd[1]: Started Virtual Machine qemu-56-instance-00000086.
Oct  2 08:48:53 np0005466031 nova_compute[235803]: 2025-10-02 12:48:53.562 2 INFO nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Creating config drive at /var/lib/nova/instances/12d6a33b-0e31-429b-8cb5-395f0571e11f/disk.config#033[00m
Oct  2 08:48:53 np0005466031 nova_compute[235803]: 2025-10-02 12:48:53.567 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12d6a33b-0e31-429b-8cb5-395f0571e11f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2kdzyz4m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:53 np0005466031 nova_compute[235803]: 2025-10-02 12:48:53.703 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12d6a33b-0e31-429b-8cb5-395f0571e11f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2kdzyz4m" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:53 np0005466031 nova_compute[235803]: 2025-10-02 12:48:53.731 2 DEBUG nova.storage.rbd_utils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] rbd image 12d6a33b-0e31-429b-8cb5-395f0571e11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:53 np0005466031 nova_compute[235803]: 2025-10-02 12:48:53.735 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12d6a33b-0e31-429b-8cb5-395f0571e11f/disk.config 12d6a33b-0e31-429b-8cb5-395f0571e11f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:54.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.881 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409334.8807282, 5bc565ce-21fe-4607-b264-009e95abac90 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.881 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.884 2 DEBUG nova.compute.manager [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.884 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.888 2 INFO nova.virt.libvirt.driver [-] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Instance spawned successfully.#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.888 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.908 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.909 2 DEBUG oslo_concurrency.processutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12d6a33b-0e31-429b-8cb5-395f0571e11f/disk.config 12d6a33b-0e31-429b-8cb5-395f0571e11f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.910 2 INFO nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Deleting local config drive /var/lib/nova/instances/12d6a33b-0e31-429b-8cb5-395f0571e11f/disk.config because it was imported into RBD.#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.913 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.924 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.924 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.925 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.925 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.926 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.926 2 DEBUG nova.virt.libvirt.driver [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.943 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.944 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409334.883712, 5bc565ce-21fe-4607-b264-009e95abac90 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.944 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] VM Started (Lifecycle Event)#033[00m
Oct  2 08:48:54 np0005466031 kernel: tap2bf94cc1-65: entered promiscuous mode
Oct  2 08:48:54 np0005466031 NetworkManager[44907]: <info>  [1759409334.9693] manager: (tap2bf94cc1-65): new Tun device (/org/freedesktop/NetworkManager/Devices/228)
Oct  2 08:48:54 np0005466031 systemd-udevd[294649]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:48:54 np0005466031 ovn_controller[132413]: 2025-10-02T12:48:54Z|00485|binding|INFO|Claiming lport 2bf94cc1-652e-4ef0-812e-63d700173f4d for this chassis.
Oct  2 08:48:54 np0005466031 ovn_controller[132413]: 2025-10-02T12:48:54Z|00486|binding|INFO|2bf94cc1-652e-4ef0-812e-63d700173f4d: Claiming fa:16:3e:df:73:05 10.100.0.11
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:54 np0005466031 NetworkManager[44907]: <info>  [1759409334.9843] device (tap2bf94cc1-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:48:54 np0005466031 NetworkManager[44907]: <info>  [1759409334.9857] device (tap2bf94cc1-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:48:54 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.985 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:54.987 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:73:05 10.100.0.11'], port_security=['fa:16:3e:df:73:05 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '12d6a33b-0e31-429b-8cb5-395f0571e11f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b8ec150-4cb3-483a-913d-587f105b83ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18d7b58e1d284072a8871e112ae7b16a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bce7bf72-338b-41f0-9ed3-5e0c6461bd8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11bff0fe-20e2-4f54-a342-0964e9ff3c7c, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=2bf94cc1-652e-4ef0-812e-63d700173f4d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:48:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:54.989 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 2bf94cc1-652e-4ef0-812e-63d700173f4d in datapath 5b8ec150-4cb3-483a-913d-587f105b83ee bound to our chassis#033[00m
Oct  2 08:48:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:54.991 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b8ec150-4cb3-483a-913d-587f105b83ee#033[00m
Oct  2 08:48:55 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.999 2 INFO nova.compute.manager [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Took 8.50 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:48:55 np0005466031 nova_compute[235803]: 2025-10-02 12:48:54.999 2 DEBUG nova.compute.manager [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:55 np0005466031 nova_compute[235803]: 2025-10-02 12:48:55.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:48:54Z|00487|binding|INFO|Setting lport 2bf94cc1-652e-4ef0-812e-63d700173f4d ovn-installed in OVS
Oct  2 08:48:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:48:54Z|00488|binding|INFO|Setting lport 2bf94cc1-652e-4ef0-812e-63d700173f4d up in Southbound
Oct  2 08:48:55 np0005466031 nova_compute[235803]: 2025-10-02 12:48:55.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.009 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7ae1e9-1d15-46d9-9f8b-661f8d2540e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 nova_compute[235803]: 2025-10-02 12:48:55.009 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.011 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5b8ec150-41 in ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.013 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5b8ec150-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.013 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cba851a6-6a32-4d89-b0a3-c8cf266e8e98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.014 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f384d759-cf8b-4b9c-9ffc-88dcb4c36c41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:55 np0005466031 systemd-machined[192227]: New machine qemu-57-instance-00000085.
Oct  2 08:48:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:55.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:55 np0005466031 systemd[1]: Started Virtual Machine qemu-57-instance-00000085.
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.037 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[d3864666-60f8-4291-a973-0194fbe5d793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 nova_compute[235803]: 2025-10-02 12:48:55.050 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.052 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[806973bc-4277-423e-9622-998554c942ea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 nova_compute[235803]: 2025-10-02 12:48:55.067 2 INFO nova.compute.manager [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Took 11.05 seconds to build instance.#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.084 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[4941fbdc-a74c-434e-9079-e913ce435ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.090 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[15f8225f-4c93-4209-a06a-c9d6425bc6f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 NetworkManager[44907]: <info>  [1759409335.0918] manager: (tap5b8ec150-40): new Veth device (/org/freedesktop/NetworkManager/Devices/229)
Oct  2 08:48:55 np0005466031 nova_compute[235803]: 2025-10-02 12:48:55.098 2 DEBUG oslo_concurrency.lockutils [None req-f92ba329-2c0f-44de-aed1-e2f8ed93ae0a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "5bc565ce-21fe-4607-b264-009e95abac90" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.132 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[187a4b07-3c7d-4947-8837-946a567a29e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.139 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b796eb9b-87cd-4bb6-99dd-1edfc4a119a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 NetworkManager[44907]: <info>  [1759409335.1652] device (tap5b8ec150-40): carrier: link connected
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.173 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfda29f-504b-4b4d-bd92-822e5886fcf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.191 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[13d0b6a8-9044-45b6-b596-05ce716b6c6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b8ec150-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:23:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719075, 'reachable_time': 18117, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294700, 'error': None, 'target': 'ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.207 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[732e8fff-f78e-46d9-babf-b9d75128f4a7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:230e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719075, 'tstamp': 719075}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294701, 'error': None, 'target': 'ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.231 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[db07127a-8e20-44c4-8b2c-73501c5b6307]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b8ec150-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:23:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719075, 'reachable_time': 18117, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294702, 'error': None, 'target': 'ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.280 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a7a24e-3218-4849-a031-7aedf41c7317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.355 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[747f16d1-220e-4f7e-b838-67c59eca8341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.357 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b8ec150-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.357 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.359 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b8ec150-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:55 np0005466031 nova_compute[235803]: 2025-10-02 12:48:55.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:55 np0005466031 NetworkManager[44907]: <info>  [1759409335.3616] manager: (tap5b8ec150-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Oct  2 08:48:55 np0005466031 kernel: tap5b8ec150-40: entered promiscuous mode
Oct  2 08:48:55 np0005466031 nova_compute[235803]: 2025-10-02 12:48:55.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.366 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b8ec150-40, col_values=(('external_ids', {'iface-id': '0928ff0b-fe81-493c-9bda-12e5e8f90411'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:48:55Z|00489|binding|INFO|Releasing lport 0928ff0b-fe81-493c-9bda-12e5e8f90411 from this chassis (sb_readonly=0)
Oct  2 08:48:55 np0005466031 nova_compute[235803]: 2025-10-02 12:48:55.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:55 np0005466031 nova_compute[235803]: 2025-10-02 12:48:55.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.387 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5b8ec150-4cb3-483a-913d-587f105b83ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5b8ec150-4cb3-483a-913d-587f105b83ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.388 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd0be7e-1f5f-4516-829a-732c34134848]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.388 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-5b8ec150-4cb3-483a-913d-587f105b83ee
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/5b8ec150-4cb3-483a-913d-587f105b83ee.pid.haproxy
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 5b8ec150-4cb3-483a-913d-587f105b83ee
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:48:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:48:55.390 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee', 'env', 'PROCESS_TAG=haproxy-5b8ec150-4cb3-483a-913d-587f105b83ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5b8ec150-4cb3-483a-913d-587f105b83ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:48:55 np0005466031 podman[294734]: 2025-10-02 12:48:55.791053247 +0000 UTC m=+0.023220230 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:48:56 np0005466031 podman[294734]: 2025-10-02 12:48:56.048844553 +0000 UTC m=+0.281011516 container create 4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:48:56 np0005466031 systemd[1]: Started libpod-conmon-4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f.scope.
Oct  2 08:48:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:56 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:48:56 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fb9324fa1db6f7a0a112841884c36d6e027fd97ffaa004402e4948549c7c61/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:48:56 np0005466031 podman[294734]: 2025-10-02 12:48:56.178256681 +0000 UTC m=+0.410423644 container init 4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:48:56 np0005466031 podman[294734]: 2025-10-02 12:48:56.18933575 +0000 UTC m=+0.421502713 container start 4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 08:48:56 np0005466031 neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee[294791]: [NOTICE]   (294797) : New worker (294799) forked
Oct  2 08:48:56 np0005466031 neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee[294791]: [NOTICE]   (294797) : Loading success.
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.230 2 DEBUG nova.compute.manager [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Received event network-vif-plugged-2bf94cc1-652e-4ef0-812e-63d700173f4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.231 2 DEBUG oslo_concurrency.lockutils [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.231 2 DEBUG oslo_concurrency.lockutils [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.232 2 DEBUG oslo_concurrency.lockutils [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.232 2 DEBUG nova.compute.manager [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Processing event network-vif-plugged-2bf94cc1-652e-4ef0-812e-63d700173f4d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.232 2 DEBUG nova.compute.manager [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Received event network-vif-plugged-2bf94cc1-652e-4ef0-812e-63d700173f4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.232 2 DEBUG oslo_concurrency.lockutils [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.233 2 DEBUG oslo_concurrency.lockutils [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.233 2 DEBUG oslo_concurrency.lockutils [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.233 2 DEBUG nova.compute.manager [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] No waiting events found dispatching network-vif-plugged-2bf94cc1-652e-4ef0-812e-63d700173f4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.234 2 WARNING nova.compute.manager [req-4a1828b0-e3ba-4603-a512-c011eccc15e7 req-14cda412-31bc-4160-8649-d47ada49bf92 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Received unexpected event network-vif-plugged-2bf94cc1-652e-4ef0-812e-63d700173f4d for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.621 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409336.621098, 12d6a33b-0e31-429b-8cb5-395f0571e11f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.622 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] VM Started (Lifecycle Event)#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.623 2 DEBUG nova.compute.manager [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.636 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.642 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.643 2 INFO nova.virt.libvirt.driver [-] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Instance spawned successfully.#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.644 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.649 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.678 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.678 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409336.6212552, 12d6a33b-0e31-429b-8cb5-395f0571e11f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.678 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.685 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.685 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.686 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.686 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.687 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.687 2 DEBUG nova.virt.libvirt.driver [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.717 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.722 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409336.6302688, 12d6a33b-0e31-429b-8cb5-395f0571e11f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.722 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:48:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:56.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.855 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.859 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.899 2 INFO nova.compute.manager [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Took 11.43 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.900 2 DEBUG nova.compute.manager [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:56 np0005466031 nova_compute[235803]: 2025-10-02 12:48:56.909 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:48:57 np0005466031 nova_compute[235803]: 2025-10-02 12:48:57.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:57.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:57 np0005466031 nova_compute[235803]: 2025-10-02 12:48:57.118 2 INFO nova.compute.manager [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Took 13.45 seconds to build instance.#033[00m
Oct  2 08:48:57 np0005466031 nova_compute[235803]: 2025-10-02 12:48:57.205 2 DEBUG oslo_concurrency.lockutils [None req-a6db978e-587f-4e40-a00c-c5a90bea0ec8 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:58 np0005466031 nova_compute[235803]: 2025-10-02 12:48:58.595 2 INFO nova.compute.manager [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Rebuilding instance#033[00m
Oct  2 08:48:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:58.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:58 np0005466031 nova_compute[235803]: 2025-10-02 12:48:58.847 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:58 np0005466031 nova_compute[235803]: 2025-10-02 12:48:58.921 2 DEBUG nova.objects.instance [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5bc565ce-21fe-4607-b264-009e95abac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:58 np0005466031 nova_compute[235803]: 2025-10-02 12:48:58.966 2 DEBUG nova.compute.manager [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:48:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:59.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:59 np0005466031 nova_compute[235803]: 2025-10-02 12:48:59.028 2 DEBUG nova.objects.instance [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5bc565ce-21fe-4607-b264-009e95abac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:59 np0005466031 nova_compute[235803]: 2025-10-02 12:48:59.049 2 DEBUG nova.objects.instance [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5bc565ce-21fe-4607-b264-009e95abac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:59 np0005466031 nova_compute[235803]: 2025-10-02 12:48:59.084 2 DEBUG nova.objects.instance [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lazy-loading 'resources' on Instance uuid 5bc565ce-21fe-4607-b264-009e95abac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:59 np0005466031 nova_compute[235803]: 2025-10-02 12:48:59.111 2 DEBUG nova.objects.instance [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lazy-loading 'migration_context' on Instance uuid 5bc565ce-21fe-4607-b264-009e95abac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:59 np0005466031 nova_compute[235803]: 2025-10-02 12:48:59.130 2 DEBUG nova.objects.instance [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:48:59 np0005466031 nova_compute[235803]: 2025-10-02 12:48:59.135 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:48:59 np0005466031 nova_compute[235803]: 2025-10-02 12:48:59.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:01.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:01 np0005466031 nova_compute[235803]: 2025-10-02 12:49:01.596 2 DEBUG nova.compute.manager [None req-90672a5d-9235-4df7-ac69-7be9f4272e6e 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196#033[00m
Oct  2 08:49:01 np0005466031 nova_compute[235803]: 2025-10-02 12:49:01.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:49:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/832610661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:49:02 np0005466031 nova_compute[235803]: 2025-10-02 12:49:02.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:02 np0005466031 nova_compute[235803]: 2025-10-02 12:49:02.294 2 DEBUG nova.compute.manager [None req-19bb3672-52be-47e0-8c6d-2bf29fd4891c 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196#033[00m
Oct  2 08:49:02 np0005466031 nova_compute[235803]: 2025-10-02 12:49:02.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:02.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:03.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.126 2 DEBUG oslo_concurrency.lockutils [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Acquiring lock "12d6a33b-0e31-429b-8cb5-395f0571e11f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.127 2 DEBUG oslo_concurrency.lockutils [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.127 2 DEBUG oslo_concurrency.lockutils [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Acquiring lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.127 2 DEBUG oslo_concurrency.lockutils [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.128 2 DEBUG oslo_concurrency.lockutils [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.129 2 INFO nova.compute.manager [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Terminating instance#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.129 2 DEBUG nova.compute.manager [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:49:03 np0005466031 kernel: tap2bf94cc1-65 (unregistering): left promiscuous mode
Oct  2 08:49:03 np0005466031 NetworkManager[44907]: <info>  [1759409343.1999] device (tap2bf94cc1-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:49:03 np0005466031 ovn_controller[132413]: 2025-10-02T12:49:03Z|00490|binding|INFO|Releasing lport 2bf94cc1-652e-4ef0-812e-63d700173f4d from this chassis (sb_readonly=0)
Oct  2 08:49:03 np0005466031 ovn_controller[132413]: 2025-10-02T12:49:03Z|00491|binding|INFO|Setting lport 2bf94cc1-652e-4ef0-812e-63d700173f4d down in Southbound
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:03 np0005466031 ovn_controller[132413]: 2025-10-02T12:49:03Z|00492|binding|INFO|Removing iface tap2bf94cc1-65 ovn-installed in OVS
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.224 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:73:05 10.100.0.11'], port_security=['fa:16:3e:df:73:05 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '12d6a33b-0e31-429b-8cb5-395f0571e11f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b8ec150-4cb3-483a-913d-587f105b83ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18d7b58e1d284072a8871e112ae7b16a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bce7bf72-338b-41f0-9ed3-5e0c6461bd8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11bff0fe-20e2-4f54-a342-0964e9ff3c7c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=2bf94cc1-652e-4ef0-812e-63d700173f4d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.226 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 2bf94cc1-652e-4ef0-812e-63d700173f4d in datapath 5b8ec150-4cb3-483a-913d-587f105b83ee unbound from our chassis#033[00m
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.227 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5b8ec150-4cb3-483a-913d-587f105b83ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.228 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[11d3f682-4617-4d8d-88a4-ab23c7a5da29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.229 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee namespace which is not needed anymore#033[00m
Oct  2 08:49:03 np0005466031 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000085.scope: Deactivated successfully.
Oct  2 08:49:03 np0005466031 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000085.scope: Consumed 8.059s CPU time.
Oct  2 08:49:03 np0005466031 systemd-machined[192227]: Machine qemu-57-instance-00000085 terminated.
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.361 2 INFO nova.virt.libvirt.driver [-] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Instance destroyed successfully.#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.361 2 DEBUG nova.objects.instance [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lazy-loading 'resources' on Instance uuid 12d6a33b-0e31-429b-8cb5-395f0571e11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.379 2 DEBUG nova.virt.libvirt.vif [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:48:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1002083302',display_name='tempest-NoVNCConsoleTestJSON-server-1002083302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1002083302',id=133,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18d7b58e1d284072a8871e112ae7b16a',ramdisk_id='',reservation_id='r-j58a8on0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-NoVNCConsoleTestJSON-1394987189',owner_user_name='tempest-NoVNCConsoleTestJSON-1394987189-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:57Z,user_data=None,user_id='05ca431bf8724851be4667d4ba4ed232',uuid=12d6a33b-0e31-429b-8cb5-395f0571e11f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "address": "fa:16:3e:df:73:05", "network": {"id": "5b8ec150-4cb3-483a-913d-587f105b83ee", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-33327562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18d7b58e1d284072a8871e112ae7b16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf94cc1-65", "ovs_interfaceid": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.381 2 DEBUG nova.network.os_vif_util [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Converting VIF {"id": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "address": "fa:16:3e:df:73:05", "network": {"id": "5b8ec150-4cb3-483a-913d-587f105b83ee", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-33327562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18d7b58e1d284072a8871e112ae7b16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf94cc1-65", "ovs_interfaceid": "2bf94cc1-652e-4ef0-812e-63d700173f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.382 2 DEBUG nova.network.os_vif_util [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:73:05,bridge_name='br-int',has_traffic_filtering=True,id=2bf94cc1-652e-4ef0-812e-63d700173f4d,network=Network(5b8ec150-4cb3-483a-913d-587f105b83ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf94cc1-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.382 2 DEBUG os_vif [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:73:05,bridge_name='br-int',has_traffic_filtering=True,id=2bf94cc1-652e-4ef0-812e-63d700173f4d,network=Network(5b8ec150-4cb3-483a-913d-587f105b83ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf94cc1-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bf94cc1-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.392 2 INFO os_vif [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:73:05,bridge_name='br-int',has_traffic_filtering=True,id=2bf94cc1-652e-4ef0-812e-63d700173f4d,network=Network(5b8ec150-4cb3-483a-913d-587f105b83ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf94cc1-65')#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.484 2 DEBUG nova.compute.manager [req-4051e329-0140-47ca-b727-dae371fcbfb1 req-ca61ada9-c669-4c93-86b7-fd5f054f310c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Received event network-vif-unplugged-2bf94cc1-652e-4ef0-812e-63d700173f4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.485 2 DEBUG oslo_concurrency.lockutils [req-4051e329-0140-47ca-b727-dae371fcbfb1 req-ca61ada9-c669-4c93-86b7-fd5f054f310c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.485 2 DEBUG oslo_concurrency.lockutils [req-4051e329-0140-47ca-b727-dae371fcbfb1 req-ca61ada9-c669-4c93-86b7-fd5f054f310c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.485 2 DEBUG oslo_concurrency.lockutils [req-4051e329-0140-47ca-b727-dae371fcbfb1 req-ca61ada9-c669-4c93-86b7-fd5f054f310c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.485 2 DEBUG nova.compute.manager [req-4051e329-0140-47ca-b727-dae371fcbfb1 req-ca61ada9-c669-4c93-86b7-fd5f054f310c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] No waiting events found dispatching network-vif-unplugged-2bf94cc1-652e-4ef0-812e-63d700173f4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.485 2 DEBUG nova.compute.manager [req-4051e329-0140-47ca-b727-dae371fcbfb1 req-ca61ada9-c669-4c93-86b7-fd5f054f310c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Received event network-vif-unplugged-2bf94cc1-652e-4ef0-812e-63d700173f4d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:49:03 np0005466031 neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee[294791]: [NOTICE]   (294797) : haproxy version is 2.8.14-c23fe91
Oct  2 08:49:03 np0005466031 neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee[294791]: [NOTICE]   (294797) : path to executable is /usr/sbin/haproxy
Oct  2 08:49:03 np0005466031 neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee[294791]: [WARNING]  (294797) : Exiting Master process...
Oct  2 08:49:03 np0005466031 neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee[294791]: [WARNING]  (294797) : Exiting Master process...
Oct  2 08:49:03 np0005466031 neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee[294791]: [ALERT]    (294797) : Current worker (294799) exited with code 143 (Terminated)
Oct  2 08:49:03 np0005466031 neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee[294791]: [WARNING]  (294797) : All workers exited. Exiting... (0)
Oct  2 08:49:03 np0005466031 systemd[1]: libpod-4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f.scope: Deactivated successfully.
Oct  2 08:49:03 np0005466031 podman[294833]: 2025-10-02 12:49:03.518485296 +0000 UTC m=+0.187297066 container died 4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:03 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f-userdata-shm.mount: Deactivated successfully.
Oct  2 08:49:03 np0005466031 systemd[1]: var-lib-containers-storage-overlay-e2fb9324fa1db6f7a0a112841884c36d6e027fd97ffaa004402e4948549c7c61-merged.mount: Deactivated successfully.
Oct  2 08:49:03 np0005466031 podman[294833]: 2025-10-02 12:49:03.667636813 +0000 UTC m=+0.336448573 container cleanup 4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.672 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.672 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.672 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.673 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.673 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:03 np0005466031 systemd[1]: libpod-conmon-4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f.scope: Deactivated successfully.
Oct  2 08:49:03 np0005466031 podman[294888]: 2025-10-02 12:49:03.739702049 +0000 UTC m=+0.050688511 container remove 4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.746 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6593ebc9-45c0-4173-a00b-c7ba98441c47]: (4, ('Thu Oct  2 12:49:03 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee (4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f)\n4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f\nThu Oct  2 12:49:03 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee (4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f)\n4040ca292b7a1dfab22a3b888f7d3a033ccc3a3fffd3982628d780d0ff5ea55f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.749 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2def4b7f-2ada-4b38-bb8c-5ff87e5bb8dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.750 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b8ec150-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:03 np0005466031 kernel: tap5b8ec150-40: left promiscuous mode
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:03 np0005466031 nova_compute[235803]: 2025-10-02 12:49:03.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.773 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0248081c-2ea0-4031-9e0b-a5dc0b5241a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.800 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1eb1e2-638c-4a15-ad84-6a99b19783cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.804 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[896c4871-ada0-49ab-aa3a-18a1255c77e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.820 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cb7fdbd9-a208-4bfd-bb5a-d298eeb97229]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719067, 'reachable_time': 30696, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294945, 'error': None, 'target': 'ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:03 np0005466031 systemd[1]: run-netns-ovnmeta\x2d5b8ec150\x2d4cb3\x2d483a\x2d913d\x2d587f105b83ee.mount: Deactivated successfully.
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.831 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5b8ec150-4cb3-483a-913d-587f105b83ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:49:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:03.831 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4efa9d-49c7-4cd2-ae8a-db46f4929d6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:49:04 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1560321428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.142 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.275 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.277 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.284 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.284 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.291 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.291 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.504 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.505 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3914MB free_disk=20.900550842285156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.506 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.508 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.600 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 184f3992-03ad-4908-aeb5-b14e562fa846 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.601 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 12d6a33b-0e31-429b-8cb5-395f0571e11f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.601 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 5bc565ce-21fe-4607-b264-009e95abac90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.601 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.602 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.635 2 INFO nova.virt.libvirt.driver [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Deleting instance files /var/lib/nova/instances/12d6a33b-0e31-429b-8cb5-395f0571e11f_del#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.636 2 INFO nova.virt.libvirt.driver [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Deletion of /var/lib/nova/instances/12d6a33b-0e31-429b-8cb5-395f0571e11f_del complete#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.707 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.745 2 INFO nova.compute.manager [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Took 1.61 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.746 2 DEBUG oslo.service.loopingcall [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.746 2 DEBUG nova.compute.manager [-] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.746 2 DEBUG nova.network.neutron [-] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:49:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:04.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:04 np0005466031 nova_compute[235803]: 2025-10-02 12:49:04.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:05.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:49:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1519361877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.144 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.149 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.163 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:49:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:49:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1372453133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.195 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.195 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.449 2 DEBUG nova.network.neutron [-] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.486 2 INFO nova.compute.manager [-] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Took 0.74 seconds to deallocate network for instance.#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.567 2 DEBUG oslo_concurrency.lockutils [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.568 2 DEBUG oslo_concurrency.lockutils [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.579 2 DEBUG nova.compute.manager [req-fa5c3ace-8f3c-4445-9cba-2a9968e80428 req-96066d41-ec10-4f58-9deb-5acc1263c046 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Received event network-vif-deleted-2bf94cc1-652e-4ef0-812e-63d700173f4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.654 2 DEBUG oslo_concurrency.processutils [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.686 2 DEBUG nova.compute.manager [req-b2bb08d2-45ed-4118-b396-c63e65fd4afd req-dfb6cfbc-3622-43c4-a7dc-91a566ca3c46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Received event network-vif-plugged-2bf94cc1-652e-4ef0-812e-63d700173f4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.687 2 DEBUG oslo_concurrency.lockutils [req-b2bb08d2-45ed-4118-b396-c63e65fd4afd req-dfb6cfbc-3622-43c4-a7dc-91a566ca3c46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.687 2 DEBUG oslo_concurrency.lockutils [req-b2bb08d2-45ed-4118-b396-c63e65fd4afd req-dfb6cfbc-3622-43c4-a7dc-91a566ca3c46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.688 2 DEBUG oslo_concurrency.lockutils [req-b2bb08d2-45ed-4118-b396-c63e65fd4afd req-dfb6cfbc-3622-43c4-a7dc-91a566ca3c46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.688 2 DEBUG nova.compute.manager [req-b2bb08d2-45ed-4118-b396-c63e65fd4afd req-dfb6cfbc-3622-43c4-a7dc-91a566ca3c46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] No waiting events found dispatching network-vif-plugged-2bf94cc1-652e-4ef0-812e-63d700173f4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:49:05 np0005466031 nova_compute[235803]: 2025-10-02 12:49:05.688 2 WARNING nova.compute.manager [req-b2bb08d2-45ed-4118-b396-c63e65fd4afd req-dfb6cfbc-3622-43c4-a7dc-91a566ca3c46 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Received unexpected event network-vif-plugged-2bf94cc1-652e-4ef0-812e-63d700173f4d for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:49:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:49:06 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1743317268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:49:06 np0005466031 nova_compute[235803]: 2025-10-02 12:49:06.102 2 DEBUG oslo_concurrency.processutils [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:06 np0005466031 nova_compute[235803]: 2025-10-02 12:49:06.108 2 DEBUG nova.compute.provider_tree [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:49:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:06 np0005466031 nova_compute[235803]: 2025-10-02 12:49:06.129 2 DEBUG nova.scheduler.client.report [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:49:06 np0005466031 nova_compute[235803]: 2025-10-02 12:49:06.182 2 DEBUG oslo_concurrency.lockutils [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:06 np0005466031 nova_compute[235803]: 2025-10-02 12:49:06.252 2 INFO nova.scheduler.client.report [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Deleted allocations for instance 12d6a33b-0e31-429b-8cb5-395f0571e11f#033[00m
Oct  2 08:49:06 np0005466031 nova_compute[235803]: 2025-10-02 12:49:06.410 2 DEBUG oslo_concurrency.lockutils [None req-fd1a8bc6-6904-43a6-af01-50f958da53bb 05ca431bf8724851be4667d4ba4ed232 18d7b58e1d284072a8871e112ae7b16a - - default default] Lock "12d6a33b-0e31-429b-8cb5-395f0571e11f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:06.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:49:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:07.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:49:07 np0005466031 nova_compute[235803]: 2025-10-02 12:49:07.195 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:07 np0005466031 nova_compute[235803]: 2025-10-02 12:49:07.196 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:49:07 np0005466031 nova_compute[235803]: 2025-10-02 12:49:07.585 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:49:07 np0005466031 nova_compute[235803]: 2025-10-02 12:49:07.586 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:49:07 np0005466031 nova_compute[235803]: 2025-10-02 12:49:07.586 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:49:08 np0005466031 nova_compute[235803]: 2025-10-02 12:49:08.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:08.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:09.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:09 np0005466031 nova_compute[235803]: 2025-10-02 12:49:09.186 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:49:09 np0005466031 nova_compute[235803]: 2025-10-02 12:49:09.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:10 np0005466031 nova_compute[235803]: 2025-10-02 12:49:10.597 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Updating instance_info_cache with network_info: [{"id": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "address": "fa:16:3e:1e:bf:36", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc6c1baa-6e", "ovs_interfaceid": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:49:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:10.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:10 np0005466031 nova_compute[235803]: 2025-10-02 12:49:10.806 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:49:10 np0005466031 nova_compute[235803]: 2025-10-02 12:49:10.807 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:49:10 np0005466031 nova_compute[235803]: 2025-10-02 12:49:10.807 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:10 np0005466031 nova_compute[235803]: 2025-10-02 12:49:10.808 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:10 np0005466031 nova_compute[235803]: 2025-10-02 12:49:10.808 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:10 np0005466031 nova_compute[235803]: 2025-10-02 12:49:10.808 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:49:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:11.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:11 np0005466031 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000086.scope: Deactivated successfully.
Oct  2 08:49:11 np0005466031 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000086.scope: Consumed 13.976s CPU time.
Oct  2 08:49:11 np0005466031 systemd-machined[192227]: Machine qemu-56-instance-00000086 terminated.
Oct  2 08:49:11 np0005466031 podman[295031]: 2025-10-02 12:49:11.610523968 +0000 UTC m=+0.055315224 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 08:49:11 np0005466031 podman[295032]: 2025-10-02 12:49:11.635532709 +0000 UTC m=+0.080162061 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  2 08:49:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:49:11Z|00493|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:49:11 np0005466031 nova_compute[235803]: 2025-10-02 12:49:11.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:12 np0005466031 ovn_controller[132413]: 2025-10-02T12:49:12Z|00494|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:49:12 np0005466031 nova_compute[235803]: 2025-10-02 12:49:12.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:12 np0005466031 nova_compute[235803]: 2025-10-02 12:49:12.200 2 INFO nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 08:49:12 np0005466031 nova_compute[235803]: 2025-10-02 12:49:12.206 2 INFO nova.virt.libvirt.driver [-] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Instance destroyed successfully.#033[00m
Oct  2 08:49:12 np0005466031 nova_compute[235803]: 2025-10-02 12:49:12.210 2 INFO nova.virt.libvirt.driver [-] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Instance destroyed successfully.#033[00m
Oct  2 08:49:12 np0005466031 nova_compute[235803]: 2025-10-02 12:49:12.642 2 INFO nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Deleting instance files /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90_del#033[00m
Oct  2 08:49:12 np0005466031 nova_compute[235803]: 2025-10-02 12:49:12.643 2 INFO nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Deletion of /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90_del complete#033[00m
Oct  2 08:49:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:12.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:12 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:49:12 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:49:12 np0005466031 nova_compute[235803]: 2025-10-02 12:49:12.897 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:49:12 np0005466031 nova_compute[235803]: 2025-10-02 12:49:12.898 2 INFO nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Creating image(s)#033[00m
Oct  2 08:49:12 np0005466031 nova_compute[235803]: 2025-10-02 12:49:12.923 2 DEBUG nova.storage.rbd_utils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:49:12 np0005466031 nova_compute[235803]: 2025-10-02 12:49:12.951 2 DEBUG nova.storage.rbd_utils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:49:12 np0005466031 nova_compute[235803]: 2025-10-02 12:49:12.981 2 DEBUG nova.storage.rbd_utils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:49:12 np0005466031 nova_compute[235803]: 2025-10-02 12:49:12.984 2 DEBUG oslo_concurrency.processutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:13.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.052 2 DEBUG oslo_concurrency.processutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.052 2 DEBUG oslo_concurrency.lockutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.053 2 DEBUG oslo_concurrency.lockutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.054 2 DEBUG oslo_concurrency.lockutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.077 2 DEBUG nova.storage.rbd_utils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.080 2 DEBUG oslo_concurrency.processutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 5bc565ce-21fe-4607-b264-009e95abac90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.573 2 DEBUG oslo_concurrency.processutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 5bc565ce-21fe-4607-b264-009e95abac90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.650 2 DEBUG nova.storage.rbd_utils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] resizing rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.757 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.758 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Ensure instance console log exists: /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.758 2 DEBUG oslo_concurrency.lockutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.758 2 DEBUG oslo_concurrency.lockutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.759 2 DEBUG oslo_concurrency.lockutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.760 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.763 2 WARNING nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.768 2 DEBUG nova.virt.libvirt.host [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.768 2 DEBUG nova.virt.libvirt.host [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.773 2 DEBUG nova.virt.libvirt.host [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.773 2 DEBUG nova.virt.libvirt.host [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.775 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.775 2 DEBUG nova.virt.hardware [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.776 2 DEBUG nova.virt.hardware [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.776 2 DEBUG nova.virt.hardware [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.776 2 DEBUG nova.virt.hardware [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.777 2 DEBUG nova.virt.hardware [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.777 2 DEBUG nova.virt.hardware [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.777 2 DEBUG nova.virt.hardware [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.777 2 DEBUG nova.virt.hardware [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.778 2 DEBUG nova.virt.hardware [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.778 2 DEBUG nova.virt.hardware [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.778 2 DEBUG nova.virt.hardware [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.779 2 DEBUG nova.objects.instance [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5bc565ce-21fe-4607-b264-009e95abac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:13 np0005466031 nova_compute[235803]: 2025-10-02 12:49:13.817 2 DEBUG oslo_concurrency.processutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:49:14 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3924265383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:49:14 np0005466031 nova_compute[235803]: 2025-10-02 12:49:14.285 2 DEBUG oslo_concurrency.processutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:14 np0005466031 nova_compute[235803]: 2025-10-02 12:49:14.308 2 DEBUG nova.storage.rbd_utils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:49:14 np0005466031 nova_compute[235803]: 2025-10-02 12:49:14.312 2 DEBUG oslo_concurrency.processutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:49:14 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2830854995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:49:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:14.725 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:49:14 np0005466031 nova_compute[235803]: 2025-10-02 12:49:14.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:14.726 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:49:14 np0005466031 nova_compute[235803]: 2025-10-02 12:49:14.742 2 DEBUG oslo_concurrency.processutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:14 np0005466031 nova_compute[235803]: 2025-10-02 12:49:14.745 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  <uuid>5bc565ce-21fe-4607-b264-009e95abac90</uuid>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  <name>instance-00000086</name>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerShowV257Test-server-1342522148</nova:name>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:49:13</nova:creationTime>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <nova:user uuid="30f1ea0145af4353ae1a243777d0e0d9">tempest-ServerShowV257Test-62462017-project-member</nova:user>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <nova:project uuid="272016746c594508b846776ac1682e86">tempest-ServerShowV257Test-62462017</nova:project>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <nova:ports/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <entry name="serial">5bc565ce-21fe-4607-b264-009e95abac90</entry>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <entry name="uuid">5bc565ce-21fe-4607-b264-009e95abac90</entry>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/5bc565ce-21fe-4607-b264-009e95abac90_disk">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/5bc565ce-21fe-4607-b264-009e95abac90_disk.config">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/console.log" append="off"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:49:14 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:49:14 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:49:14 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:49:14 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:49:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:14.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:14 np0005466031 nova_compute[235803]: 2025-10-02 12:49:14.834 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:14 np0005466031 nova_compute[235803]: 2025-10-02 12:49:14.835 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:14 np0005466031 nova_compute[235803]: 2025-10-02 12:49:14.835 2 INFO nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Using config drive#033[00m
Oct  2 08:49:14 np0005466031 nova_compute[235803]: 2025-10-02 12:49:14.874 2 DEBUG nova.storage.rbd_utils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:49:14 np0005466031 nova_compute[235803]: 2025-10-02 12:49:14.901 2 DEBUG nova.objects.instance [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5bc565ce-21fe-4607-b264-009e95abac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:14 np0005466031 nova_compute[235803]: 2025-10-02 12:49:14.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:14 np0005466031 nova_compute[235803]: 2025-10-02 12:49:14.960 2 DEBUG nova.objects.instance [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lazy-loading 'keypairs' on Instance uuid 5bc565ce-21fe-4607-b264-009e95abac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:15.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:15 np0005466031 nova_compute[235803]: 2025-10-02 12:49:15.303 2 INFO nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Creating config drive at /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/disk.config#033[00m
Oct  2 08:49:15 np0005466031 nova_compute[235803]: 2025-10-02 12:49:15.310 2 DEBUG oslo_concurrency.processutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb71ccr5g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:15 np0005466031 nova_compute[235803]: 2025-10-02 12:49:15.447 2 DEBUG oslo_concurrency.processutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb71ccr5g" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:15 np0005466031 nova_compute[235803]: 2025-10-02 12:49:15.477 2 DEBUG nova.storage.rbd_utils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] rbd image 5bc565ce-21fe-4607-b264-009e95abac90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:49:15 np0005466031 nova_compute[235803]: 2025-10-02 12:49:15.483 2 DEBUG oslo_concurrency.processutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/disk.config 5bc565ce-21fe-4607-b264-009e95abac90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:15 np0005466031 nova_compute[235803]: 2025-10-02 12:49:15.668 2 DEBUG oslo_concurrency.processutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/disk.config 5bc565ce-21fe-4607-b264-009e95abac90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:15 np0005466031 nova_compute[235803]: 2025-10-02 12:49:15.669 2 INFO nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Deleting local config drive /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90/disk.config because it was imported into RBD.#033[00m
Oct  2 08:49:15 np0005466031 systemd-machined[192227]: New machine qemu-58-instance-00000086.
Oct  2 08:49:15 np0005466031 systemd[1]: Started Virtual Machine qemu-58-instance-00000086.
Oct  2 08:49:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:49:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:16.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.890 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for 5bc565ce-21fe-4607-b264-009e95abac90 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.891 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409356.8903608, 5bc565ce-21fe-4607-b264-009e95abac90 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.892 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.895 2 DEBUG nova.compute.manager [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.896 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.899 2 INFO nova.virt.libvirt.driver [-] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Instance spawned successfully.#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.900 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.924 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.930 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.934 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.935 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.936 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.936 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.936 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.937 2 DEBUG nova.virt.libvirt.driver [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.966 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.967 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409356.8916664, 5bc565ce-21fe-4607-b264-009e95abac90 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.967 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] VM Started (Lifecycle Event)#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.993 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:49:16 np0005466031 nova_compute[235803]: 2025-10-02 12:49:16.997 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:49:17 np0005466031 nova_compute[235803]: 2025-10-02 12:49:17.020 2 DEBUG nova.compute.manager [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:49:17 np0005466031 nova_compute[235803]: 2025-10-02 12:49:17.027 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:49:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:17.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:17 np0005466031 nova_compute[235803]: 2025-10-02 12:49:17.083 2 DEBUG oslo_concurrency.lockutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:17 np0005466031 nova_compute[235803]: 2025-10-02 12:49:17.083 2 DEBUG oslo_concurrency.lockutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:17 np0005466031 nova_compute[235803]: 2025-10-02 12:49:17.083 2 DEBUG nova.objects.instance [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:49:17 np0005466031 nova_compute[235803]: 2025-10-02 12:49:17.143 2 DEBUG oslo_concurrency.lockutils [None req-2e737e2c-7ab4-4d86-8469-565563b7e84e 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:18 np0005466031 nova_compute[235803]: 2025-10-02 12:49:18.359 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409343.3581476, 12d6a33b-0e31-429b-8cb5-395f0571e11f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:49:18 np0005466031 nova_compute[235803]: 2025-10-02 12:49:18.360 2 INFO nova.compute.manager [-] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:49:18 np0005466031 nova_compute[235803]: 2025-10-02 12:49:18.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:18 np0005466031 nova_compute[235803]: 2025-10-02 12:49:18.450 2 DEBUG nova.compute.manager [None req-f21999ab-e132-4aae-8620-406651f8ed80 - - - - - -] [instance: 12d6a33b-0e31-429b-8cb5-395f0571e11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:49:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:18.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:18 np0005466031 nova_compute[235803]: 2025-10-02 12:49:18.960 2 DEBUG oslo_concurrency.lockutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Acquiring lock "5bc565ce-21fe-4607-b264-009e95abac90" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:18 np0005466031 nova_compute[235803]: 2025-10-02 12:49:18.961 2 DEBUG oslo_concurrency.lockutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "5bc565ce-21fe-4607-b264-009e95abac90" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:18 np0005466031 nova_compute[235803]: 2025-10-02 12:49:18.961 2 DEBUG oslo_concurrency.lockutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Acquiring lock "5bc565ce-21fe-4607-b264-009e95abac90-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:18 np0005466031 nova_compute[235803]: 2025-10-02 12:49:18.961 2 DEBUG oslo_concurrency.lockutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "5bc565ce-21fe-4607-b264-009e95abac90-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:18 np0005466031 nova_compute[235803]: 2025-10-02 12:49:18.961 2 DEBUG oslo_concurrency.lockutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "5bc565ce-21fe-4607-b264-009e95abac90-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:18 np0005466031 nova_compute[235803]: 2025-10-02 12:49:18.963 2 INFO nova.compute.manager [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Terminating instance#033[00m
Oct  2 08:49:18 np0005466031 nova_compute[235803]: 2025-10-02 12:49:18.964 2 DEBUG oslo_concurrency.lockutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Acquiring lock "refresh_cache-5bc565ce-21fe-4607-b264-009e95abac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:49:18 np0005466031 nova_compute[235803]: 2025-10-02 12:49:18.964 2 DEBUG oslo_concurrency.lockutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Acquired lock "refresh_cache-5bc565ce-21fe-4607-b264-009e95abac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:49:18 np0005466031 nova_compute[235803]: 2025-10-02 12:49:18.964 2 DEBUG nova.network.neutron [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:49:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:19.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:19 np0005466031 nova_compute[235803]: 2025-10-02 12:49:19.359 2 DEBUG nova.network.neutron [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:49:19 np0005466031 podman[295452]: 2025-10-02 12:49:19.65110266 +0000 UTC m=+0.072904651 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:49:19 np0005466031 podman[295453]: 2025-10-02 12:49:19.67643061 +0000 UTC m=+0.090851868 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct  2 08:49:19 np0005466031 nova_compute[235803]: 2025-10-02 12:49:19.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:20.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:21.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:21 np0005466031 nova_compute[235803]: 2025-10-02 12:49:21.075 2 DEBUG nova.network.neutron [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:49:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:21 np0005466031 nova_compute[235803]: 2025-10-02 12:49:21.215 2 DEBUG oslo_concurrency.lockutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Releasing lock "refresh_cache-5bc565ce-21fe-4607-b264-009e95abac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:49:21 np0005466031 nova_compute[235803]: 2025-10-02 12:49:21.216 2 DEBUG nova.compute.manager [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:49:21 np0005466031 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000086.scope: Deactivated successfully.
Oct  2 08:49:21 np0005466031 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000086.scope: Consumed 5.539s CPU time.
Oct  2 08:49:21 np0005466031 systemd-machined[192227]: Machine qemu-58-instance-00000086 terminated.
Oct  2 08:49:21 np0005466031 nova_compute[235803]: 2025-10-02 12:49:21.433 2 INFO nova.virt.libvirt.driver [-] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Instance destroyed successfully.#033[00m
Oct  2 08:49:21 np0005466031 nova_compute[235803]: 2025-10-02 12:49:21.434 2 DEBUG nova.objects.instance [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lazy-loading 'resources' on Instance uuid 5bc565ce-21fe-4607-b264-009e95abac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:21.729 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:22 np0005466031 nova_compute[235803]: 2025-10-02 12:49:22.226 2 INFO nova.virt.libvirt.driver [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Deleting instance files /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90_del#033[00m
Oct  2 08:49:22 np0005466031 nova_compute[235803]: 2025-10-02 12:49:22.227 2 INFO nova.virt.libvirt.driver [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Deletion of /var/lib/nova/instances/5bc565ce-21fe-4607-b264-009e95abac90_del complete#033[00m
Oct  2 08:49:22 np0005466031 nova_compute[235803]: 2025-10-02 12:49:22.412 2 INFO nova.compute.manager [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Took 1.20 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:49:22 np0005466031 nova_compute[235803]: 2025-10-02 12:49:22.412 2 DEBUG oslo.service.loopingcall [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:49:22 np0005466031 nova_compute[235803]: 2025-10-02 12:49:22.413 2 DEBUG nova.compute.manager [-] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:49:22 np0005466031 nova_compute[235803]: 2025-10-02 12:49:22.413 2 DEBUG nova.network.neutron [-] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:49:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:22.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:23 np0005466031 nova_compute[235803]: 2025-10-02 12:49:23.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:23 np0005466031 NetworkManager[44907]: <info>  [1759409363.0524] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Oct  2 08:49:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:23 np0005466031 NetworkManager[44907]: <info>  [1759409363.0536] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Oct  2 08:49:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:23.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:23 np0005466031 nova_compute[235803]: 2025-10-02 12:49:23.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:49:23Z|00495|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:49:23 np0005466031 nova_compute[235803]: 2025-10-02 12:49:23.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:23 np0005466031 nova_compute[235803]: 2025-10-02 12:49:23.384 2 DEBUG nova.network.neutron [-] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:49:23 np0005466031 nova_compute[235803]: 2025-10-02 12:49:23.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:23 np0005466031 nova_compute[235803]: 2025-10-02 12:49:23.406 2 DEBUG nova.network.neutron [-] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:49:23 np0005466031 nova_compute[235803]: 2025-10-02 12:49:23.433 2 INFO nova.compute.manager [-] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Took 1.02 seconds to deallocate network for instance.#033[00m
Oct  2 08:49:23 np0005466031 nova_compute[235803]: 2025-10-02 12:49:23.499 2 DEBUG oslo_concurrency.lockutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:23 np0005466031 nova_compute[235803]: 2025-10-02 12:49:23.500 2 DEBUG oslo_concurrency.lockutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:23 np0005466031 nova_compute[235803]: 2025-10-02 12:49:23.645 2 DEBUG oslo_concurrency.processutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:49:24 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/526865586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:49:24 np0005466031 nova_compute[235803]: 2025-10-02 12:49:24.158 2 DEBUG oslo_concurrency.processutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:24 np0005466031 nova_compute[235803]: 2025-10-02 12:49:24.165 2 DEBUG nova.compute.provider_tree [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:49:24 np0005466031 nova_compute[235803]: 2025-10-02 12:49:24.224 2 DEBUG nova.scheduler.client.report [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:49:24 np0005466031 nova_compute[235803]: 2025-10-02 12:49:24.275 2 DEBUG oslo_concurrency.lockutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:24 np0005466031 nova_compute[235803]: 2025-10-02 12:49:24.322 2 INFO nova.scheduler.client.report [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Deleted allocations for instance 5bc565ce-21fe-4607-b264-009e95abac90#033[00m
Oct  2 08:49:24 np0005466031 nova_compute[235803]: 2025-10-02 12:49:24.516 2 DEBUG oslo_concurrency.lockutils [None req-82d4031e-708f-4a8f-8be9-74cd7413db5a 30f1ea0145af4353ae1a243777d0e0d9 272016746c594508b846776ac1682e86 - - default default] Lock "5bc565ce-21fe-4607-b264-009e95abac90" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:24.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:24 np0005466031 nova_compute[235803]: 2025-10-02 12:49:24.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:25.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:25.859 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:25.859 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:25.860 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:26 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Oct  2 08:49:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:49:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:26.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:49:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:27.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:28 np0005466031 nova_compute[235803]: 2025-10-02 12:49:28.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:28.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:29.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:29 np0005466031 nova_compute[235803]: 2025-10-02 12:49:29.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:29 np0005466031 nova_compute[235803]: 2025-10-02 12:49:29.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:49:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:30.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:49:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:31.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:32.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:33.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:33 np0005466031 nova_compute[235803]: 2025-10-02 12:49:33.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:34.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:34 np0005466031 nova_compute[235803]: 2025-10-02 12:49:34.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:34 np0005466031 nova_compute[235803]: 2025-10-02 12:49:34.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:35.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:36 np0005466031 nova_compute[235803]: 2025-10-02 12:49:36.433 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409361.431926, 5bc565ce-21fe-4607-b264-009e95abac90 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:49:36 np0005466031 nova_compute[235803]: 2025-10-02 12:49:36.435 2 INFO nova.compute.manager [-] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:49:36 np0005466031 nova_compute[235803]: 2025-10-02 12:49:36.511 2 DEBUG nova.compute.manager [None req-ff491b08-07cc-46b5-a308-8042d93a34a6 - - - - - -] [instance: 5bc565ce-21fe-4607-b264-009e95abac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:49:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:36.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:37.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:38 np0005466031 nova_compute[235803]: 2025-10-02 12:49:38.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:38.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:39.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:39 np0005466031 nova_compute[235803]: 2025-10-02 12:49:39.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:40.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:41.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:41 np0005466031 podman[295620]: 2025-10-02 12:49:41.951225659 +0000 UTC m=+0.047151199 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 08:49:41 np0005466031 podman[295621]: 2025-10-02 12:49:41.985336432 +0000 UTC m=+0.078924205 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:49:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:42.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:43.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:43 np0005466031 nova_compute[235803]: 2025-10-02 12:49:43.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:49:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:49:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:49:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:49:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:49:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:49:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:49:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:49:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:44.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:44 np0005466031 nova_compute[235803]: 2025-10-02 12:49:44.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:45.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:46.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:47.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:48 np0005466031 nova_compute[235803]: 2025-10-02 12:49:48.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:48.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:49.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:49 np0005466031 nova_compute[235803]: 2025-10-02 12:49:49.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:49:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2086177414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:49:50 np0005466031 podman[295945]: 2025-10-02 12:49:50.617084713 +0000 UTC m=+0.047262203 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3)
Oct  2 08:49:50 np0005466031 podman[295944]: 2025-10-02 12:49:50.627496513 +0000 UTC m=+0.057611631 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:49:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:50.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:51.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:52 np0005466031 nova_compute[235803]: 2025-10-02 12:49:52.292 2 DEBUG nova.compute.manager [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Oct  2 08:49:52 np0005466031 nova_compute[235803]: 2025-10-02 12:49:52.483 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:52 np0005466031 nova_compute[235803]: 2025-10-02 12:49:52.484 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:52 np0005466031 nova_compute[235803]: 2025-10-02 12:49:52.525 2 DEBUG nova.objects.instance [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_requests' on Instance uuid 74af08e5-d1ea-478b-ace8-00363679ec4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:52 np0005466031 nova_compute[235803]: 2025-10-02 12:49:52.551 2 DEBUG nova.virt.hardware [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:49:52 np0005466031 nova_compute[235803]: 2025-10-02 12:49:52.551 2 INFO nova.compute.claims [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:49:52 np0005466031 nova_compute[235803]: 2025-10-02 12:49:52.552 2 DEBUG nova.objects.instance [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'resources' on Instance uuid 74af08e5-d1ea-478b-ace8-00363679ec4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:52 np0005466031 nova_compute[235803]: 2025-10-02 12:49:52.585 2 DEBUG nova.objects.instance [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_devices' on Instance uuid 74af08e5-d1ea-478b-ace8-00363679ec4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:52 np0005466031 nova_compute[235803]: 2025-10-02 12:49:52.796 2 INFO nova.compute.resource_tracker [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating resource usage from migration b5c6182c-8dff-40e7-8cd8-4a10f42c45a6#033[00m
Oct  2 08:49:52 np0005466031 nova_compute[235803]: 2025-10-02 12:49:52.797 2 DEBUG nova.compute.resource_tracker [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Starting to track incoming migration b5c6182c-8dff-40e7-8cd8-4a10f42c45a6 with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct  2 08:49:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:52.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:52 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:49:52 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:49:52 np0005466031 nova_compute[235803]: 2025-10-02 12:49:52.996 2 DEBUG oslo_concurrency.processutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:53.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:53 np0005466031 nova_compute[235803]: 2025-10-02 12:49:53.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:49:53 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3670348211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:49:53 np0005466031 nova_compute[235803]: 2025-10-02 12:49:53.434 2 DEBUG oslo_concurrency.processutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:53 np0005466031 nova_compute[235803]: 2025-10-02 12:49:53.438 2 DEBUG nova.compute.provider_tree [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:49:53 np0005466031 nova_compute[235803]: 2025-10-02 12:49:53.461 2 DEBUG nova.scheduler.client.report [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:49:53 np0005466031 nova_compute[235803]: 2025-10-02 12:49:53.489 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:53 np0005466031 nova_compute[235803]: 2025-10-02 12:49:53.490 2 INFO nova.compute.manager [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Migrating#033[00m
Oct  2 08:49:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:54.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:54 np0005466031 nova_compute[235803]: 2025-10-02 12:49:54.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:55.094 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:49:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:55.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:55 np0005466031 nova_compute[235803]: 2025-10-02 12:49:55.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:49:55.095 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:49:56 np0005466031 nova_compute[235803]: 2025-10-02 12:49:56.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:56 np0005466031 systemd[1]: Created slice User Slice of UID 42436.
Oct  2 08:49:56 np0005466031 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct  2 08:49:56 np0005466031 systemd-logind[786]: New session 56 of user nova.
Oct  2 08:49:56 np0005466031 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct  2 08:49:56 np0005466031 systemd[1]: Starting User Manager for UID 42436...
Oct  2 08:49:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:56.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:56 np0005466031 systemd[296065]: Queued start job for default target Main User Target.
Oct  2 08:49:56 np0005466031 systemd[296065]: Created slice User Application Slice.
Oct  2 08:49:56 np0005466031 systemd[296065]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:49:56 np0005466031 systemd[296065]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 08:49:56 np0005466031 systemd[296065]: Reached target Paths.
Oct  2 08:49:56 np0005466031 systemd[296065]: Reached target Timers.
Oct  2 08:49:56 np0005466031 systemd[296065]: Starting D-Bus User Message Bus Socket...
Oct  2 08:49:56 np0005466031 systemd[296065]: Starting Create User's Volatile Files and Directories...
Oct  2 08:49:56 np0005466031 systemd[296065]: Finished Create User's Volatile Files and Directories.
Oct  2 08:49:56 np0005466031 systemd[296065]: Listening on D-Bus User Message Bus Socket.
Oct  2 08:49:56 np0005466031 systemd[296065]: Reached target Sockets.
Oct  2 08:49:56 np0005466031 systemd[296065]: Reached target Basic System.
Oct  2 08:49:56 np0005466031 systemd[296065]: Reached target Main User Target.
Oct  2 08:49:56 np0005466031 systemd[296065]: Startup finished in 141ms.
Oct  2 08:49:56 np0005466031 systemd[1]: Started User Manager for UID 42436.
Oct  2 08:49:57 np0005466031 systemd[1]: Started Session 56 of User nova.
Oct  2 08:49:57 np0005466031 systemd[1]: session-56.scope: Deactivated successfully.
Oct  2 08:49:57 np0005466031 systemd-logind[786]: Session 56 logged out. Waiting for processes to exit.
Oct  2 08:49:57 np0005466031 systemd-logind[786]: Removed session 56.
Oct  2 08:49:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:49:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:57.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:49:57 np0005466031 systemd-logind[786]: New session 58 of user nova.
Oct  2 08:49:57 np0005466031 systemd[1]: Started Session 58 of User nova.
Oct  2 08:49:57 np0005466031 systemd[1]: session-58.scope: Deactivated successfully.
Oct  2 08:49:57 np0005466031 systemd-logind[786]: Session 58 logged out. Waiting for processes to exit.
Oct  2 08:49:57 np0005466031 systemd-logind[786]: Removed session 58.
Oct  2 08:49:58 np0005466031 nova_compute[235803]: 2025-10-02 12:49:58.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:58.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:49:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:59.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:59 np0005466031 nova_compute[235803]: 2025-10-02 12:49:59.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:00.097 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:00 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 08:50:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:00.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:01.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:01 np0005466031 nova_compute[235803]: 2025-10-02 12:50:01.514 2 DEBUG nova.compute.manager [req-1ed863ab-3e68-4e3f-836e-ae57b68221bd req-acb2f349-36bb-4e3e-93c4-fcc1e3951607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:01 np0005466031 nova_compute[235803]: 2025-10-02 12:50:01.515 2 DEBUG oslo_concurrency.lockutils [req-1ed863ab-3e68-4e3f-836e-ae57b68221bd req-acb2f349-36bb-4e3e-93c4-fcc1e3951607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:01 np0005466031 nova_compute[235803]: 2025-10-02 12:50:01.515 2 DEBUG oslo_concurrency.lockutils [req-1ed863ab-3e68-4e3f-836e-ae57b68221bd req-acb2f349-36bb-4e3e-93c4-fcc1e3951607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:01 np0005466031 nova_compute[235803]: 2025-10-02 12:50:01.515 2 DEBUG oslo_concurrency.lockutils [req-1ed863ab-3e68-4e3f-836e-ae57b68221bd req-acb2f349-36bb-4e3e-93c4-fcc1e3951607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:01 np0005466031 nova_compute[235803]: 2025-10-02 12:50:01.515 2 DEBUG nova.compute.manager [req-1ed863ab-3e68-4e3f-836e-ae57b68221bd req-acb2f349-36bb-4e3e-93c4-fcc1e3951607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:50:01 np0005466031 nova_compute[235803]: 2025-10-02 12:50:01.515 2 WARNING nova.compute.manager [req-1ed863ab-3e68-4e3f-836e-ae57b68221bd req-acb2f349-36bb-4e3e-93c4-fcc1e3951607 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state active and task_state resize_migrating.#033[00m
Oct  2 08:50:02 np0005466031 nova_compute[235803]: 2025-10-02 12:50:02.244 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:02 np0005466031 nova_compute[235803]: 2025-10-02 12:50:02.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:02.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:50:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.857 2 INFO nova.network.neutron [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.883 2 DEBUG nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.884 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.884 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.884 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.885 2 DEBUG nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.885 2 WARNING nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.957 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.958 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.958 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.959 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:50:03 np0005466031 nova_compute[235803]: 2025-10-02 12:50:03.959 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:50:04 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/836225235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:50:04 np0005466031 nova_compute[235803]: 2025-10-02 12:50:04.475 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:04.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:04 np0005466031 nova_compute[235803]: 2025-10-02 12:50:04.948 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:50:04 np0005466031 nova_compute[235803]: 2025-10-02 12:50:04.949 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:50:04 np0005466031 nova_compute[235803]: 2025-10-02 12:50:04.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:04 np0005466031 nova_compute[235803]: 2025-10-02 12:50:04.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:05.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.131 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.133 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4086MB free_disk=20.896709442138672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.133 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.134 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.452 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Applying migration context for instance 74af08e5-d1ea-478b-ace8-00363679ec4d as it has an incoming, in-progress migration b5c6182c-8dff-40e7-8cd8-4a10f42c45a6. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.454 2 INFO nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating resource usage from migration b5c6182c-8dff-40e7-8cd8-4a10f42c45a6#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.502 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 184f3992-03ad-4908-aeb5-b14e562fa846 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.502 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 74af08e5-d1ea-478b-ace8-00363679ec4d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.502 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.502 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.520 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.577 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.578 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.601 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.634 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:50:05 np0005466031 nova_compute[235803]: 2025-10-02 12:50:05.708 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:50:06 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1879367731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:50:06 np0005466031 nova_compute[235803]: 2025-10-02 12:50:06.182 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:06 np0005466031 nova_compute[235803]: 2025-10-02 12:50:06.188 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:50:06 np0005466031 nova_compute[235803]: 2025-10-02 12:50:06.254 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:06 np0005466031 nova_compute[235803]: 2025-10-02 12:50:06.255 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:06 np0005466031 nova_compute[235803]: 2025-10-02 12:50:06.255 2 DEBUG nova.network.neutron [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:50:06 np0005466031 nova_compute[235803]: 2025-10-02 12:50:06.257 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:50:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:06 np0005466031 nova_compute[235803]: 2025-10-02 12:50:06.391 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:50:06 np0005466031 nova_compute[235803]: 2025-10-02 12:50:06.391 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:06 np0005466031 nova_compute[235803]: 2025-10-02 12:50:06.520 2 DEBUG nova.compute.manager [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-changed-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:06 np0005466031 nova_compute[235803]: 2025-10-02 12:50:06.521 2 DEBUG nova.compute.manager [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Refreshing instance network info cache due to event network-changed-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:50:06 np0005466031 nova_compute[235803]: 2025-10-02 12:50:06.521 2 DEBUG oslo_concurrency.lockutils [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:06.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:07.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:07 np0005466031 systemd[1]: Stopping User Manager for UID 42436...
Oct  2 08:50:07 np0005466031 systemd[296065]: Activating special unit Exit the Session...
Oct  2 08:50:07 np0005466031 systemd[296065]: Stopped target Main User Target.
Oct  2 08:50:07 np0005466031 systemd[296065]: Stopped target Basic System.
Oct  2 08:50:07 np0005466031 systemd[296065]: Stopped target Paths.
Oct  2 08:50:07 np0005466031 systemd[296065]: Stopped target Sockets.
Oct  2 08:50:07 np0005466031 systemd[296065]: Stopped target Timers.
Oct  2 08:50:07 np0005466031 systemd[296065]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:50:07 np0005466031 systemd[296065]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 08:50:07 np0005466031 systemd[296065]: Closed D-Bus User Message Bus Socket.
Oct  2 08:50:07 np0005466031 systemd[296065]: Stopped Create User's Volatile Files and Directories.
Oct  2 08:50:07 np0005466031 systemd[296065]: Removed slice User Application Slice.
Oct  2 08:50:07 np0005466031 systemd[296065]: Reached target Shutdown.
Oct  2 08:50:07 np0005466031 systemd[296065]: Finished Exit the Session.
Oct  2 08:50:07 np0005466031 systemd[296065]: Reached target Exit the Session.
Oct  2 08:50:07 np0005466031 systemd[1]: user@42436.service: Deactivated successfully.
Oct  2 08:50:07 np0005466031 systemd[1]: Stopped User Manager for UID 42436.
Oct  2 08:50:07 np0005466031 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct  2 08:50:07 np0005466031 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct  2 08:50:07 np0005466031 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct  2 08:50:07 np0005466031 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct  2 08:50:07 np0005466031 systemd[1]: Removed slice User Slice of UID 42436.
Oct  2 08:50:07 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Oct  2 08:50:07 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:07.967912) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:50:07 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Oct  2 08:50:07 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409407967992, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 2222, "num_deletes": 260, "total_data_size": 5188109, "memory_usage": 5262640, "flush_reason": "Manual Compaction"}
Oct  2 08:50:07 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408128686, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 3348140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52563, "largest_seqno": 54780, "table_properties": {"data_size": 3339020, "index_size": 5614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19755, "raw_average_key_size": 20, "raw_value_size": 3320426, "raw_average_value_size": 3451, "num_data_blocks": 244, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409237, "oldest_key_time": 1759409237, "file_creation_time": 1759409407, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 160820 microseconds, and 7462 cpu microseconds.
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.128742) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 3348140 bytes OK
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.128769) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.297774) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.297859) EVENT_LOG_v1 {"time_micros": 1759409408297840, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.297913) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 5178067, prev total WAL file size 5178820, number of live WAL files 2.
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.303264) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373630' seq:72057594037927935, type:22 .. '6C6F676D0032303134' seq:0, type:0; will stop at (end)
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(3269KB)], [102(10MB)]
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408303338, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 14507401, "oldest_snapshot_seqno": -1}
Oct  2 08:50:08 np0005466031 nova_compute[235803]: 2025-10-02 12:50:08.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 8092 keys, 14348207 bytes, temperature: kUnknown
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408760167, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 14348207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14290912, "index_size": 35935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 208401, "raw_average_key_size": 25, "raw_value_size": 14143702, "raw_average_value_size": 1747, "num_data_blocks": 1427, "num_entries": 8092, "num_filter_entries": 8092, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759409408, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.760443) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 14348207 bytes
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.857337) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 31.8 rd, 31.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.6 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(8.6) write-amplify(4.3) OK, records in: 8632, records dropped: 540 output_compression: NoCompression
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.857381) EVENT_LOG_v1 {"time_micros": 1759409408857365, "job": 64, "event": "compaction_finished", "compaction_time_micros": 456925, "compaction_time_cpu_micros": 35998, "output_level": 6, "num_output_files": 1, "total_output_size": 14348207, "num_input_records": 8632, "num_output_records": 8092, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408858132, "job": 64, "event": "table_file_deletion", "file_number": 104}
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408860037, "job": 64, "event": "table_file_deletion", "file_number": 102}
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.303113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.860079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.860084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.860223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.860225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:08.860227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:08.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:09.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.496 2 DEBUG nova.network.neutron [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating instance_info_cache with network_info: [{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.532 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.537 2 DEBUG oslo_concurrency.lockutils [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.537 2 DEBUG nova.network.neutron [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Refreshing network info cache for port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.654 2 DEBUG os_brick.utils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.656 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.666 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.667 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5be89b-1a1d-4d5c-a75b-4122da7731d6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.668 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.676 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.676 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d587e6-2d59-47b5-bb69-3c8a0153bbc1]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.678 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.687 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.687 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[9301a4e0-571f-4e2b-8047-f6332c1a87d7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.688 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[520ada10-385a-486f-8d72-7efd7df0fbc5]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.689 2 DEBUG oslo_concurrency.processutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.739 2 DEBUG oslo_concurrency.processutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "nvme version" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.742 2 DEBUG os_brick.initiator.connectors.lightos [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.743 2 DEBUG os_brick.initiator.connectors.lightos [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.743 2 DEBUG os_brick.initiator.connectors.lightos [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.743 2 DEBUG os_brick.utils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] <== get_connector_properties: return (88ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:50:09 np0005466031 nova_compute[235803]: 2025-10-02 12:50:09.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:10 np0005466031 nova_compute[235803]: 2025-10-02 12:50:10.405 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:10 np0005466031 nova_compute[235803]: 2025-10-02 12:50:10.405 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:50:10 np0005466031 nova_compute[235803]: 2025-10-02 12:50:10.430 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:50:10 np0005466031 nova_compute[235803]: 2025-10-02 12:50:10.431 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:10 np0005466031 nova_compute[235803]: 2025-10-02 12:50:10.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:10 np0005466031 nova_compute[235803]: 2025-10-02 12:50:10.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:10 np0005466031 nova_compute[235803]: 2025-10-02 12:50:10.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:50:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:50:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:10.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:50:10 np0005466031 nova_compute[235803]: 2025-10-02 12:50:10.946 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct  2 08:50:10 np0005466031 nova_compute[235803]: 2025-10-02 12:50:10.947 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:50:10 np0005466031 nova_compute[235803]: 2025-10-02 12:50:10.948 2 INFO nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Creating image(s)#033[00m
Oct  2 08:50:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:50:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:11.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:50:11 np0005466031 nova_compute[235803]: 2025-10-02 12:50:11.116 2 DEBUG nova.storage.rbd_utils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] creating snapshot(nova-resize) on rbd image(74af08e5-d1ea-478b-ace8-00363679ec4d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:50:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:12 np0005466031 nova_compute[235803]: 2025-10-02 12:50:12.056 2 DEBUG nova.network.neutron [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updated VIF entry in instance network info cache for port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:12 np0005466031 nova_compute[235803]: 2025-10-02 12:50:12.057 2 DEBUG nova.network.neutron [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating instance_info_cache with network_info: [{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:12 np0005466031 nova_compute[235803]: 2025-10-02 12:50:12.153 2 DEBUG oslo_concurrency.lockutils [req-b349b913-87bc-42c9-8674-bcb500607372 req-b7a3510a-a576-4786-8fd1-c2f7dcea14a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e316 e316: 3 total, 3 up, 3 in
Oct  2 08:50:12 np0005466031 podman[296235]: 2025-10-02 12:50:12.671090106 +0000 UTC m=+0.087941185 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:50:12 np0005466031 podman[296236]: 2025-10-02 12:50:12.706773834 +0000 UTC m=+0.122832380 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:50:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:50:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:12.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:50:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:13.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:13 np0005466031 nova_compute[235803]: 2025-10-02 12:50:13.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:13 np0005466031 nova_compute[235803]: 2025-10-02 12:50:13.957 2 DEBUG nova.objects.instance [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 74af08e5-d1ea-478b-ace8-00363679ec4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.637 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.638 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Ensure instance console log exists: /var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.639 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.639 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.639 2 DEBUG oslo_concurrency.lockutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.642 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Start _get_guest_xml network_info=[{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:42:83:29"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ac1432a4-bab5-43b6-871c-71608985c7ae', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ac1432a4-bab5-43b6-871c-71608985c7ae', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '74af08e5-d1ea-478b-ace8-00363679ec4d', 'attached_at': '2025-10-02T12:50:10.000000', 'detached_at': '', 'volume_id': 'ac1432a4-bab5-43b6-871c-71608985c7ae', 'serial': 'ac1432a4-bab5-43b6-871c-71608985c7ae'}, 'attachment_id': 'e76d62b4-1ac3-4844-84aa-cf1f33360052', 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'guest_format': None, 'boot_index': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.646 2 WARNING nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.652 2 DEBUG nova.virt.libvirt.host [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.653 2 DEBUG nova.virt.libvirt.host [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.655 2 DEBUG nova.virt.libvirt.host [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.655 2 DEBUG nova.virt.libvirt.host [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.656 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.656 2 DEBUG nova.virt.hardware [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.657 2 DEBUG nova.virt.hardware [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.657 2 DEBUG nova.virt.hardware [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.657 2 DEBUG nova.virt.hardware [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.657 2 DEBUG nova.virt.hardware [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.658 2 DEBUG nova.virt.hardware [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.658 2 DEBUG nova.virt.hardware [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.658 2 DEBUG nova.virt.hardware [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.658 2 DEBUG nova.virt.hardware [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.658 2 DEBUG nova.virt.hardware [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.659 2 DEBUG nova.virt.hardware [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.659 2 DEBUG nova.objects.instance [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 74af08e5-d1ea-478b-ace8-00363679ec4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:14.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:14 np0005466031 nova_compute[235803]: 2025-10-02 12:50:14.992 2 DEBUG oslo_concurrency.processutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:15.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:50:15 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/550337763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:50:15 np0005466031 nova_compute[235803]: 2025-10-02 12:50:15.618 2 DEBUG oslo_concurrency.processutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:15 np0005466031 nova_compute[235803]: 2025-10-02 12:50:15.650 2 DEBUG oslo_concurrency.processutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:50:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2081917777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.124 2 DEBUG oslo_concurrency.processutils [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.247 2 DEBUG nova.virt.libvirt.vif [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:49:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460832742',display_name='tempest-ServerActionsTestOtherB-server-1460832742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460832742',id=135,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:49:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-u7yttwhb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=74af08e5-d1ea-478b-ace8-00363679ec4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:42:83:29"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.248 2 DEBUG nova.network.os_vif_util [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:42:83:29"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.250 2 DEBUG nova.network.os_vif_util [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.253 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  <uuid>74af08e5-d1ea-478b-ace8-00363679ec4d</uuid>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  <name>instance-00000087</name>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  <memory>196608</memory>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerActionsTestOtherB-server-1460832742</nova:name>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:50:14</nova:creationTime>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.micro">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <nova:memory>192</nova:memory>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <nova:user uuid="b5104e5372994cd19b720862cf1ca2ce">tempest-ServerActionsTestOtherB-858400398-project-member</nova:user>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <nova:project uuid="dbd0afdfb05849f9abfe4cd4454f6a13">tempest-ServerActionsTestOtherB-858400398</nova:project>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <nova:port uuid="97f82dce-0b1b-4848-bd1a-7ec40fbf49ae">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <entry name="serial">74af08e5-d1ea-478b-ace8-00363679ec4d</entry>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <entry name="uuid">74af08e5-d1ea-478b-ace8-00363679ec4d</entry>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/74af08e5-d1ea-478b-ace8-00363679ec4d_disk">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/74af08e5-d1ea-478b-ace8-00363679ec4d_disk.config">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-ac1432a4-bab5-43b6-871c-71608985c7ae">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <serial>ac1432a4-bab5-43b6-871c-71608985c7ae</serial>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:42:83:29"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <target dev="tap97f82dce-0b"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/74af08e5-d1ea-478b-ace8-00363679ec4d/console.log" append="off"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:50:16 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:50:16 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:50:16 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:50:16 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.255 2 DEBUG nova.virt.libvirt.vif [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:49:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460832742',display_name='tempest-ServerActionsTestOtherB-server-1460832742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460832742',id=135,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:49:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-u7yttwhb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=74af08e5-d1ea-478b-ace8-00363679ec4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:42:83:29"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.255 2 DEBUG nova.network.os_vif_util [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1350645832-network", "vif_mac": "fa:16:3e:42:83:29"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.256 2 DEBUG nova.network.os_vif_util [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.256 2 DEBUG os_vif [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.257 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.258 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.262 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97f82dce-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.263 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97f82dce-0b, col_values=(('external_ids', {'iface-id': '97f82dce-0b1b-4848-bd1a-7ec40fbf49ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:83:29', 'vm-uuid': '74af08e5-d1ea-478b-ace8-00363679ec4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:16 np0005466031 NetworkManager[44907]: <info>  [1759409416.3027] manager: (tap97f82dce-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.310 2 INFO os_vif [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b')#033[00m
Oct  2 08:50:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.707 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.708 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.708 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.708 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No VIF found with MAC fa:16:3e:42:83:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.709 2 INFO nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Using config drive#033[00m
Oct  2 08:50:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:16.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:16 np0005466031 kernel: tap97f82dce-0b: entered promiscuous mode
Oct  2 08:50:16 np0005466031 NetworkManager[44907]: <info>  [1759409416.9775] manager: (tap97f82dce-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/234)
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:16 np0005466031 ovn_controller[132413]: 2025-10-02T12:50:16Z|00496|binding|INFO|Claiming lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for this chassis.
Oct  2 08:50:16 np0005466031 ovn_controller[132413]: 2025-10-02T12:50:16Z|00497|binding|INFO|97f82dce-0b1b-4848-bd1a-7ec40fbf49ae: Claiming fa:16:3e:42:83:29 10.100.0.11
Oct  2 08:50:16 np0005466031 ovn_controller[132413]: 2025-10-02T12:50:16Z|00498|binding|INFO|Setting lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae ovn-installed in OVS
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:16 np0005466031 nova_compute[235803]: 2025-10-02 12:50:16.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:17 np0005466031 systemd-udevd[296411]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:50:17 np0005466031 ovn_controller[132413]: 2025-10-02T12:50:17Z|00499|binding|INFO|Setting lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae up in Southbound
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.008 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:83:29 10.100.0.11'], port_security=['fa:16:3e:42:83:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '74af08e5-d1ea-478b-ace8-00363679ec4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '6', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.010 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 bound to our chassis#033[00m
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.011 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4#033[00m
Oct  2 08:50:17 np0005466031 systemd-machined[192227]: New machine qemu-59-instance-00000087.
Oct  2 08:50:17 np0005466031 NetworkManager[44907]: <info>  [1759409417.0181] device (tap97f82dce-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:50:17 np0005466031 NetworkManager[44907]: <info>  [1759409417.0188] device (tap97f82dce-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:50:17 np0005466031 systemd[1]: Started Virtual Machine qemu-59-instance-00000087.
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.028 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[35eba92d-2b6f-46ff-b96b-6297beb76f94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.055 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e82a95-dc89-4bcc-bd2e-ec7924a3ce79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.059 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[34033969-0369-4b92-8293-fb41a1f91046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.087 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[821fcdae-7636-4600-93c7-3432662661de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.102 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[473f8b90-8ff5-4833-b289-6575921908e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701968, 'reachable_time': 20823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296426, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.115 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a51a627b-be95-4a02-97a5-115dc3636499]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9266ebd7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701980, 'tstamp': 701980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296427, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9266ebd7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701983, 'tstamp': 701983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296427, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.117 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:17 np0005466031 nova_compute[235803]: 2025-10-02 12:50:17.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:17 np0005466031 nova_compute[235803]: 2025-10-02 12:50:17.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.120 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9266ebd7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.120 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.120 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9266ebd7-30, col_values=(('external_ids', {'iface-id': '9fee59c9-e25a-4600-b33b-de655b7e8c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:17.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:17.121 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:50:17 np0005466031 nova_compute[235803]: 2025-10-02 12:50:17.478 2 DEBUG nova.compute.manager [req-34eb0c96-30fc-4c94-9771-82bef56e4e74 req-fe1f51a1-7928-402f-a18d-5e83f85d8c88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:17 np0005466031 nova_compute[235803]: 2025-10-02 12:50:17.478 2 DEBUG oslo_concurrency.lockutils [req-34eb0c96-30fc-4c94-9771-82bef56e4e74 req-fe1f51a1-7928-402f-a18d-5e83f85d8c88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:17 np0005466031 nova_compute[235803]: 2025-10-02 12:50:17.478 2 DEBUG oslo_concurrency.lockutils [req-34eb0c96-30fc-4c94-9771-82bef56e4e74 req-fe1f51a1-7928-402f-a18d-5e83f85d8c88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:17 np0005466031 nova_compute[235803]: 2025-10-02 12:50:17.479 2 DEBUG oslo_concurrency.lockutils [req-34eb0c96-30fc-4c94-9771-82bef56e4e74 req-fe1f51a1-7928-402f-a18d-5e83f85d8c88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:17 np0005466031 nova_compute[235803]: 2025-10-02 12:50:17.479 2 DEBUG nova.compute.manager [req-34eb0c96-30fc-4c94-9771-82bef56e4e74 req-fe1f51a1-7928-402f-a18d-5e83f85d8c88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:50:17 np0005466031 nova_compute[235803]: 2025-10-02 12:50:17.479 2 WARNING nova.compute.manager [req-34eb0c96-30fc-4c94-9771-82bef56e4e74 req-fe1f51a1-7928-402f-a18d-5e83f85d8c88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state active and task_state resize_finish.#033[00m
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.446 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409418.4457288, 74af08e5-d1ea-478b-ace8-00363679ec4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.446 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.449 2 DEBUG nova.compute.manager [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.452 2 INFO nova.virt.libvirt.driver [-] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Instance running successfully.#033[00m
Oct  2 08:50:18 np0005466031 virtqemud[235323]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.454 2 DEBUG nova.virt.libvirt.guest [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.454 2 DEBUG nova.virt.libvirt.driver [None req-ee41a13e-c1ed-4c47-b05f-91ffc79a6613 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.516 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.521 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.651 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.651 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409418.449269, 74af08e5-d1ea-478b-ace8-00363679ec4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.652 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] VM Started (Lifecycle Event)#033[00m
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.745 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.748 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:50:18 np0005466031 nova_compute[235803]: 2025-10-02 12:50:18.826 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 08:50:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:18.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:50:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:19.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:50:19 np0005466031 nova_compute[235803]: 2025-10-02 12:50:19.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:20 np0005466031 nova_compute[235803]: 2025-10-02 12:50:20.001 2 DEBUG nova.compute.manager [req-5af4dc6c-ca46-44ef-868f-49ba1630df44 req-2a101f68-e3c0-4d5b-8cba-d3ecbef03aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:20 np0005466031 nova_compute[235803]: 2025-10-02 12:50:20.001 2 DEBUG oslo_concurrency.lockutils [req-5af4dc6c-ca46-44ef-868f-49ba1630df44 req-2a101f68-e3c0-4d5b-8cba-d3ecbef03aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:20 np0005466031 nova_compute[235803]: 2025-10-02 12:50:20.001 2 DEBUG oslo_concurrency.lockutils [req-5af4dc6c-ca46-44ef-868f-49ba1630df44 req-2a101f68-e3c0-4d5b-8cba-d3ecbef03aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:20 np0005466031 nova_compute[235803]: 2025-10-02 12:50:20.002 2 DEBUG oslo_concurrency.lockutils [req-5af4dc6c-ca46-44ef-868f-49ba1630df44 req-2a101f68-e3c0-4d5b-8cba-d3ecbef03aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:20 np0005466031 nova_compute[235803]: 2025-10-02 12:50:20.002 2 DEBUG nova.compute.manager [req-5af4dc6c-ca46-44ef-868f-49ba1630df44 req-2a101f68-e3c0-4d5b-8cba-d3ecbef03aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:50:20 np0005466031 nova_compute[235803]: 2025-10-02 12:50:20.002 2 WARNING nova.compute.manager [req-5af4dc6c-ca46-44ef-868f-49ba1630df44 req-2a101f68-e3c0-4d5b-8cba-d3ecbef03aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state resized and task_state None.#033[00m
Oct  2 08:50:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:20.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:50:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:21.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:50:21 np0005466031 nova_compute[235803]: 2025-10-02 12:50:21.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:21 np0005466031 podman[296491]: 2025-10-02 12:50:21.638771303 +0000 UTC m=+0.065678993 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:50:21 np0005466031 podman[296490]: 2025-10-02 12:50:21.638955218 +0000 UTC m=+0.065571220 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:50:21 np0005466031 nova_compute[235803]: 2025-10-02 12:50:21.913 2 DEBUG nova.network.neutron [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae binding to destination host compute-2.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Oct  2 08:50:21 np0005466031 nova_compute[235803]: 2025-10-02 12:50:21.914 2 DEBUG oslo_concurrency.lockutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:21 np0005466031 nova_compute[235803]: 2025-10-02 12:50:21.914 2 DEBUG oslo_concurrency.lockutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:21 np0005466031 nova_compute[235803]: 2025-10-02 12:50:21.914 2 DEBUG nova.network.neutron [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:50:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:22.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:23.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:23 np0005466031 nova_compute[235803]: 2025-10-02 12:50:23.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:23 np0005466031 nova_compute[235803]: 2025-10-02 12:50:23.737 2 DEBUG nova.network.neutron [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating instance_info_cache with network_info: [{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:23 np0005466031 nova_compute[235803]: 2025-10-02 12:50:23.759 2 DEBUG oslo_concurrency.lockutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:23 np0005466031 kernel: tap97f82dce-0b (unregistering): left promiscuous mode
Oct  2 08:50:23 np0005466031 NetworkManager[44907]: <info>  [1759409423.9704] device (tap97f82dce-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:50:23 np0005466031 nova_compute[235803]: 2025-10-02 12:50:23.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:50:23Z|00500|binding|INFO|Releasing lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae from this chassis (sb_readonly=0)
Oct  2 08:50:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:50:23Z|00501|binding|INFO|Setting lport 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae down in Southbound
Oct  2 08:50:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:50:23Z|00502|binding|INFO|Removing iface tap97f82dce-0b ovn-installed in OVS
Oct  2 08:50:23 np0005466031 nova_compute[235803]: 2025-10-02 12:50:23.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:23.996 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:83:29 10.100.0.11'], port_security=['fa:16:3e:42:83:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '74af08e5-d1ea-478b-ace8-00363679ec4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '8', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:50:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:23.997 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 unbound from our chassis#033[00m
Oct  2 08:50:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:24.000 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:24.015 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe22278-212f-4459-b40e-7cf22338b116]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:24 np0005466031 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000087.scope: Deactivated successfully.
Oct  2 08:50:24 np0005466031 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000087.scope: Consumed 6.373s CPU time.
Oct  2 08:50:24 np0005466031 systemd-machined[192227]: Machine qemu-59-instance-00000087 terminated.
Oct  2 08:50:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:24.043 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b729c6-06d8-4a81-ba38-3d3e21a7b4b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:24.047 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a38240c0-1a3c-4ebe-b8b2-d5cf1829595e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:24.074 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[2d21fca3-6aab-4be6-ad10-89993470b17b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:24.091 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6310c9b4-0f33-4c40-bf95-1aee8f03c684]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701968, 'reachable_time': 20823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296543, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:24.106 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5da7e9-302c-421b-83df-85a2cc976a23]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9266ebd7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701980, 'tstamp': 701980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296544, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9266ebd7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701983, 'tstamp': 701983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296544, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:24.108 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:24.114 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9266ebd7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:24.114 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:50:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:24.115 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9266ebd7-30, col_values=(('external_ids', {'iface-id': '9fee59c9-e25a-4600-b33b-de655b7e8c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:24.115 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.210 2 INFO nova.virt.libvirt.driver [-] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Instance destroyed successfully.#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.211 2 DEBUG nova.objects.instance [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'resources' on Instance uuid 74af08e5-d1ea-478b-ace8-00363679ec4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.239 2 DEBUG nova.virt.libvirt.vif [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:49:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460832742',display_name='tempest-ServerActionsTestOtherB-server-1460832742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460832742',id=135,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-u7yttwhb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:50:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=74af08e5-d1ea-478b-ace8-00363679ec4d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.240 2 DEBUG nova.network.os_vif_util [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.240 2 DEBUG nova.network.os_vif_util [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.241 2 DEBUG os_vif [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.242 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97f82dce-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.249 2 INFO os_vif [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:83:29,bridge_name='br-int',has_traffic_filtering=True,id=97f82dce-0b1b-4848-bd1a-7ec40fbf49ae,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97f82dce-0b')#033[00m
Oct  2 08:50:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:50:24 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/221187614' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:50:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:50:24 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/221187614' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.899 2 DEBUG oslo_concurrency.lockutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.900 2 DEBUG oslo_concurrency.lockutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:50:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:24.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.931 2 DEBUG nova.objects.instance [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'migration_context' on Instance uuid 74af08e5-d1ea-478b-ace8-00363679ec4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:24 np0005466031 nova_compute[235803]: 2025-10-02 12:50:24.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.034 2 DEBUG oslo_concurrency.processutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:25.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:50:25 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1660636347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.464 2 DEBUG oslo_concurrency.processutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.471 2 DEBUG nova.compute.provider_tree [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.487 2 DEBUG nova.scheduler.client.report [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.588 2 DEBUG oslo_concurrency.lockutils [None req-41479478-a11c-429c-89cc-44c6b4498de3 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:25.860 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:25.861 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:25.861 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.951 2 DEBUG nova.compute.manager [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.951 2 DEBUG oslo_concurrency.lockutils [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.952 2 DEBUG oslo_concurrency.lockutils [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.952 2 DEBUG oslo_concurrency.lockutils [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.953 2 DEBUG nova.compute.manager [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.953 2 WARNING nova.compute.manager [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-unplugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.953 2 DEBUG nova.compute.manager [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.954 2 DEBUG oslo_concurrency.lockutils [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.954 2 DEBUG oslo_concurrency.lockutils [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.954 2 DEBUG oslo_concurrency.lockutils [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.955 2 DEBUG nova.compute.manager [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:50:25 np0005466031 nova_compute[235803]: 2025-10-02 12:50:25.955 2 WARNING nova.compute.manager [req-fea9e50a-b686-4073-be73-bed8697a6c81 req-bf49b5d9-b493-4053-870b-8748ce96fbb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:50:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:26.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:27.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.004526) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428004606, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 475, "num_deletes": 251, "total_data_size": 555858, "memory_usage": 564824, "flush_reason": "Manual Compaction"}
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428023574, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 366281, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54786, "largest_seqno": 55255, "table_properties": {"data_size": 363680, "index_size": 637, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6515, "raw_average_key_size": 19, "raw_value_size": 358421, "raw_average_value_size": 1051, "num_data_blocks": 28, "num_entries": 341, "num_filter_entries": 341, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409408, "oldest_key_time": 1759409408, "file_creation_time": 1759409428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 19098 microseconds, and 1782 cpu microseconds.
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.023620) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 366281 bytes OK
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.023640) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.084014) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.084060) EVENT_LOG_v1 {"time_micros": 1759409428084052, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.084081) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 552954, prev total WAL file size 552954, number of live WAL files 2.
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.084748) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(357KB)], [105(13MB)]
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428084784, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 14714488, "oldest_snapshot_seqno": -1}
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 7919 keys, 12844661 bytes, temperature: kUnknown
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428364845, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 12844661, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12789873, "index_size": 33876, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 205554, "raw_average_key_size": 25, "raw_value_size": 12646947, "raw_average_value_size": 1597, "num_data_blocks": 1333, "num_entries": 7919, "num_filter_entries": 7919, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759409428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.365255) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 12844661 bytes
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.370425) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 52.5 rd, 45.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.7 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(75.2) write-amplify(35.1) OK, records in: 8433, records dropped: 514 output_compression: NoCompression
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.370458) EVENT_LOG_v1 {"time_micros": 1759409428370445, "job": 66, "event": "compaction_finished", "compaction_time_micros": 280275, "compaction_time_cpu_micros": 29648, "output_level": 6, "num_output_files": 1, "total_output_size": 12844661, "num_input_records": 8433, "num_output_records": 7919, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428371182, "job": 66, "event": "table_file_deletion", "file_number": 107}
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428374319, "job": 66, "event": "table_file_deletion", "file_number": 105}
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.084642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.374467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.374472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.374474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.374475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:50:28.374477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:50:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:28.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:50:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:29.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:29 np0005466031 nova_compute[235803]: 2025-10-02 12:50:29.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:29 np0005466031 nova_compute[235803]: 2025-10-02 12:50:29.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:30 np0005466031 nova_compute[235803]: 2025-10-02 12:50:30.117 2 DEBUG nova.compute.manager [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-changed-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:30 np0005466031 nova_compute[235803]: 2025-10-02 12:50:30.118 2 DEBUG nova.compute.manager [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Refreshing instance network info cache due to event network-changed-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:50:30 np0005466031 nova_compute[235803]: 2025-10-02 12:50:30.118 2 DEBUG oslo_concurrency.lockutils [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:30 np0005466031 nova_compute[235803]: 2025-10-02 12:50:30.118 2 DEBUG oslo_concurrency.lockutils [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:30 np0005466031 nova_compute[235803]: 2025-10-02 12:50:30.118 2 DEBUG nova.network.neutron [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Refreshing network info cache for port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:50:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:50:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:50:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:31.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:32 np0005466031 nova_compute[235803]: 2025-10-02 12:50:32.221 2 DEBUG nova.network.neutron [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updated VIF entry in instance network info cache for port 97f82dce-0b1b-4848-bd1a-7ec40fbf49ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:32 np0005466031 nova_compute[235803]: 2025-10-02 12:50:32.222 2 DEBUG nova.network.neutron [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Updating instance_info_cache with network_info: [{"id": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "address": "fa:16:3e:42:83:29", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97f82dce-0b", "ovs_interfaceid": "97f82dce-0b1b-4848-bd1a-7ec40fbf49ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:32 np0005466031 nova_compute[235803]: 2025-10-02 12:50:32.247 2 DEBUG oslo_concurrency.lockutils [req-e26f5556-32c2-44a6-879e-4f0181761650 req-983a94dd-1c57-4945-992c-dc88da49f58a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-74af08e5-d1ea-478b-ace8-00363679ec4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:32.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:33.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:34 np0005466031 nova_compute[235803]: 2025-10-02 12:50:34.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:34.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:34 np0005466031 nova_compute[235803]: 2025-10-02 12:50:34.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:35.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e317 e317: 3 total, 3 up, 3 in
Oct  2 08:50:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:36.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:37.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:38.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:39 np0005466031 nova_compute[235803]: 2025-10-02 12:50:39.069 2 DEBUG nova.compute.manager [req-8589431c-cb62-4468-bde6-8e60770ae81d req-0a7c23da-e667-4131-b48b-fe8012b963a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:39 np0005466031 nova_compute[235803]: 2025-10-02 12:50:39.069 2 DEBUG oslo_concurrency.lockutils [req-8589431c-cb62-4468-bde6-8e60770ae81d req-0a7c23da-e667-4131-b48b-fe8012b963a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:39 np0005466031 nova_compute[235803]: 2025-10-02 12:50:39.069 2 DEBUG oslo_concurrency.lockutils [req-8589431c-cb62-4468-bde6-8e60770ae81d req-0a7c23da-e667-4131-b48b-fe8012b963a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:39 np0005466031 nova_compute[235803]: 2025-10-02 12:50:39.069 2 DEBUG oslo_concurrency.lockutils [req-8589431c-cb62-4468-bde6-8e60770ae81d req-0a7c23da-e667-4131-b48b-fe8012b963a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "74af08e5-d1ea-478b-ace8-00363679ec4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:39 np0005466031 nova_compute[235803]: 2025-10-02 12:50:39.069 2 DEBUG nova.compute.manager [req-8589431c-cb62-4468-bde6-8e60770ae81d req-0a7c23da-e667-4131-b48b-fe8012b963a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] No waiting events found dispatching network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:50:39 np0005466031 nova_compute[235803]: 2025-10-02 12:50:39.070 2 WARNING nova.compute.manager [req-8589431c-cb62-4468-bde6-8e60770ae81d req-0a7c23da-e667-4131-b48b-fe8012b963a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Received unexpected event network-vif-plugged-97f82dce-0b1b-4848-bd1a-7ec40fbf49ae for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:50:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:50:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:39.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:50:39 np0005466031 nova_compute[235803]: 2025-10-02 12:50:39.210 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409424.2086377, 74af08e5-d1ea-478b-ace8-00363679ec4d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:50:39 np0005466031 nova_compute[235803]: 2025-10-02 12:50:39.210 2 INFO nova.compute.manager [-] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:50:39 np0005466031 nova_compute[235803]: 2025-10-02 12:50:39.240 2 DEBUG nova.compute.manager [None req-4b38b3c7-7b32-4009-be8d-928d23c05c32 - - - - - -] [instance: 74af08e5-d1ea-478b-ace8-00363679ec4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:50:39 np0005466031 nova_compute[235803]: 2025-10-02 12:50:39.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:39 np0005466031 nova_compute[235803]: 2025-10-02 12:50:39.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:40.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:41.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:42 np0005466031 nova_compute[235803]: 2025-10-02 12:50:42.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:42.869 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:50:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:42.869 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:50:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:50:42.870 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:50:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:42.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:50:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:43.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:43 np0005466031 podman[296636]: 2025-10-02 12:50:43.655515889 +0000 UTC m=+0.083814806 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:50:43 np0005466031 podman[296635]: 2025-10-02 12:50:43.655298362 +0000 UTC m=+0.084196136 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:50:44 np0005466031 nova_compute[235803]: 2025-10-02 12:50:44.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:44.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:44 np0005466031 nova_compute[235803]: 2025-10-02 12:50:44.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:50:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:45.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:50:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:50:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:46.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:50:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:50:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:47.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:50:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e318 e318: 3 total, 3 up, 3 in
Oct  2 08:50:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:50:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:48.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:50:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:49.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:49 np0005466031 nova_compute[235803]: 2025-10-02 12:50:49.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:50 np0005466031 nova_compute[235803]: 2025-10-02 12:50:50.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:50:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:50.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:50:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:51.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:52 np0005466031 podman[296754]: 2025-10-02 12:50:52.513364562 +0000 UTC m=+0.054877032 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:50:52 np0005466031 podman[296755]: 2025-10-02 12:50:52.512841127 +0000 UTC m=+0.050464465 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 08:50:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:52.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:53.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:50:54 np0005466031 nova_compute[235803]: 2025-10-02 12:50:54.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:54.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:55 np0005466031 nova_compute[235803]: 2025-10-02 12:50:55.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:55.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:50:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:50:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:56.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:57.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:58 np0005466031 nova_compute[235803]: 2025-10-02 12:50:58.042 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:58 np0005466031 nova_compute[235803]: 2025-10-02 12:50:58.043 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:58 np0005466031 nova_compute[235803]: 2025-10-02 12:50:58.071 2 DEBUG nova.compute.manager [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:50:58 np0005466031 nova_compute[235803]: 2025-10-02 12:50:58.188 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:58 np0005466031 nova_compute[235803]: 2025-10-02 12:50:58.188 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:58 np0005466031 nova_compute[235803]: 2025-10-02 12:50:58.194 2 DEBUG nova.virt.hardware [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:50:58 np0005466031 nova_compute[235803]: 2025-10-02 12:50:58.194 2 INFO nova.compute.claims [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:50:58 np0005466031 nova_compute[235803]: 2025-10-02 12:50:58.328 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:58 np0005466031 nova_compute[235803]: 2025-10-02 12:50:58.658 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:50:58 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1132038165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:50:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:58.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:58 np0005466031 nova_compute[235803]: 2025-10-02 12:50:58.975 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:58 np0005466031 nova_compute[235803]: 2025-10-02 12:50:58.981 2 DEBUG nova.compute.provider_tree [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:50:58 np0005466031 nova_compute[235803]: 2025-10-02 12:50:58.996 2 DEBUG nova.scheduler.client.report [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.023 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.024 2 DEBUG nova.compute.manager [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.066 2 DEBUG nova.compute.manager [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.066 2 DEBUG nova.network.neutron [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.080 2 INFO nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.094 2 DEBUG nova.compute.manager [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.164 2 DEBUG nova.compute.manager [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.166 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.166 2 INFO nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Creating image(s)#033[00m
Oct  2 08:50:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:50:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:59.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.726 2 DEBUG nova.storage.rbd_utils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.759 2 DEBUG nova.storage.rbd_utils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.790 2 DEBUG nova.storage.rbd_utils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.794 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.829 2 DEBUG nova.policy [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b5104e5372994cd19b720862cf1ca2ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.873 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.874 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.874 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.875 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.903 2 DEBUG nova.storage.rbd_utils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:50:59 np0005466031 nova_compute[235803]: 2025-10-02 12:50:59.907 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a1440a2f-0663-451f-bef5-bbece30acc40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:00 np0005466031 nova_compute[235803]: 2025-10-02 12:51:00.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:00.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:01.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:01 np0005466031 nova_compute[235803]: 2025-10-02 12:51:01.342 2 DEBUG nova.network.neutron [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Successfully created port: d3265627-45dd-403c-990b-451562559afe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:51:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:01 np0005466031 nova_compute[235803]: 2025-10-02 12:51:01.826 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a1440a2f-0663-451f-bef5-bbece30acc40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.918s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:01 np0005466031 nova_compute[235803]: 2025-10-02 12:51:01.901 2 DEBUG nova.storage.rbd_utils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] resizing rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:51:02 np0005466031 nova_compute[235803]: 2025-10-02 12:51:02.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:02 np0005466031 nova_compute[235803]: 2025-10-02 12:51:02.934 2 DEBUG nova.network.neutron [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Successfully updated port: d3265627-45dd-403c-990b-451562559afe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:51:02 np0005466031 nova_compute[235803]: 2025-10-02 12:51:02.957 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:51:02 np0005466031 nova_compute[235803]: 2025-10-02 12:51:02.957 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:51:02 np0005466031 nova_compute[235803]: 2025-10-02 12:51:02.957 2 DEBUG nova.network.neutron [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:51:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:02.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.124 2 DEBUG nova.network.neutron [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.170 2 DEBUG nova.compute.manager [req-cadbbf09-90d5-43c9-942f-49aa353feedd req-ec873838-caa5-443b-85c4-5e1740fbbf41 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-changed-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.170 2 DEBUG nova.compute.manager [req-cadbbf09-90d5-43c9-942f-49aa353feedd req-ec873838-caa5-443b-85c4-5e1740fbbf41 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Refreshing instance network info cache due to event network-changed-d3265627-45dd-403c-990b-451562559afe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.171 2 DEBUG oslo_concurrency.lockutils [req-cadbbf09-90d5-43c9-942f-49aa353feedd req-ec873838-caa5-443b-85c4-5e1740fbbf41 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:51:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:03.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.546 2 DEBUG nova.objects.instance [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'migration_context' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.561 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.561 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Ensure instance console log exists: /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.562 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.562 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.563 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.974 2 DEBUG nova.network.neutron [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating instance_info_cache with network_info: [{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.994 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.994 2 DEBUG nova.compute.manager [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance network_info: |[{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.995 2 DEBUG oslo_concurrency.lockutils [req-cadbbf09-90d5-43c9-942f-49aa353feedd req-ec873838-caa5-443b-85c4-5e1740fbbf41 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.995 2 DEBUG nova.network.neutron [req-cadbbf09-90d5-43c9-942f-49aa353feedd req-ec873838-caa5-443b-85c4-5e1740fbbf41 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Refreshing network info cache for port d3265627-45dd-403c-990b-451562559afe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:51:03 np0005466031 nova_compute[235803]: 2025-10-02 12:51:03.997 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Start _get_guest_xml network_info=[{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.002 2 WARNING nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.006 2 DEBUG nova.virt.libvirt.host [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.006 2 DEBUG nova.virt.libvirt.host [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.009 2 DEBUG nova.virt.libvirt.host [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.009 2 DEBUG nova.virt.libvirt.host [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.010 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.010 2 DEBUG nova.virt.hardware [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.011 2 DEBUG nova.virt.hardware [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.011 2 DEBUG nova.virt.hardware [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.011 2 DEBUG nova.virt.hardware [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.011 2 DEBUG nova.virt.hardware [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.011 2 DEBUG nova.virt.hardware [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.012 2 DEBUG nova.virt.hardware [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.012 2 DEBUG nova.virt.hardware [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.012 2 DEBUG nova.virt.hardware [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.012 2 DEBUG nova.virt.hardware [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.012 2 DEBUG nova.virt.hardware [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.015 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:51:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:51:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:51:04 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1436376951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.828 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.813s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.877 2 DEBUG nova.storage.rbd_utils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:51:04 np0005466031 nova_compute[235803]: 2025-10-02 12:51:04.882 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:04.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:05.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:51:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2214410976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.589 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.591 2 DEBUG nova.virt.libvirt.vif [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:50:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1789493944',display_name='tempest-ServerActionsTestOtherB-server-1789493944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1789493944',id=138,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-0tg60q9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=a1440a2f-0663-451f-bef5-bbece30acc40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.591 2 DEBUG nova.network.os_vif_util [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.592 2 DEBUG nova.network.os_vif_util [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.593 2 DEBUG nova.objects.instance [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.621 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  <uuid>a1440a2f-0663-451f-bef5-bbece30acc40</uuid>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  <name>instance-0000008a</name>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerActionsTestOtherB-server-1789493944</nova:name>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:51:04</nova:creationTime>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <nova:user uuid="b5104e5372994cd19b720862cf1ca2ce">tempest-ServerActionsTestOtherB-858400398-project-member</nova:user>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <nova:project uuid="dbd0afdfb05849f9abfe4cd4454f6a13">tempest-ServerActionsTestOtherB-858400398</nova:project>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <nova:port uuid="d3265627-45dd-403c-990b-451562559afe">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <entry name="serial">a1440a2f-0663-451f-bef5-bbece30acc40</entry>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <entry name="uuid">a1440a2f-0663-451f-bef5-bbece30acc40</entry>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/a1440a2f-0663-451f-bef5-bbece30acc40_disk">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/a1440a2f-0663-451f-bef5-bbece30acc40_disk.config">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:a5:ff:5d"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <target dev="tapd3265627-45"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/console.log" append="off"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:51:05 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:51:05 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:51:05 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:51:05 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.623 2 DEBUG nova.compute.manager [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Preparing to wait for external event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.624 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.624 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.624 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.625 2 DEBUG nova.virt.libvirt.vif [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:50:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1789493944',display_name='tempest-ServerActionsTestOtherB-server-1789493944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1789493944',id=138,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-0tg60q9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=a1440a2f-0663-451f-bef5-bbece30acc40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.626 2 DEBUG nova.network.os_vif_util [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.626 2 DEBUG nova.network.os_vif_util [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.627 2 DEBUG os_vif [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.628 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.628 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3265627-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3265627-45, col_values=(('external_ids', {'iface-id': 'd3265627-45dd-403c-990b-451562559afe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:ff:5d', 'vm-uuid': 'a1440a2f-0663-451f-bef5-bbece30acc40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:05 np0005466031 NetworkManager[44907]: <info>  [1759409465.6356] manager: (tapd3265627-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.638 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.643 2 INFO os_vif [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45')#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.660 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.662 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.662 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.662 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.662 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.745 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.745 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.745 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No VIF found with MAC fa:16:3e:a5:ff:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.746 2 INFO nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Using config drive#033[00m
Oct  2 08:51:05 np0005466031 nova_compute[235803]: 2025-10-02 12:51:05.772 2 DEBUG nova.storage.rbd_utils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:51:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:51:06 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4258114913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:51:06 np0005466031 nova_compute[235803]: 2025-10-02 12:51:06.140 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:06 np0005466031 nova_compute[235803]: 2025-10-02 12:51:06.173 2 INFO nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Creating config drive at /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config#033[00m
Oct  2 08:51:06 np0005466031 nova_compute[235803]: 2025-10-02 12:51:06.179 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvytno1l0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:06 np0005466031 nova_compute[235803]: 2025-10-02 12:51:06.236 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:51:06 np0005466031 nova_compute[235803]: 2025-10-02 12:51:06.236 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:51:06 np0005466031 nova_compute[235803]: 2025-10-02 12:51:06.240 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:51:06 np0005466031 nova_compute[235803]: 2025-10-02 12:51:06.241 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:51:06 np0005466031 nova_compute[235803]: 2025-10-02 12:51:06.319 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvytno1l0" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:06.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.026 2 DEBUG nova.storage.rbd_utils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.029 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config a1440a2f-0663-451f-bef5-bbece30acc40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.163 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.164 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4006MB free_disk=20.880725860595703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.165 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.165 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:07.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.295 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 184f3992-03ad-4908-aeb5-b14e562fa846 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.296 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance a1440a2f-0663-451f-bef5-bbece30acc40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.296 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.297 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.424 2 DEBUG nova.network.neutron [req-cadbbf09-90d5-43c9-942f-49aa353feedd req-ec873838-caa5-443b-85c4-5e1740fbbf41 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updated VIF entry in instance network info cache for port d3265627-45dd-403c-990b-451562559afe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.425 2 DEBUG nova.network.neutron [req-cadbbf09-90d5-43c9-942f-49aa353feedd req-ec873838-caa5-443b-85c4-5e1740fbbf41 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating instance_info_cache with network_info: [{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.441 2 DEBUG oslo_concurrency.lockutils [req-cadbbf09-90d5-43c9-942f-49aa353feedd req-ec873838-caa5-443b-85c4-5e1740fbbf41 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.472 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.776 2 DEBUG oslo_concurrency.processutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config a1440a2f-0663-451f-bef5-bbece30acc40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.777 2 INFO nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Deleting local config drive /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config because it was imported into RBD.#033[00m
Oct  2 08:51:07 np0005466031 NetworkManager[44907]: <info>  [1759409467.8285] manager: (tapd3265627-45): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Oct  2 08:51:07 np0005466031 kernel: tapd3265627-45: entered promiscuous mode
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:51:07Z|00503|binding|INFO|Claiming lport d3265627-45dd-403c-990b-451562559afe for this chassis.
Oct  2 08:51:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:51:07Z|00504|binding|INFO|d3265627-45dd-403c-990b-451562559afe: Claiming fa:16:3e:a5:ff:5d 10.100.0.6
Oct  2 08:51:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:51:07Z|00505|binding|INFO|Setting lport d3265627-45dd-403c-990b-451562559afe ovn-installed in OVS
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:07 np0005466031 systemd-machined[192227]: New machine qemu-60-instance-0000008a.
Oct  2 08:51:07 np0005466031 systemd[1]: Started Virtual Machine qemu-60-instance-0000008a.
Oct  2 08:51:07 np0005466031 systemd-udevd[297373]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:51:07 np0005466031 ovn_controller[132413]: 2025-10-02T12:51:07Z|00506|binding|INFO|Setting lport d3265627-45dd-403c-990b-451562559afe up in Southbound
Oct  2 08:51:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:07.905 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:ff:5d 10.100.0.6'], port_security=['fa:16:3e:a5:ff:5d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a1440a2f-0663-451f-bef5-bbece30acc40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=d3265627-45dd-403c-990b-451562559afe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:51:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:07.906 141898 INFO neutron.agent.ovn.metadata.agent [-] Port d3265627-45dd-403c-990b-451562559afe in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 bound to our chassis#033[00m
Oct  2 08:51:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:07.908 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4#033[00m
Oct  2 08:51:07 np0005466031 NetworkManager[44907]: <info>  [1759409467.9163] device (tapd3265627-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:51:07 np0005466031 NetworkManager[44907]: <info>  [1759409467.9176] device (tapd3265627-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:51:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:07.925 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[235215e8-2aa5-4c39-ad19-b5135dddd7e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:51:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2420490110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:51:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:07.956 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f88853-0821-4eb0-9ca7-65130b703bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:07.959 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[643d5b4d-6fd9-4bff-bfbc-a300005a9210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.959 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.966 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:51:07 np0005466031 nova_compute[235803]: 2025-10-02 12:51:07.981 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:51:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:07.988 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[66997c59-d94a-425a-95f0-eb810f31980b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:08 np0005466031 nova_compute[235803]: 2025-10-02 12:51:08.006 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:51:08 np0005466031 nova_compute[235803]: 2025-10-02 12:51:08.006 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:08.006 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4451bb63-8acb-4321-9de3-6a156857c8ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701968, 'reachable_time': 20823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297389, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:08.022 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[981849d6-884b-46a6-8f3c-12ed5012d248]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9266ebd7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701980, 'tstamp': 701980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297390, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9266ebd7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701983, 'tstamp': 701983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297390, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:08.023 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:08 np0005466031 nova_compute[235803]: 2025-10-02 12:51:08.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:08 np0005466031 nova_compute[235803]: 2025-10-02 12:51:08.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:08.026 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9266ebd7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:08.026 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:51:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:08.027 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9266ebd7-30, col_values=(('external_ids', {'iface-id': '9fee59c9-e25a-4600-b33b-de655b7e8c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:08.027 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:51:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:08.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:09.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.263 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409469.262928, a1440a2f-0663-451f-bef5-bbece30acc40 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.264 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] VM Started (Lifecycle Event)#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.287 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.292 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409469.267308, a1440a2f-0663-451f-bef5-bbece30acc40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.292 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.313 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.318 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.338 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.687 2 DEBUG nova.compute.manager [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.687 2 DEBUG oslo_concurrency.lockutils [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.688 2 DEBUG oslo_concurrency.lockutils [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.688 2 DEBUG oslo_concurrency.lockutils [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.688 2 DEBUG nova.compute.manager [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Processing event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.688 2 DEBUG nova.compute.manager [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.689 2 DEBUG oslo_concurrency.lockutils [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.689 2 DEBUG oslo_concurrency.lockutils [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.689 2 DEBUG oslo_concurrency.lockutils [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.689 2 DEBUG nova.compute.manager [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] No waiting events found dispatching network-vif-plugged-d3265627-45dd-403c-990b-451562559afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.690 2 WARNING nova.compute.manager [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received unexpected event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.691 2 DEBUG nova.compute.manager [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.694 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409469.6938436, a1440a2f-0663-451f-bef5-bbece30acc40 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.694 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.695 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.698 2 INFO nova.virt.libvirt.driver [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance spawned successfully.#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.698 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.728 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.734 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.737 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.737 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.737 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.738 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.738 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.739 2 DEBUG nova.virt.libvirt.driver [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.798 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.846 2 INFO nova.compute.manager [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Took 10.68 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.846 2 DEBUG nova.compute.manager [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.905 2 INFO nova.compute.manager [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Took 11.75 seconds to build instance.#033[00m
Oct  2 08:51:09 np0005466031 nova_compute[235803]: 2025-10-02 12:51:09.920 2 DEBUG oslo_concurrency.lockutils [None req-8b89d8c8-6244-4099-b1b1-5e31becd39d2 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:10 np0005466031 nova_compute[235803]: 2025-10-02 12:51:10.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:10 np0005466031 nova_compute[235803]: 2025-10-02 12:51:10.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:10.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:11 np0005466031 nova_compute[235803]: 2025-10-02 12:51:11.005 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:11 np0005466031 nova_compute[235803]: 2025-10-02 12:51:11.005 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:51:11 np0005466031 nova_compute[235803]: 2025-10-02 12:51:11.006 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:51:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:11.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:11 np0005466031 nova_compute[235803]: 2025-10-02 12:51:11.508 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:51:11 np0005466031 nova_compute[235803]: 2025-10-02 12:51:11.508 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:51:11 np0005466031 nova_compute[235803]: 2025-10-02 12:51:11.508 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:51:11 np0005466031 nova_compute[235803]: 2025-10-02 12:51:11.509 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 184f3992-03ad-4908-aeb5-b14e562fa846 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:51:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:12.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:13 np0005466031 nova_compute[235803]: 2025-10-02 12:51:13.115 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Updating instance_info_cache with network_info: [{"id": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "address": "fa:16:3e:1e:bf:36", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc6c1baa-6e", "ovs_interfaceid": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:51:13 np0005466031 nova_compute[235803]: 2025-10-02 12:51:13.134 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:51:13 np0005466031 nova_compute[235803]: 2025-10-02 12:51:13.135 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:51:13 np0005466031 nova_compute[235803]: 2025-10-02 12:51:13.135 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:13 np0005466031 nova_compute[235803]: 2025-10-02 12:51:13.135 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:13 np0005466031 nova_compute[235803]: 2025-10-02 12:51:13.135 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:13 np0005466031 nova_compute[235803]: 2025-10-02 12:51:13.136 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:51:13 np0005466031 nova_compute[235803]: 2025-10-02 12:51:13.136 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:13.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:13 np0005466031 nova_compute[235803]: 2025-10-02 12:51:13.703 2 DEBUG nova.compute.manager [req-7a97ccfe-e1f1-46a1-9fb1-dacd5f8f23d7 req-01c85558-d301-4c31-a0ef-36428e71f15f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-changed-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:13 np0005466031 nova_compute[235803]: 2025-10-02 12:51:13.704 2 DEBUG nova.compute.manager [req-7a97ccfe-e1f1-46a1-9fb1-dacd5f8f23d7 req-01c85558-d301-4c31-a0ef-36428e71f15f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Refreshing instance network info cache due to event network-changed-d3265627-45dd-403c-990b-451562559afe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:51:13 np0005466031 nova_compute[235803]: 2025-10-02 12:51:13.704 2 DEBUG oslo_concurrency.lockutils [req-7a97ccfe-e1f1-46a1-9fb1-dacd5f8f23d7 req-01c85558-d301-4c31-a0ef-36428e71f15f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:51:13 np0005466031 nova_compute[235803]: 2025-10-02 12:51:13.704 2 DEBUG oslo_concurrency.lockutils [req-7a97ccfe-e1f1-46a1-9fb1-dacd5f8f23d7 req-01c85558-d301-4c31-a0ef-36428e71f15f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:51:13 np0005466031 nova_compute[235803]: 2025-10-02 12:51:13.705 2 DEBUG nova.network.neutron [req-7a97ccfe-e1f1-46a1-9fb1-dacd5f8f23d7 req-01c85558-d301-4c31-a0ef-36428e71f15f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Refreshing network info cache for port d3265627-45dd-403c-990b-451562559afe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:51:14 np0005466031 podman[297436]: 2025-10-02 12:51:14.67904296 +0000 UTC m=+0.097683265 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:51:14 np0005466031 podman[297437]: 2025-10-02 12:51:14.699567741 +0000 UTC m=+0.114335394 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:51:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:14.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:15 np0005466031 nova_compute[235803]: 2025-10-02 12:51:15.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:15.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:15 np0005466031 nova_compute[235803]: 2025-10-02 12:51:15.213 2 DEBUG nova.network.neutron [req-7a97ccfe-e1f1-46a1-9fb1-dacd5f8f23d7 req-01c85558-d301-4c31-a0ef-36428e71f15f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updated VIF entry in instance network info cache for port d3265627-45dd-403c-990b-451562559afe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:51:15 np0005466031 nova_compute[235803]: 2025-10-02 12:51:15.214 2 DEBUG nova.network.neutron [req-7a97ccfe-e1f1-46a1-9fb1-dacd5f8f23d7 req-01c85558-d301-4c31-a0ef-36428e71f15f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating instance_info_cache with network_info: [{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:51:15 np0005466031 nova_compute[235803]: 2025-10-02 12:51:15.235 2 DEBUG oslo_concurrency.lockutils [req-7a97ccfe-e1f1-46a1-9fb1-dacd5f8f23d7 req-01c85558-d301-4c31-a0ef-36428e71f15f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:51:15 np0005466031 nova_compute[235803]: 2025-10-02 12:51:15.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:51:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2108244912' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:51:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:16.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:17.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:18.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:20 np0005466031 nova_compute[235803]: 2025-10-02 12:51:20.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:20 np0005466031 nova_compute[235803]: 2025-10-02 12:51:20.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:20.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:21.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:22 np0005466031 podman[297483]: 2025-10-02 12:51:22.649527189 +0000 UTC m=+0.071340566 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2)
Oct  2 08:51:22 np0005466031 podman[297482]: 2025-10-02 12:51:22.653471793 +0000 UTC m=+0.075046403 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:51:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e319 e319: 3 total, 3 up, 3 in
Oct  2 08:51:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:23.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:23.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:24.295 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:51:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:24.295 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:51:24 np0005466031 nova_compute[235803]: 2025-10-02 12:51:24.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:24 np0005466031 nova_compute[235803]: 2025-10-02 12:51:24.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:25.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:25 np0005466031 nova_compute[235803]: 2025-10-02 12:51:25.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:25.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:25 np0005466031 nova_compute[235803]: 2025-10-02 12:51:25.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:25.861 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:25.862 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:25.862 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:27.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:27.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:27 np0005466031 ovn_controller[132413]: 2025-10-02T12:51:27Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:ff:5d 10.100.0.6
Oct  2 08:51:27 np0005466031 ovn_controller[132413]: 2025-10-02T12:51:27Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:ff:5d 10.100.0.6
Oct  2 08:51:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:29.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:51:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:29.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:51:29 np0005466031 ovn_controller[132413]: 2025-10-02T12:51:29Z|00507|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:51:29 np0005466031 nova_compute[235803]: 2025-10-02 12:51:29.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:30 np0005466031 nova_compute[235803]: 2025-10-02 12:51:30.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:51:30.297 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:30 np0005466031 nova_compute[235803]: 2025-10-02 12:51:30.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:30 np0005466031 nova_compute[235803]: 2025-10-02 12:51:30.646 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:30 np0005466031 nova_compute[235803]: 2025-10-02 12:51:30.646 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:51:30 np0005466031 nova_compute[235803]: 2025-10-02 12:51:30.662 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:51:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:31.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:31.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:31 np0005466031 nova_compute[235803]: 2025-10-02 12:51:31.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:31 np0005466031 nova_compute[235803]: 2025-10-02 12:51:31.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:51:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e320 e320: 3 total, 3 up, 3 in
Oct  2 08:51:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:33.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:33.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:33 np0005466031 ovn_controller[132413]: 2025-10-02T12:51:33Z|00508|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:51:33 np0005466031 nova_compute[235803]: 2025-10-02 12:51:33.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:35 np0005466031 nova_compute[235803]: 2025-10-02 12:51:35.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:35.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:35.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:35 np0005466031 nova_compute[235803]: 2025-10-02 12:51:35.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:37.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:37.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:39.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:39.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:40 np0005466031 nova_compute[235803]: 2025-10-02 12:51:40.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:40 np0005466031 nova_compute[235803]: 2025-10-02 12:51:40.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:41.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:41.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:43.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:43.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:45 np0005466031 nova_compute[235803]: 2025-10-02 12:51:45.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:45.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:45 np0005466031 podman[297607]: 2025-10-02 12:51:45.096259947 +0000 UTC m=+0.084632149 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:51:45 np0005466031 podman[297608]: 2025-10-02 12:51:45.09877922 +0000 UTC m=+0.083168507 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:51:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:45.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:45 np0005466031 nova_compute[235803]: 2025-10-02 12:51:45.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:47.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:47.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:49.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:49.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:50 np0005466031 nova_compute[235803]: 2025-10-02 12:51:50.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:50 np0005466031 nova_compute[235803]: 2025-10-02 12:51:50.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:50 np0005466031 nova_compute[235803]: 2025-10-02 12:51:50.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:51.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:51.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:52 np0005466031 ovn_controller[132413]: 2025-10-02T12:51:52Z|00509|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:51:52 np0005466031 nova_compute[235803]: 2025-10-02 12:51:52.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:53.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:53.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:53 np0005466031 podman[297682]: 2025-10-02 12:51:53.649578521 +0000 UTC m=+0.067717442 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:51:53 np0005466031 podman[297681]: 2025-10-02 12:51:53.67662315 +0000 UTC m=+0.089872020 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:51:54 np0005466031 nova_compute[235803]: 2025-10-02 12:51:54.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:55 np0005466031 nova_compute[235803]: 2025-10-02 12:51:55.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:55.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:55.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:55 np0005466031 nova_compute[235803]: 2025-10-02 12:51:55.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:57.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:57.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:59.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:51:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:51:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:59.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:51:59 np0005466031 nova_compute[235803]: 2025-10-02 12:51:59.650 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:00 np0005466031 nova_compute[235803]: 2025-10-02 12:52:00.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:00 np0005466031 nova_compute[235803]: 2025-10-02 12:52:00.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:01.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:01.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:02 np0005466031 nova_compute[235803]: 2025-10-02 12:52:02.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:03.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:03.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:03.408 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:52:03 np0005466031 nova_compute[235803]: 2025-10-02 12:52:03.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:03.409 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:52:03 np0005466031 nova_compute[235803]: 2025-10-02 12:52:03.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:03 np0005466031 ovn_controller[132413]: 2025-10-02T12:52:03Z|00510|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:52:03 np0005466031 nova_compute[235803]: 2025-10-02 12:52:03.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:05 np0005466031 nova_compute[235803]: 2025-10-02 12:52:05.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:05.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:05.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:05 np0005466031 nova_compute[235803]: 2025-10-02 12:52:05.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:05 np0005466031 nova_compute[235803]: 2025-10-02 12:52:05.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:05 np0005466031 nova_compute[235803]: 2025-10-02 12:52:05.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:05 np0005466031 nova_compute[235803]: 2025-10-02 12:52:05.664 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:05 np0005466031 nova_compute[235803]: 2025-10-02 12:52:05.665 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:05 np0005466031 nova_compute[235803]: 2025-10-02 12:52:05.665 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:05 np0005466031 nova_compute[235803]: 2025-10-02 12:52:05.665 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:52:05 np0005466031 nova_compute[235803]: 2025-10-02 12:52:05.665 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:52:06 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/723638720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.236 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.341 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.342 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.345 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.345 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.545 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.546 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3880MB free_disk=20.876129150390625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.547 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.547 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.638 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 184f3992-03ad-4908-aeb5-b14e562fa846 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.639 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance a1440a2f-0663-451f-bef5-bbece30acc40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.639 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.639 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:52:06 np0005466031 nova_compute[235803]: 2025-10-02 12:52:06.703 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:07.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:52:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:52:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:07.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e321 e321: 3 total, 3 up, 3 in
Oct  2 08:52:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.028775) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528028846, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1353, "num_deletes": 256, "total_data_size": 2866087, "memory_usage": 2911936, "flush_reason": "Manual Compaction"}
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1669602666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:52:08 np0005466031 nova_compute[235803]: 2025-10-02 12:52:08.051 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:08 np0005466031 nova_compute[235803]: 2025-10-02 12:52:08.057 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:52:08 np0005466031 nova_compute[235803]: 2025-10-02 12:52:08.092 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528096781, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 1204616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55261, "largest_seqno": 56608, "table_properties": {"data_size": 1199873, "index_size": 2139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12972, "raw_average_key_size": 21, "raw_value_size": 1189398, "raw_average_value_size": 1962, "num_data_blocks": 94, "num_entries": 606, "num_filter_entries": 606, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409428, "oldest_key_time": 1759409428, "file_creation_time": 1759409528, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 68223 microseconds, and 6591 cpu microseconds.
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.097007) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 1204616 bytes OK
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.097046) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.139257) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.139306) EVENT_LOG_v1 {"time_micros": 1759409528139295, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.139331) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 2859624, prev total WAL file size 2859624, number of live WAL files 2.
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.140297) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373533' seq:72057594037927935, type:22 .. '6D6772737461740032303037' seq:0, type:0; will stop at (end)
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(1176KB)], [108(12MB)]
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528140328, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 14049277, "oldest_snapshot_seqno": -1}
Oct  2 08:52:08 np0005466031 nova_compute[235803]: 2025-10-02 12:52:08.155 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:52:08 np0005466031 nova_compute[235803]: 2025-10-02 12:52:08.155 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 8038 keys, 10810495 bytes, temperature: kUnknown
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528301894, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 10810495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10758318, "index_size": 31009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20101, "raw_key_size": 208360, "raw_average_key_size": 25, "raw_value_size": 10616770, "raw_average_value_size": 1320, "num_data_blocks": 1215, "num_entries": 8038, "num_filter_entries": 8038, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759409528, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.302173) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10810495 bytes
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.319264) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.9 rd, 66.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 12.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(20.6) write-amplify(9.0) OK, records in: 8525, records dropped: 487 output_compression: NoCompression
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.319298) EVENT_LOG_v1 {"time_micros": 1759409528319286, "job": 68, "event": "compaction_finished", "compaction_time_micros": 161682, "compaction_time_cpu_micros": 26893, "output_level": 6, "num_output_files": 1, "total_output_size": 10810495, "num_input_records": 8525, "num_output_records": 8038, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528319932, "job": 68, "event": "table_file_deletion", "file_number": 110}
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528322470, "job": 68, "event": "table_file_deletion", "file_number": 108}
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.140181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.322610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.322616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.322618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.322620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:52:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:52:08.322622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:52:08 np0005466031 nova_compute[235803]: 2025-10-02 12:52:08.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:09.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:09.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:10 np0005466031 nova_compute[235803]: 2025-10-02 12:52:10.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:10.411 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:10 np0005466031 nova_compute[235803]: 2025-10-02 12:52:10.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:11.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:11 np0005466031 nova_compute[235803]: 2025-10-02 12:52:11.155 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:11 np0005466031 nova_compute[235803]: 2025-10-02 12:52:11.156 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:52:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:52:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:11.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:52:11 np0005466031 nova_compute[235803]: 2025-10-02 12:52:11.585 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:52:11 np0005466031 nova_compute[235803]: 2025-10-02 12:52:11.586 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:52:11 np0005466031 nova_compute[235803]: 2025-10-02 12:52:11.586 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:52:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e322 e322: 3 total, 3 up, 3 in
Oct  2 08:52:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:13.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:13.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e323 e323: 3 total, 3 up, 3 in
Oct  2 08:52:14 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:14 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:15 np0005466031 nova_compute[235803]: 2025-10-02 12:52:15.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:15.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:15.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:15 np0005466031 nova_compute[235803]: 2025-10-02 12:52:15.308 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating instance_info_cache with network_info: [{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:52:15 np0005466031 nova_compute[235803]: 2025-10-02 12:52:15.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:15 np0005466031 nova_compute[235803]: 2025-10-02 12:52:15.355 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:52:15 np0005466031 nova_compute[235803]: 2025-10-02 12:52:15.355 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:52:15 np0005466031 nova_compute[235803]: 2025-10-02 12:52:15.355 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:15 np0005466031 nova_compute[235803]: 2025-10-02 12:52:15.356 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:15 np0005466031 nova_compute[235803]: 2025-10-02 12:52:15.356 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:15 np0005466031 nova_compute[235803]: 2025-10-02 12:52:15.356 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:52:15 np0005466031 podman[298007]: 2025-10-02 12:52:15.632867701 +0000 UTC m=+0.060035870 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:52:15 np0005466031 podman[298008]: 2025-10-02 12:52:15.660153487 +0000 UTC m=+0.087146861 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct  2 08:52:15 np0005466031 nova_compute[235803]: 2025-10-02 12:52:15.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:15 np0005466031 ovn_controller[132413]: 2025-10-02T12:52:15Z|00511|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:52:15 np0005466031 nova_compute[235803]: 2025-10-02 12:52:15.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:17.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:17.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:19.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:52:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:19.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:52:20 np0005466031 nova_compute[235803]: 2025-10-02 12:52:20.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:20 np0005466031 nova_compute[235803]: 2025-10-02 12:52:20.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:21.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:21.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:23.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:52:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:23.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:52:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e324 e324: 3 total, 3 up, 3 in
Oct  2 08:52:24 np0005466031 podman[298061]: 2025-10-02 12:52:24.668970068 +0000 UTC m=+0.074910099 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct  2 08:52:24 np0005466031 podman[298060]: 2025-10-02 12:52:24.693628018 +0000 UTC m=+0.097476488 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct  2 08:52:25 np0005466031 nova_compute[235803]: 2025-10-02 12:52:25.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:25.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:25.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:25 np0005466031 nova_compute[235803]: 2025-10-02 12:52:25.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:25.862 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:25.862 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:25.863 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:27.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:27.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:29.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:29.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:30 np0005466031 nova_compute[235803]: 2025-10-02 12:52:30.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:30 np0005466031 nova_compute[235803]: 2025-10-02 12:52:30.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:30 np0005466031 nova_compute[235803]: 2025-10-02 12:52:30.831 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:31.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:31.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:33.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:33.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:35 np0005466031 nova_compute[235803]: 2025-10-02 12:52:35.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:35.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:35.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:35 np0005466031 nova_compute[235803]: 2025-10-02 12:52:35.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:37.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:37.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:39.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:39.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:40 np0005466031 nova_compute[235803]: 2025-10-02 12:52:40.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:40 np0005466031 nova_compute[235803]: 2025-10-02 12:52:40.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:41.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:41.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:43.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:43.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:45 np0005466031 nova_compute[235803]: 2025-10-02 12:52:45.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:45.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:45.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:45 np0005466031 nova_compute[235803]: 2025-10-02 12:52:45.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:45 np0005466031 nova_compute[235803]: 2025-10-02 12:52:45.889 2 DEBUG oslo_concurrency.lockutils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:45 np0005466031 nova_compute[235803]: 2025-10-02 12:52:45.890 2 DEBUG oslo_concurrency.lockutils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:45 np0005466031 nova_compute[235803]: 2025-10-02 12:52:45.890 2 INFO nova.compute.manager [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Shelving#033[00m
Oct  2 08:52:45 np0005466031 nova_compute[235803]: 2025-10-02 12:52:45.931 2 DEBUG nova.virt.libvirt.driver [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:52:46 np0005466031 podman[298213]: 2025-10-02 12:52:46.631339385 +0000 UTC m=+0.055287063 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 08:52:46 np0005466031 podman[298214]: 2025-10-02 12:52:46.658371944 +0000 UTC m=+0.079568873 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:52:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:47.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:47.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:48 np0005466031 nova_compute[235803]: 2025-10-02 12:52:48.849 2 DEBUG nova.compute.manager [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Oct  2 08:52:48 np0005466031 nova_compute[235803]: 2025-10-02 12:52:48.948 2 INFO nova.virt.libvirt.driver [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:52:48 np0005466031 kernel: tapd3265627-45 (unregistering): left promiscuous mode
Oct  2 08:52:48 np0005466031 NetworkManager[44907]: <info>  [1759409568.9797] device (tapd3265627-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:52:48 np0005466031 ovn_controller[132413]: 2025-10-02T12:52:48Z|00512|binding|INFO|Releasing lport d3265627-45dd-403c-990b-451562559afe from this chassis (sb_readonly=0)
Oct  2 08:52:48 np0005466031 ovn_controller[132413]: 2025-10-02T12:52:48Z|00513|binding|INFO|Setting lport d3265627-45dd-403c-990b-451562559afe down in Southbound
Oct  2 08:52:48 np0005466031 ovn_controller[132413]: 2025-10-02T12:52:48Z|00514|binding|INFO|Removing iface tapd3265627-45 ovn-installed in OVS
Oct  2 08:52:48 np0005466031 nova_compute[235803]: 2025-10-02 12:52:48.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.037 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:ff:5d 10.100.0.6'], port_security=['fa:16:3e:a5:ff:5d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a1440a2f-0663-451f-bef5-bbece30acc40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=d3265627-45dd-403c-990b-451562559afe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.038 141898 INFO neutron.agent.ovn.metadata.agent [-] Port d3265627-45dd-403c-990b-451562559afe in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 unbound from our chassis#033[00m
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.040 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4#033[00m
Oct  2 08:52:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.055 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5f417b8c-0846-48e9-ba96-c7b8a5ee64f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:49 np0005466031 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Oct  2 08:52:49 np0005466031 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000008a.scope: Consumed 17.610s CPU time.
Oct  2 08:52:49 np0005466031 systemd-machined[192227]: Machine qemu-60-instance-0000008a terminated.
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.084 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[83af2e05-02fd-46b9-b44d-58c191c11108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.087 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce118a6-931e-43ae-9be5-e6028308f6d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.100 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.101 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.112 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[5a38ad90-1658-48e5-95a3-5c6278a78e8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.130 2 DEBUG nova.objects.instance [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'pci_requests' on Instance uuid 063c7d3e-98b4-46a4-a75e-de10a2135604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.130 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5a53f6-79a4-47ba-869b-a55b81a46a07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701968, 'reachable_time': 20823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298270, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.146 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[becc40fe-4136-49ae-b492-136dc25f22a6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9266ebd7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701980, 'tstamp': 701980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298271, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9266ebd7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701983, 'tstamp': 701983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298271, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.148 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.150 2 DEBUG nova.virt.hardware [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.151 2 INFO nova.compute.claims [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.151 2 DEBUG nova.objects.instance [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'resources' on Instance uuid 063c7d3e-98b4-46a4-a75e-de10a2135604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.155 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9266ebd7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.156 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.156 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9266ebd7-30, col_values=(('external_ids', {'iface-id': '9fee59c9-e25a-4600-b33b-de655b7e8c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:49.157 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:52:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:49.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.190 2 DEBUG nova.objects.instance [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'numa_topology' on Instance uuid 063c7d3e-98b4-46a4-a75e-de10a2135604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.195 2 INFO nova.virt.libvirt.driver [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance destroyed successfully.#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.196 2 DEBUG nova.objects.instance [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'numa_topology' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.219 2 DEBUG nova.objects.instance [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 063c7d3e-98b4-46a4-a75e-de10a2135604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.284 2 INFO nova.compute.resource_tracker [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating resource usage from migration 5556a224-fcba-479b-9e8e-b1ada4008517#033[00m
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.285 2 DEBUG nova.compute.resource_tracker [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Starting to track incoming migration 5556a224-fcba-479b-9e8e-b1ada4008517 with flavor 99c52872-4e37-4be3-86cc-757b8f375aa8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct  2 08:52:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:49.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:49 np0005466031 nova_compute[235803]: 2025-10-02 12:52:49.740 2 DEBUG oslo_concurrency.processutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.097 2 DEBUG nova.compute.manager [req-42918f84-510f-42ab-8a15-03059d4ba3ba req-04997474-8f60-4269-bd5d-792e86e46aa4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-unplugged-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.098 2 DEBUG oslo_concurrency.lockutils [req-42918f84-510f-42ab-8a15-03059d4ba3ba req-04997474-8f60-4269-bd5d-792e86e46aa4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.098 2 DEBUG oslo_concurrency.lockutils [req-42918f84-510f-42ab-8a15-03059d4ba3ba req-04997474-8f60-4269-bd5d-792e86e46aa4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.099 2 DEBUG oslo_concurrency.lockutils [req-42918f84-510f-42ab-8a15-03059d4ba3ba req-04997474-8f60-4269-bd5d-792e86e46aa4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.099 2 DEBUG nova.compute.manager [req-42918f84-510f-42ab-8a15-03059d4ba3ba req-04997474-8f60-4269-bd5d-792e86e46aa4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] No waiting events found dispatching network-vif-unplugged-d3265627-45dd-403c-990b-451562559afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.099 2 WARNING nova.compute.manager [req-42918f84-510f-42ab-8a15-03059d4ba3ba req-04997474-8f60-4269-bd5d-792e86e46aa4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received unexpected event network-vif-unplugged-d3265627-45dd-403c-990b-451562559afe for instance with vm_state active and task_state shelving.#033[00m
Oct  2 08:52:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:52:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3424068853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.163 2 DEBUG oslo_concurrency.processutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.168 2 DEBUG nova.compute.provider_tree [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.198 2 DEBUG nova.scheduler.client.report [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.231 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.231 2 INFO nova.compute.manager [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Migrating#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.370 2 INFO nova.virt.libvirt.driver [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Beginning cold snapshot process#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.605 2 DEBUG nova.virt.libvirt.imagebackend [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:52:50 np0005466031 nova_compute[235803]: 2025-10-02 12:52:50.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:51.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:51.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:51 np0005466031 nova_compute[235803]: 2025-10-02 12:52:51.606 2 DEBUG nova.storage.rbd_utils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] creating snapshot(ffd6f5741193428588a5375da4612fe2) on rbd image(a1440a2f-0663-451f-bef5-bbece30acc40_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  2 08:52:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:52.333 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:52:52 np0005466031 nova_compute[235803]: 2025-10-02 12:52:52.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:52:52.334 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:52:52 np0005466031 nova_compute[235803]: 2025-10-02 12:52:52.579 2 DEBUG nova.compute.manager [req-6cb5bd71-b892-4721-93c6-87c676c9a556 req-2a76da45-83f8-4065-8da1-1cf298034d7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:52:52 np0005466031 nova_compute[235803]: 2025-10-02 12:52:52.579 2 DEBUG oslo_concurrency.lockutils [req-6cb5bd71-b892-4721-93c6-87c676c9a556 req-2a76da45-83f8-4065-8da1-1cf298034d7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:52:52 np0005466031 nova_compute[235803]: 2025-10-02 12:52:52.580 2 DEBUG oslo_concurrency.lockutils [req-6cb5bd71-b892-4721-93c6-87c676c9a556 req-2a76da45-83f8-4065-8da1-1cf298034d7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:52:52 np0005466031 nova_compute[235803]: 2025-10-02 12:52:52.580 2 DEBUG oslo_concurrency.lockutils [req-6cb5bd71-b892-4721-93c6-87c676c9a556 req-2a76da45-83f8-4065-8da1-1cf298034d7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:52:52 np0005466031 nova_compute[235803]: 2025-10-02 12:52:52.580 2 DEBUG nova.compute.manager [req-6cb5bd71-b892-4721-93c6-87c676c9a556 req-2a76da45-83f8-4065-8da1-1cf298034d7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] No waiting events found dispatching network-vif-plugged-d3265627-45dd-403c-990b-451562559afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:52:52 np0005466031 nova_compute[235803]: 2025-10-02 12:52:52.580 2 WARNING nova.compute.manager [req-6cb5bd71-b892-4721-93c6-87c676c9a556 req-2a76da45-83f8-4065-8da1-1cf298034d7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received unexpected event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe for instance with vm_state active and task_state shelving_image_uploading.
Oct  2 08:52:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e325 e325: 3 total, 3 up, 3 in
Oct  2 08:52:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:53.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:53.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:54 np0005466031 nova_compute[235803]: 2025-10-02 12:52:54.720 2 DEBUG nova.storage.rbd_utils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] cloning vms/a1440a2f-0663-451f-bef5-bbece30acc40_disk@ffd6f5741193428588a5375da4612fe2 to images/47596e8e-a667-4ff8-bd1f-3f35c36243ae clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct  2 08:52:54 np0005466031 nova_compute[235803]: 2025-10-02 12:52:54.845 2 DEBUG nova.storage.rbd_utils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] flattening images/47596e8e-a667-4ff8-bd1f-3f35c36243ae flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct  2 08:52:55 np0005466031 nova_compute[235803]: 2025-10-02 12:52:55.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:55.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:55.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:55 np0005466031 podman[298414]: 2025-10-02 12:52:55.652010432 +0000 UTC m=+0.068822204 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:52:55 np0005466031 podman[298415]: 2025-10-02 12:52:55.677538777 +0000 UTC m=+0.093669349 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 08:52:55 np0005466031 nova_compute[235803]: 2025-10-02 12:52:55.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:52:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:57.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:52:57 np0005466031 systemd-logind[786]: New session 59 of user nova.
Oct  2 08:52:57 np0005466031 systemd[1]: Created slice User Slice of UID 42436.
Oct  2 08:52:57 np0005466031 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct  2 08:52:57 np0005466031 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct  2 08:52:57 np0005466031 systemd[1]: Starting User Manager for UID 42436...
Oct  2 08:52:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:57.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:57 np0005466031 systemd[298460]: Queued start job for default target Main User Target.
Oct  2 08:52:57 np0005466031 systemd[298460]: Created slice User Application Slice.
Oct  2 08:52:57 np0005466031 systemd[298460]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:52:57 np0005466031 systemd[298460]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 08:52:57 np0005466031 systemd[298460]: Reached target Paths.
Oct  2 08:52:57 np0005466031 systemd[298460]: Reached target Timers.
Oct  2 08:52:57 np0005466031 systemd[298460]: Starting D-Bus User Message Bus Socket...
Oct  2 08:52:57 np0005466031 systemd[298460]: Starting Create User's Volatile Files and Directories...
Oct  2 08:52:57 np0005466031 systemd[298460]: Finished Create User's Volatile Files and Directories.
Oct  2 08:52:57 np0005466031 systemd[298460]: Listening on D-Bus User Message Bus Socket.
Oct  2 08:52:57 np0005466031 systemd[298460]: Reached target Sockets.
Oct  2 08:52:57 np0005466031 systemd[298460]: Reached target Basic System.
Oct  2 08:52:57 np0005466031 systemd[298460]: Reached target Main User Target.
Oct  2 08:52:57 np0005466031 systemd[298460]: Startup finished in 141ms.
Oct  2 08:52:57 np0005466031 systemd[1]: Started User Manager for UID 42436.
Oct  2 08:52:57 np0005466031 systemd[1]: Started Session 59 of User nova.
Oct  2 08:52:57 np0005466031 nova_compute[235803]: 2025-10-02 12:52:57.494 2 DEBUG nova.storage.rbd_utils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] removing snapshot(ffd6f5741193428588a5375da4612fe2) on rbd image(a1440a2f-0663-451f-bef5-bbece30acc40_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct  2 08:52:57 np0005466031 systemd[1]: session-59.scope: Deactivated successfully.
Oct  2 08:52:57 np0005466031 systemd-logind[786]: Session 59 logged out. Waiting for processes to exit.
Oct  2 08:52:57 np0005466031 systemd-logind[786]: Removed session 59.
Oct  2 08:52:57 np0005466031 systemd-logind[786]: New session 61 of user nova.
Oct  2 08:52:57 np0005466031 systemd[1]: Started Session 61 of User nova.
Oct  2 08:52:57 np0005466031 systemd[1]: session-61.scope: Deactivated successfully.
Oct  2 08:52:57 np0005466031 systemd-logind[786]: Session 61 logged out. Waiting for processes to exit.
Oct  2 08:52:57 np0005466031 systemd-logind[786]: Removed session 61.
Oct  2 08:52:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e326 e326: 3 total, 3 up, 3 in
Oct  2 08:52:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:59 np0005466031 nova_compute[235803]: 2025-10-02 12:52:59.082 2 DEBUG nova.storage.rbd_utils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] creating snapshot(snap) on rbd image(47596e8e-a667-4ff8-bd1f-3f35c36243ae) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  2 08:52:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:52:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:59.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:52:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:52:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:59.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:00 np0005466031 nova_compute[235803]: 2025-10-02 12:53:00.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e327 e327: 3 total, 3 up, 3 in
Oct  2 08:53:00 np0005466031 nova_compute[235803]: 2025-10-02 12:53:00.651 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:53:00 np0005466031 nova_compute[235803]: 2025-10-02 12:53:00.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:01.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:01.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:01.336 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:53:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:03.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:53:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:03.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.362 2 INFO nova.virt.libvirt.driver [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Snapshot image upload complete
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.363 2 DEBUG nova.compute.manager [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.458 2 INFO nova.compute.manager [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Shelve offloading
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.464 2 INFO nova.virt.libvirt.driver [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance destroyed successfully.
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.464 2 DEBUG nova.compute.manager [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.466 2 DEBUG oslo_concurrency.lockutils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.467 2 DEBUG oslo_concurrency.lockutils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.467 2 DEBUG nova.network.neutron [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.905 2 DEBUG nova.compute.manager [req-d8ce2839-54a6-4816-932f-45405812fa8b req-4c23b2c6-5323-4fb7-a83c-ef94679fcba7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-vif-unplugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.905 2 DEBUG oslo_concurrency.lockutils [req-d8ce2839-54a6-4816-932f-45405812fa8b req-4c23b2c6-5323-4fb7-a83c-ef94679fcba7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.906 2 DEBUG oslo_concurrency.lockutils [req-d8ce2839-54a6-4816-932f-45405812fa8b req-4c23b2c6-5323-4fb7-a83c-ef94679fcba7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.906 2 DEBUG oslo_concurrency.lockutils [req-d8ce2839-54a6-4816-932f-45405812fa8b req-4c23b2c6-5323-4fb7-a83c-ef94679fcba7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.906 2 DEBUG nova.compute.manager [req-d8ce2839-54a6-4816-932f-45405812fa8b req-4c23b2c6-5323-4fb7-a83c-ef94679fcba7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] No waiting events found dispatching network-vif-unplugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:53:03 np0005466031 nova_compute[235803]: 2025-10-02 12:53:03.906 2 WARNING nova.compute.manager [req-d8ce2839-54a6-4816-932f-45405812fa8b req-4c23b2c6-5323-4fb7-a83c-ef94679fcba7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received unexpected event network-vif-unplugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for instance with vm_state active and task_state resize_migrating.
Oct  2 08:53:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:04 np0005466031 nova_compute[235803]: 2025-10-02 12:53:04.193 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409569.1922252, a1440a2f-0663-451f-bef5-bbece30acc40 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:53:04 np0005466031 nova_compute[235803]: 2025-10-02 12:53:04.194 2 INFO nova.compute.manager [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] VM Stopped (Lifecycle Event)
Oct  2 08:53:04 np0005466031 nova_compute[235803]: 2025-10-02 12:53:04.225 2 DEBUG nova.compute.manager [None req-37e5c685-23fd-4027-b761-00b2f3ec9b82 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:53:04 np0005466031 nova_compute[235803]: 2025-10-02 12:53:04.227 2 DEBUG nova.compute.manager [None req-37e5c685-23fd-4027-b761-00b2f3ec9b82 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:53:04 np0005466031 nova_compute[235803]: 2025-10-02 12:53:04.268 2 INFO nova.compute.manager [None req-37e5c685-23fd-4027-b761-00b2f3ec9b82 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Oct  2 08:53:04 np0005466031 nova_compute[235803]: 2025-10-02 12:53:04.774 2 INFO nova.network.neutron [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct  2 08:53:05 np0005466031 nova_compute[235803]: 2025-10-02 12:53:05.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:05.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:05.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:05 np0005466031 nova_compute[235803]: 2025-10-02 12:53:05.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.017 2 DEBUG nova.network.neutron [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating instance_info_cache with network_info: [{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.061 2 DEBUG oslo_concurrency.lockutils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.097 2 DEBUG nova.compute.manager [req-25e4f108-e713-4413-a68c-a039546309be req-4ea997ad-97cd-43d1-964b-881bc570c9a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.097 2 DEBUG oslo_concurrency.lockutils [req-25e4f108-e713-4413-a68c-a039546309be req-4ea997ad-97cd-43d1-964b-881bc570c9a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.098 2 DEBUG oslo_concurrency.lockutils [req-25e4f108-e713-4413-a68c-a039546309be req-4ea997ad-97cd-43d1-964b-881bc570c9a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.098 2 DEBUG oslo_concurrency.lockutils [req-25e4f108-e713-4413-a68c-a039546309be req-4ea997ad-97cd-43d1-964b-881bc570c9a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.098 2 DEBUG nova.compute.manager [req-25e4f108-e713-4413-a68c-a039546309be req-4ea997ad-97cd-43d1-964b-881bc570c9a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] No waiting events found dispatching network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.098 2 WARNING nova.compute.manager [req-25e4f108-e713-4413-a68c-a039546309be req-4ea997ad-97cd-43d1-964b-881bc570c9a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received unexpected event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.283 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.283 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquired lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.284 2 DEBUG nova.network.neutron [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.470 2 DEBUG nova.compute.manager [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-changed-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.470 2 DEBUG nova.compute.manager [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Refreshing instance network info cache due to event network-changed-3b7a0e63-af58-4d73-8bc7-684e63bb5e96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.470 2 DEBUG oslo_concurrency.lockutils [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:06 np0005466031 nova_compute[235803]: 2025-10-02 12:53:06.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:07.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:53:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:07.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.538 2 INFO nova.virt.libvirt.driver [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance destroyed successfully.#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.539 2 DEBUG nova.objects.instance [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'resources' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.557 2 DEBUG nova.virt.libvirt.vif [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:50:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1789493944',display_name='tempest-ServerActionsTestOtherB-server-1789493944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1789493944',id=138,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:51:09Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-0tg60q9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member',shelved_at='2025-10-02T12:53:03.363111',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='47596e8e-a667-4ff8-bd1f-3f35c36243ae'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=a1440a2f-0663-451f-bef5-bbece30acc40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": 
"9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.558 2 DEBUG nova.network.os_vif_util [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.558 2 DEBUG nova.network.os_vif_util [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.559 2 DEBUG os_vif [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.562 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3265627-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.568 2 INFO os_vif [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45')#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.673 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.673 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.673 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.673 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.673 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:07 np0005466031 systemd[1]: Stopping User Manager for UID 42436...
Oct  2 08:53:07 np0005466031 systemd[298460]: Activating special unit Exit the Session...
Oct  2 08:53:07 np0005466031 systemd[298460]: Stopped target Main User Target.
Oct  2 08:53:07 np0005466031 systemd[298460]: Stopped target Basic System.
Oct  2 08:53:07 np0005466031 systemd[298460]: Stopped target Paths.
Oct  2 08:53:07 np0005466031 systemd[298460]: Stopped target Sockets.
Oct  2 08:53:07 np0005466031 systemd[298460]: Stopped target Timers.
Oct  2 08:53:07 np0005466031 systemd[298460]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:53:07 np0005466031 systemd[298460]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 08:53:07 np0005466031 systemd[298460]: Closed D-Bus User Message Bus Socket.
Oct  2 08:53:07 np0005466031 systemd[298460]: Stopped Create User's Volatile Files and Directories.
Oct  2 08:53:07 np0005466031 systemd[298460]: Removed slice User Application Slice.
Oct  2 08:53:07 np0005466031 systemd[298460]: Reached target Shutdown.
Oct  2 08:53:07 np0005466031 systemd[298460]: Finished Exit the Session.
Oct  2 08:53:07 np0005466031 systemd[298460]: Reached target Exit the Session.
Oct  2 08:53:07 np0005466031 systemd[1]: user@42436.service: Deactivated successfully.
Oct  2 08:53:07 np0005466031 systemd[1]: Stopped User Manager for UID 42436.
Oct  2 08:53:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e328 e328: 3 total, 3 up, 3 in
Oct  2 08:53:07 np0005466031 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct  2 08:53:07 np0005466031 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct  2 08:53:07 np0005466031 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct  2 08:53:07 np0005466031 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct  2 08:53:07 np0005466031 systemd[1]: Removed slice User Slice of UID 42436.
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.879 2 DEBUG nova.compute.manager [req-081f125d-87fd-4331-a438-64bc752d9e6c req-7d6aaf14-2b06-46b3-a9cf-dd79f68196b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-changed-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.879 2 DEBUG nova.compute.manager [req-081f125d-87fd-4331-a438-64bc752d9e6c req-7d6aaf14-2b06-46b3-a9cf-dd79f68196b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Refreshing instance network info cache due to event network-changed-d3265627-45dd-403c-990b-451562559afe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.880 2 DEBUG oslo_concurrency.lockutils [req-081f125d-87fd-4331-a438-64bc752d9e6c req-7d6aaf14-2b06-46b3-a9cf-dd79f68196b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.880 2 DEBUG oslo_concurrency.lockutils [req-081f125d-87fd-4331-a438-64bc752d9e6c req-7d6aaf14-2b06-46b3-a9cf-dd79f68196b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:07 np0005466031 nova_compute[235803]: 2025-10-02 12:53:07.880 2 DEBUG nova.network.neutron [req-081f125d-87fd-4331-a438-64bc752d9e6c req-7d6aaf14-2b06-46b3-a9cf-dd79f68196b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Refreshing network info cache for port d3265627-45dd-403c-990b-451562559afe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:53:08 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1871141040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.199 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.456 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.456 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.462 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.462 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.637 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.638 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4029MB free_disk=20.80624771118164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.639 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.639 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.709 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Applying migration context for instance 063c7d3e-98b4-46a4-a75e-de10a2135604 as it has an incoming, in-progress migration 5556a224-fcba-479b-9e8e-b1ada4008517. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.710 2 INFO nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating resource usage from migration 5556a224-fcba-479b-9e8e-b1ada4008517#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.795 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 184f3992-03ad-4908-aeb5-b14e562fa846 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.795 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance a1440a2f-0663-451f-bef5-bbece30acc40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.795 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 063c7d3e-98b4-46a4-a75e-de10a2135604 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.796 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.796 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.848 2 INFO nova.virt.libvirt.driver [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Deleting instance files /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40_del#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.849 2 INFO nova.virt.libvirt.driver [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Deletion of /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40_del complete#033[00m
Oct  2 08:53:08 np0005466031 nova_compute[235803]: 2025-10-02 12:53:08.993 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.038 2 INFO nova.scheduler.client.report [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Deleted allocations for instance a1440a2f-0663-451f-bef5-bbece30acc40#033[00m
Oct  2 08:53:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.144 2 DEBUG oslo_concurrency.lockutils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.151 2 DEBUG nova.network.neutron [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating instance_info_cache with network_info: [{"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.177 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Releasing lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.181 2 DEBUG oslo_concurrency.lockutils [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.181 2 DEBUG nova.network.neutron [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Refreshing network info cache for port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:53:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:09.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.260 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.263 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.263 2 INFO nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Creating image(s)#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.304 2 DEBUG nova.storage.rbd_utils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] creating snapshot(nova-resize) on rbd image(063c7d3e-98b4-46a4-a75e-de10a2135604_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:53:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:09.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:53:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2854629276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.495 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.502 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.519 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.544 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.545 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.546 2 DEBUG oslo_concurrency.lockutils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:09 np0005466031 nova_compute[235803]: 2025-10-02 12:53:09.628 2 DEBUG oslo_concurrency.processutils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e329 e329: 3 total, 3 up, 3 in
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:53:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3794160655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.133 2 DEBUG oslo_concurrency.processutils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.138 2 DEBUG nova.compute.provider_tree [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.173 2 DEBUG nova.scheduler.client.report [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.211 2 DEBUG oslo_concurrency.lockutils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.220 2 DEBUG nova.objects.instance [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 063c7d3e-98b4-46a4-a75e-de10a2135604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.313 2 DEBUG oslo_concurrency.lockutils [None req-442041c7-2f3c-4d37-bb15-1a4561715764 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 24.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.346 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.347 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Ensure instance console log exists: /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.347 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.348 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.348 2 DEBUG oslo_concurrency.lockutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.350 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Start _get_guest_xml network_info=[{"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--631630504", "vif_mac": "fa:16:3e:f0:dd:e3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.354 2 WARNING nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.357 2 DEBUG nova.virt.libvirt.host [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.358 2 DEBUG nova.virt.libvirt.host [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.360 2 DEBUG nova.virt.libvirt.host [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.360 2 DEBUG nova.virt.libvirt.host [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.361 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.361 2 DEBUG nova.virt.hardware [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.362 2 DEBUG nova.virt.hardware [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.362 2 DEBUG nova.virt.hardware [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.362 2 DEBUG nova.virt.hardware [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.362 2 DEBUG nova.virt.hardware [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.363 2 DEBUG nova.virt.hardware [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.363 2 DEBUG nova.virt.hardware [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.363 2 DEBUG nova.virt.hardware [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.363 2 DEBUG nova.virt.hardware [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.363 2 DEBUG nova.virt.hardware [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.364 2 DEBUG nova.virt.hardware [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.364 2 DEBUG nova.objects.instance [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 063c7d3e-98b4-46a4-a75e-de10a2135604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.379 2 DEBUG oslo_concurrency.processutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:53:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2288584448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.806 2 DEBUG oslo_concurrency.processutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:10 np0005466031 nova_compute[235803]: 2025-10-02 12:53:10.838 2 DEBUG oslo_concurrency.processutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:11.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:53:11 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/301379997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.267 2 DEBUG oslo_concurrency.processutils [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.269 2 DEBUG nova.virt.libvirt.vif [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:51:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-657670950',display_name='tempest-TestNetworkAdvancedServerOps-server-657670950',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-657670950',id=141,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJUsVCzPVQ4EYUnLe4xVecX/G7C+Cia09idavSODc4ZN//6Cqf+a8ivFPaF6ii5km7SztqC4ETT2rQva0v04xuCgbV1S1NVEoEr76v1/FpEPV08UhMxhurTufTiANa0c8g==',key_name='tempest-TestNetworkAdvancedServerOps-2083529481',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-kcomn0fd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:53:04Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=063c7d3e-98b4-46a4-a75e-de10a2135604,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--631630504", "vif_mac": "fa:16:3e:f0:dd:e3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.270 2 DEBUG nova.network.os_vif_util [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converting VIF {"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--631630504", "vif_mac": "fa:16:3e:f0:dd:e3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.270 2 DEBUG nova.network.os_vif_util [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.273 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  <uuid>063c7d3e-98b4-46a4-a75e-de10a2135604</uuid>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  <name>instance-0000008d</name>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-657670950</nova:name>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:53:10</nova:creationTime>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <nova:port uuid="3b7a0e63-af58-4d73-8bc7-684e63bb5e96">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <entry name="serial">063c7d3e-98b4-46a4-a75e-de10a2135604</entry>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <entry name="uuid">063c7d3e-98b4-46a4-a75e-de10a2135604</entry>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/063c7d3e-98b4-46a4-a75e-de10a2135604_disk">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/063c7d3e-98b4-46a4-a75e-de10a2135604_disk.config">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:f0:dd:e3"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <target dev="tap3b7a0e63-af"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604/console.log" append="off"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:53:11 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:53:11 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:53:11 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:53:11 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.275 2 DEBUG nova.virt.libvirt.vif [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:51:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-657670950',display_name='tempest-TestNetworkAdvancedServerOps-server-657670950',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-657670950',id=141,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJUsVCzPVQ4EYUnLe4xVecX/G7C+Cia09idavSODc4ZN//6Cqf+a8ivFPaF6ii5km7SztqC4ETT2rQva0v04xuCgbV1S1NVEoEr76v1/FpEPV08UhMxhurTufTiANa0c8g==',key_name='tempest-TestNetworkAdvancedServerOps-2083529481',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-kcomn0fd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:53:04Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=063c7d3e-98b4-46a4-a75e-de10a2135604,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--631630504", "vif_mac": "fa:16:3e:f0:dd:e3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.275 2 DEBUG nova.network.os_vif_util [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converting VIF {"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--631630504", "vif_mac": "fa:16:3e:f0:dd:e3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.276 2 DEBUG nova.network.os_vif_util [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.276 2 DEBUG os_vif [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.277 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b7a0e63-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b7a0e63-af, col_values=(('external_ids', {'iface-id': '3b7a0e63-af58-4d73-8bc7-684e63bb5e96', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:dd:e3', 'vm-uuid': '063c7d3e-98b4-46a4-a75e-de10a2135604'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:11 np0005466031 NetworkManager[44907]: <info>  [1759409591.2836] manager: (tap3b7a0e63-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.292 2 INFO os_vif [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af')#033[00m
Oct  2 08:53:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:53:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:11.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.352 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.353 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.353 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] No VIF found with MAC fa:16:3e:f0:dd:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.354 2 INFO nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Using config drive#033[00m
Oct  2 08:53:11 np0005466031 kernel: tap3b7a0e63-af: entered promiscuous mode
Oct  2 08:53:11 np0005466031 NetworkManager[44907]: <info>  [1759409591.4580] manager: (tap3b7a0e63-af): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Oct  2 08:53:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:11Z|00515|binding|INFO|Claiming lport 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for this chassis.
Oct  2 08:53:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:11Z|00516|binding|INFO|3b7a0e63-af58-4d73-8bc7-684e63bb5e96: Claiming fa:16:3e:f0:dd:e3 10.100.0.6
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.472 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:dd:e3 10.100.0.6'], port_security=['fa:16:3e:f0:dd:e3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '063c7d3e-98b4-46a4-a75e-de10a2135604', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'de779411-ca14-48cf-b925-43960a45cd14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=184be778-bd1f-45cf-8f02-03b61731fc05, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=3b7a0e63-af58-4d73-8bc7-684e63bb5e96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.475 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 in datapath c011060c-3c24-4fdd-8151-c45f0e81f0db bound to our chassis#033[00m
Oct  2 08:53:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:11Z|00517|binding|INFO|Setting lport 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 ovn-installed in OVS
Oct  2 08:53:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:11Z|00518|binding|INFO|Setting lport 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 up in Southbound
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.478 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c011060c-3c24-4fdd-8151-c45f0e81f0db#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:11 np0005466031 systemd-udevd[298829]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.492 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e04a50d9-579e-459c-b8b5-f4b8e1006af9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.493 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc011060c-31 in ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.495 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc011060c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.495 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[190130ca-fc01-4ab3-8b64-157424f68067]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.496 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[27586c1b-1b39-46a5-a059-5132ff98008b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 NetworkManager[44907]: <info>  [1759409591.4990] device (tap3b7a0e63-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:53:11 np0005466031 NetworkManager[44907]: <info>  [1759409591.4996] device (tap3b7a0e63-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:53:11 np0005466031 systemd-machined[192227]: New machine qemu-61-instance-0000008d.
Oct  2 08:53:11 np0005466031 systemd[1]: Started Virtual Machine qemu-61-instance-0000008d.
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.513 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ed4989-10ea-4ebb-b9f9-8e83345d1a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.528 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[be539597-bdaa-4b7f-a29c-1bc7a8695251]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.546 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.547 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.547 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.562 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[65544385-0fe8-483c-96c6-ea87b7c1a865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.569 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9d534f-c8bd-493f-b7af-d2d99736bed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 NetworkManager[44907]: <info>  [1759409591.5704] manager: (tapc011060c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/239)
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.609 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d7efc8-73d6-4270-ae21-18fab40b2c12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.613 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[179ffc79-4a40-4ac6-b1cb-d1ab0a4d9547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 NetworkManager[44907]: <info>  [1759409591.6418] device (tapc011060c-30): carrier: link connected
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.646 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[110b0f11-04ca-4f76-b499-b6c15b0e0555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.667 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2e077968-24b2-45e4-a9b3-128922922f80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc011060c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:3a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 744723, 'reachable_time': 19963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298862, 'error': None, 'target': 'ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.683 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[21179e3c-eed6-4fc5-810f-1225047bbacb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee3:3a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 744723, 'tstamp': 744723}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298863, 'error': None, 'target': 'ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.700 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[de3592df-84c8-40dc-8adc-76c59a446432]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc011060c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:3a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 744723, 'reachable_time': 19963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298864, 'error': None, 'target': 'ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.730 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3a492927-1a76-4389-8c75-ee7952688bb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.790 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[02d41f6f-aa55-4bea-b8a2-f22722be4d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.792 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc011060c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.792 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.793 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc011060c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:11 np0005466031 kernel: tapc011060c-30: entered promiscuous mode
Oct  2 08:53:11 np0005466031 NetworkManager[44907]: <info>  [1759409591.7954] manager: (tapc011060c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.798 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc011060c-30, col_values=(('external_ids', {'iface-id': 'b93b6a15-3b4f-4af6-9700-32891dbbf041'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:11Z|00519|binding|INFO|Releasing lport b93b6a15-3b4f-4af6-9700-32891dbbf041 from this chassis (sb_readonly=0)
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.815 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c011060c-3c24-4fdd-8151-c45f0e81f0db.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c011060c-3c24-4fdd-8151-c45f0e81f0db.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.816 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[40b5a6f6-e77c-45c7-8272-ce2d53c73320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.817 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-c011060c-3c24-4fdd-8151-c45f0e81f0db
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/c011060c-3c24-4fdd-8151-c45f0e81f0db.pid.haproxy
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID c011060c-3c24-4fdd-8151-c45f0e81f0db
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:53:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:11.818 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'env', 'PROCESS_TAG=haproxy-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c011060c-3c24-4fdd-8151-c45f0e81f0db.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.897 2 DEBUG nova.network.neutron [req-081f125d-87fd-4331-a438-64bc752d9e6c req-7d6aaf14-2b06-46b3-a9cf-dd79f68196b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updated VIF entry in instance network info cache for port d3265627-45dd-403c-990b-451562559afe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.897 2 DEBUG nova.network.neutron [req-081f125d-87fd-4331-a438-64bc752d9e6c req-7d6aaf14-2b06-46b3-a9cf-dd79f68196b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating instance_info_cache with network_info: [{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapd3265627-45", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.987 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.987 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.988 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:53:11 np0005466031 nova_compute[235803]: 2025-10-02 12:53:11.988 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 184f3992-03ad-4908-aeb5-b14e562fa846 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.030 2 DEBUG oslo_concurrency.lockutils [req-081f125d-87fd-4331-a438-64bc752d9e6c req-7d6aaf14-2b06-46b3-a9cf-dd79f68196b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.106 2 DEBUG nova.compute.manager [req-bf653eb3-6ca5-4c9a-be56-1254fd2c9a35 req-fdef74eb-47cc-499e-a58e-2cc661192656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.106 2 DEBUG oslo_concurrency.lockutils [req-bf653eb3-6ca5-4c9a-be56-1254fd2c9a35 req-fdef74eb-47cc-499e-a58e-2cc661192656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.106 2 DEBUG oslo_concurrency.lockutils [req-bf653eb3-6ca5-4c9a-be56-1254fd2c9a35 req-fdef74eb-47cc-499e-a58e-2cc661192656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.106 2 DEBUG oslo_concurrency.lockutils [req-bf653eb3-6ca5-4c9a-be56-1254fd2c9a35 req-fdef74eb-47cc-499e-a58e-2cc661192656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.106 2 DEBUG nova.compute.manager [req-bf653eb3-6ca5-4c9a-be56-1254fd2c9a35 req-fdef74eb-47cc-499e-a58e-2cc661192656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] No waiting events found dispatching network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.107 2 WARNING nova.compute.manager [req-bf653eb3-6ca5-4c9a-be56-1254fd2c9a35 req-fdef74eb-47cc-499e-a58e-2cc661192656 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received unexpected event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for instance with vm_state active and task_state resize_finish.#033[00m
Oct  2 08:53:12 np0005466031 podman[298939]: 2025-10-02 12:53:12.17953279 +0000 UTC m=+0.052381750 container create a5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:53:12 np0005466031 systemd[1]: Started libpod-conmon-a5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a.scope.
Oct  2 08:53:12 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:53:12 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28148e43f0e7bdf929585f9147adbfc2554b733310de0fec7a3631ad3a46a0b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:53:12 np0005466031 podman[298939]: 2025-10-02 12:53:12.151048629 +0000 UTC m=+0.023897609 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:53:12 np0005466031 podman[298939]: 2025-10-02 12:53:12.257976549 +0000 UTC m=+0.130825509 container init a5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 08:53:12 np0005466031 podman[298939]: 2025-10-02 12:53:12.263116667 +0000 UTC m=+0.135965617 container start a5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:53:12 np0005466031 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[298954]: [NOTICE]   (298958) : New worker (298960) forked
Oct  2 08:53:12 np0005466031 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[298954]: [NOTICE]   (298958) : Loading success.
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.498 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409592.497951, 063c7d3e-98b4-46a4-a75e-de10a2135604 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.499 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.501 2 DEBUG nova.compute.manager [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.505 2 INFO nova.virt.libvirt.driver [-] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Instance running successfully.#033[00m
Oct  2 08:53:12 np0005466031 virtqemud[235323]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.508 2 DEBUG nova.virt.libvirt.guest [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.508 2 DEBUG nova.virt.libvirt.driver [None req-c3c1c6ec-58f3-4069-bd0c-8ee9895a6a32 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.562 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.565 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.619 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.619 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409592.4995143, 063c7d3e-98b4-46a4-a75e-de10a2135604 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.619 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] VM Started (Lifecycle Event)#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.647 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:53:12 np0005466031 nova_compute[235803]: 2025-10-02 12:53:12.650 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:53:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:13.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:13 np0005466031 nova_compute[235803]: 2025-10-02 12:53:13.246 2 DEBUG nova.network.neutron [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updated VIF entry in instance network info cache for port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:53:13 np0005466031 nova_compute[235803]: 2025-10-02 12:53:13.247 2 DEBUG nova.network.neutron [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating instance_info_cache with network_info: [{"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:13 np0005466031 nova_compute[235803]: 2025-10-02 12:53:13.266 2 DEBUG oslo_concurrency.lockutils [req-cf290f95-7be6-4c54-8ccb-48da9fe4c5fb req-0578c34f-1227-48a9-a039-42f97e239e06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:13.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:14 np0005466031 nova_compute[235803]: 2025-10-02 12:53:14.264 2 DEBUG nova.compute.manager [req-af993621-f332-434b-ad29-67ce38a95ff0 req-8a067f07-8195-47ee-8e5e-f5c7db51da01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:14 np0005466031 nova_compute[235803]: 2025-10-02 12:53:14.265 2 DEBUG oslo_concurrency.lockutils [req-af993621-f332-434b-ad29-67ce38a95ff0 req-8a067f07-8195-47ee-8e5e-f5c7db51da01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:14 np0005466031 nova_compute[235803]: 2025-10-02 12:53:14.265 2 DEBUG oslo_concurrency.lockutils [req-af993621-f332-434b-ad29-67ce38a95ff0 req-8a067f07-8195-47ee-8e5e-f5c7db51da01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:14 np0005466031 nova_compute[235803]: 2025-10-02 12:53:14.265 2 DEBUG oslo_concurrency.lockutils [req-af993621-f332-434b-ad29-67ce38a95ff0 req-8a067f07-8195-47ee-8e5e-f5c7db51da01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:14 np0005466031 nova_compute[235803]: 2025-10-02 12:53:14.265 2 DEBUG nova.compute.manager [req-af993621-f332-434b-ad29-67ce38a95ff0 req-8a067f07-8195-47ee-8e5e-f5c7db51da01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] No waiting events found dispatching network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:14 np0005466031 nova_compute[235803]: 2025-10-02 12:53:14.266 2 WARNING nova.compute.manager [req-af993621-f332-434b-ad29-67ce38a95ff0 req-8a067f07-8195-47ee-8e5e-f5c7db51da01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received unexpected event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for instance with vm_state resized and task_state None.#033[00m
Oct  2 08:53:15 np0005466031 nova_compute[235803]: 2025-10-02 12:53:15.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:15.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:15.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:16 np0005466031 nova_compute[235803]: 2025-10-02 12:53:16.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:53:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:53:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:53:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:17.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:17.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:17 np0005466031 podman[299101]: 2025-10-02 12:53:17.62308787 +0000 UTC m=+0.050356951 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:53:17 np0005466031 podman[299102]: 2025-10-02 12:53:17.662395533 +0000 UTC m=+0.087419380 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct  2 08:53:18 np0005466031 nova_compute[235803]: 2025-10-02 12:53:18.264 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Updating instance_info_cache with network_info: [{"id": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "address": "fa:16:3e:1e:bf:36", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc6c1baa-6e", "ovs_interfaceid": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:18 np0005466031 nova_compute[235803]: 2025-10-02 12:53:18.287 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-184f3992-03ad-4908-aeb5-b14e562fa846" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:18 np0005466031 nova_compute[235803]: 2025-10-02 12:53:18.287 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:53:18 np0005466031 nova_compute[235803]: 2025-10-02 12:53:18.288 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:18 np0005466031 nova_compute[235803]: 2025-10-02 12:53:18.288 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:18 np0005466031 nova_compute[235803]: 2025-10-02 12:53:18.289 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:18 np0005466031 nova_compute[235803]: 2025-10-02 12:53:18.289 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:53:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:19.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:53:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:19.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:53:20 np0005466031 nova_compute[235803]: 2025-10-02 12:53:20.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:20 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:20Z|00520|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:53:20 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:20Z|00521|binding|INFO|Releasing lport b93b6a15-3b4f-4af6-9700-32891dbbf041 from this chassis (sb_readonly=0)
Oct  2 08:53:20 np0005466031 nova_compute[235803]: 2025-10-02 12:53:20.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:21.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:21 np0005466031 nova_compute[235803]: 2025-10-02 12:53:21.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:21.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e330 e330: 3 total, 3 up, 3 in
Oct  2 08:53:22 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:53:22 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:53:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:23.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:23.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:25 np0005466031 nova_compute[235803]: 2025-10-02 12:53:25.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:25.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:25.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:25 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:25Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:dd:e3 10.100.0.6
Oct  2 08:53:25 np0005466031 podman[299221]: 2025-10-02 12:53:25.856579174 +0000 UTC m=+0.054120500 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:53:25 np0005466031 podman[299220]: 2025-10-02 12:53:25.8637395 +0000 UTC m=+0.062854382 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:53:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:25.863 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:25.864 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:25.864 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:26 np0005466031 nova_compute[235803]: 2025-10-02 12:53:26.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:53:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:27.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:53:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:27.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:28 np0005466031 nova_compute[235803]: 2025-10-02 12:53:28.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:29.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:29.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:30 np0005466031 nova_compute[235803]: 2025-10-02 12:53:30.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:31.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:31 np0005466031 nova_compute[235803]: 2025-10-02 12:53:31.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:53:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:31.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:53:32 np0005466031 nova_compute[235803]: 2025-10-02 12:53:32.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e331 e331: 3 total, 3 up, 3 in
Oct  2 08:53:33 np0005466031 nova_compute[235803]: 2025-10-02 12:53:33.223 2 INFO nova.compute.manager [None req-f89f0e36-261e-4b30-a400-ae97fa849b9f 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Get console output#033[00m
Oct  2 08:53:33 np0005466031 nova_compute[235803]: 2025-10-02 12:53:33.229 18228 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:53:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:53:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:33.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:53:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:53:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:33.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:53:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.183 2 DEBUG oslo_concurrency.lockutils [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.183 2 DEBUG oslo_concurrency.lockutils [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.184 2 DEBUG oslo_concurrency.lockutils [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.184 2 DEBUG oslo_concurrency.lockutils [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.184 2 DEBUG oslo_concurrency.lockutils [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.185 2 INFO nova.compute.manager [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Terminating instance#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.185 2 DEBUG nova.compute.manager [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:53:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:53:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:35.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.307 2 DEBUG nova.compute.manager [req-02100045-8f11-48e5-82bc-f9ad655bd43d req-2d5aa9ca-c353-451e-ace8-2fa703b3118a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-changed-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.308 2 DEBUG nova.compute.manager [req-02100045-8f11-48e5-82bc-f9ad655bd43d req-2d5aa9ca-c353-451e-ace8-2fa703b3118a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Refreshing instance network info cache due to event network-changed-3b7a0e63-af58-4d73-8bc7-684e63bb5e96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.308 2 DEBUG oslo_concurrency.lockutils [req-02100045-8f11-48e5-82bc-f9ad655bd43d req-2d5aa9ca-c353-451e-ace8-2fa703b3118a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.309 2 DEBUG oslo_concurrency.lockutils [req-02100045-8f11-48e5-82bc-f9ad655bd43d req-2d5aa9ca-c353-451e-ace8-2fa703b3118a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.309 2 DEBUG nova.network.neutron [req-02100045-8f11-48e5-82bc-f9ad655bd43d req-2d5aa9ca-c353-451e-ace8-2fa703b3118a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Refreshing network info cache for port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:53:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:53:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:35.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:53:35 np0005466031 kernel: tap3b7a0e63-af (unregistering): left promiscuous mode
Oct  2 08:53:35 np0005466031 NetworkManager[44907]: <info>  [1759409615.4016] device (tap3b7a0e63-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:53:35 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:35Z|00522|binding|INFO|Releasing lport 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 from this chassis (sb_readonly=0)
Oct  2 08:53:35 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:35Z|00523|binding|INFO|Setting lport 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 down in Southbound
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:35 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:35Z|00524|binding|INFO|Removing iface tap3b7a0e63-af ovn-installed in OVS
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:35.471 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:dd:e3 10.100.0.6'], port_security=['fa:16:3e:f0:dd:e3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '063c7d3e-98b4-46a4-a75e-de10a2135604', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'de779411-ca14-48cf-b925-43960a45cd14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=184be778-bd1f-45cf-8f02-03b61731fc05, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=3b7a0e63-af58-4d73-8bc7-684e63bb5e96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:53:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:35.472 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96 in datapath c011060c-3c24-4fdd-8151-c45f0e81f0db unbound from our chassis#033[00m
Oct  2 08:53:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:35.473 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c011060c-3c24-4fdd-8151-c45f0e81f0db, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:35.474 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[05024491-8950-4ed6-a1a5-162ef17d1009]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:35.475 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db namespace which is not needed anymore#033[00m
Oct  2 08:53:35 np0005466031 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Oct  2 08:53:35 np0005466031 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000008d.scope: Consumed 14.176s CPU time.
Oct  2 08:53:35 np0005466031 systemd-machined[192227]: Machine qemu-61-instance-0000008d terminated.
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.628 2 INFO nova.virt.libvirt.driver [-] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Instance destroyed successfully.#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.629 2 DEBUG nova.objects.instance [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid 063c7d3e-98b4-46a4-a75e-de10a2135604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.642 2 DEBUG nova.virt.libvirt.vif [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:51:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-657670950',display_name='tempest-TestNetworkAdvancedServerOps-server-657670950',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-657670950',id=141,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJUsVCzPVQ4EYUnLe4xVecX/G7C+Cia09idavSODc4ZN//6Cqf+a8ivFPaF6ii5km7SztqC4ETT2rQva0v04xuCgbV1S1NVEoEr76v1/FpEPV08UhMxhurTufTiANa0c8g==',key_name='tempest-TestNetworkAdvancedServerOps-2083529481',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:53:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-kcomn0fd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:53:22Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=063c7d3e-98b4-46a4-a75e-de10a2135604,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.643 2 DEBUG nova.network.os_vif_util [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.643 2 DEBUG nova.network.os_vif_util [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.644 2 DEBUG os_vif [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b7a0e63-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:35 np0005466031 nova_compute[235803]: 2025-10-02 12:53:35.653 2 INFO os_vif [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:dd:e3,bridge_name='br-int',has_traffic_filtering=True,id=3b7a0e63-af58-4d73-8bc7-684e63bb5e96,network=Network(c011060c-3c24-4fdd-8151-c45f0e81f0db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b7a0e63-af')#033[00m
Oct  2 08:53:35 np0005466031 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[298954]: [NOTICE]   (298958) : haproxy version is 2.8.14-c23fe91
Oct  2 08:53:35 np0005466031 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[298954]: [NOTICE]   (298958) : path to executable is /usr/sbin/haproxy
Oct  2 08:53:35 np0005466031 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[298954]: [WARNING]  (298958) : Exiting Master process...
Oct  2 08:53:35 np0005466031 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[298954]: [WARNING]  (298958) : Exiting Master process...
Oct  2 08:53:35 np0005466031 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[298954]: [ALERT]    (298958) : Current worker (298960) exited with code 143 (Terminated)
Oct  2 08:53:35 np0005466031 neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db[298954]: [WARNING]  (298958) : All workers exited. Exiting... (0)
Oct  2 08:53:35 np0005466031 systemd[1]: libpod-a5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a.scope: Deactivated successfully.
Oct  2 08:53:35 np0005466031 podman[299313]: 2025-10-02 12:53:35.93146468 +0000 UTC m=+0.358161049 container died a5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:53:36 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a-userdata-shm.mount: Deactivated successfully.
Oct  2 08:53:36 np0005466031 systemd[1]: var-lib-containers-storage-overlay-28148e43f0e7bdf929585f9147adbfc2554b733310de0fec7a3631ad3a46a0b6-merged.mount: Deactivated successfully.
Oct  2 08:53:36 np0005466031 podman[299313]: 2025-10-02 12:53:36.578114612 +0000 UTC m=+1.004810971 container cleanup a5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:53:36 np0005466031 systemd[1]: libpod-conmon-a5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a.scope: Deactivated successfully.
Oct  2 08:53:37 np0005466031 podman[299370]: 2025-10-02 12:53:37.16114309 +0000 UTC m=+0.563139476 container remove a5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:53:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:37.168 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[535c390f-7a5c-4fa7-af76-33c7287a150f]: (4, ('Thu Oct  2 12:53:35 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db (a5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a)\na5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a\nThu Oct  2 12:53:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db (a5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a)\na5af11ae295909a62f1e40a13c2d2bdcf1b903edcac94d9b69ea7dd59e78f74a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:37.170 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[562cadd5-5f96-4738-993e-49d444fd99a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:37.171 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc011060c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:37 np0005466031 nova_compute[235803]: 2025-10-02 12:53:37.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:37 np0005466031 kernel: tapc011060c-30: left promiscuous mode
Oct  2 08:53:37 np0005466031 nova_compute[235803]: 2025-10-02 12:53:37.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:37.191 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0de5c33e-bf8c-416e-b9ce-1512034ff464]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:37.222 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[46d41586-a771-4398-b7d5-0db5341a9a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:37.223 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ea708e-0aab-4a1f-8c32-82f805838f75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:37.245 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7cda665a-1f48-466a-a8f0-837f405569a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 744714, 'reachable_time': 22921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299387, 'error': None, 'target': 'ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:37.248 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c011060c-3c24-4fdd-8151-c45f0e81f0db deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:53:37 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:37.248 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[2d31c026-afcb-455f-a6ab-e8e1cad5d2cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:37 np0005466031 systemd[1]: run-netns-ovnmeta\x2dc011060c\x2d3c24\x2d4fdd\x2d8151\x2dc45f0e81f0db.mount: Deactivated successfully.
Oct  2 08:53:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:37.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:53:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:37.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:53:38 np0005466031 nova_compute[235803]: 2025-10-02 12:53:38.203 2 DEBUG nova.network.neutron [req-02100045-8f11-48e5-82bc-f9ad655bd43d req-2d5aa9ca-c353-451e-ace8-2fa703b3118a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updated VIF entry in instance network info cache for port 3b7a0e63-af58-4d73-8bc7-684e63bb5e96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:53:38 np0005466031 nova_compute[235803]: 2025-10-02 12:53:38.204 2 DEBUG nova.network.neutron [req-02100045-8f11-48e5-82bc-f9ad655bd43d req-2d5aa9ca-c353-451e-ace8-2fa703b3118a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating instance_info_cache with network_info: [{"id": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "address": "fa:16:3e:f0:dd:e3", "network": {"id": "c011060c-3c24-4fdd-8151-c45f0e81f0db", "bridge": "br-int", "label": "tempest-network-smoke--631630504", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b7a0e63-af", "ovs_interfaceid": "3b7a0e63-af58-4d73-8bc7-684e63bb5e96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:38 np0005466031 nova_compute[235803]: 2025-10-02 12:53:38.221 2 DEBUG oslo_concurrency.lockutils [req-02100045-8f11-48e5-82bc-f9ad655bd43d req-2d5aa9ca-c353-451e-ace8-2fa703b3118a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-063c7d3e-98b4-46a4-a75e-de10a2135604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:39 np0005466031 nova_compute[235803]: 2025-10-02 12:53:39.014 2 INFO nova.virt.libvirt.driver [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Deleting instance files /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604_del#033[00m
Oct  2 08:53:39 np0005466031 nova_compute[235803]: 2025-10-02 12:53:39.015 2 INFO nova.virt.libvirt.driver [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Deletion of /var/lib/nova/instances/063c7d3e-98b4-46a4-a75e-de10a2135604_del complete#033[00m
Oct  2 08:53:39 np0005466031 nova_compute[235803]: 2025-10-02 12:53:39.072 2 INFO nova.compute.manager [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Took 3.89 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:53:39 np0005466031 nova_compute[235803]: 2025-10-02 12:53:39.072 2 DEBUG oslo.service.loopingcall [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:53:39 np0005466031 nova_compute[235803]: 2025-10-02 12:53:39.073 2 DEBUG nova.compute.manager [-] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:53:39 np0005466031 nova_compute[235803]: 2025-10-02 12:53:39.073 2 DEBUG nova.network.neutron [-] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:53:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:39.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:39.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:40 np0005466031 nova_compute[235803]: 2025-10-02 12:53:40.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e332 e332: 3 total, 3 up, 3 in
Oct  2 08:53:40 np0005466031 nova_compute[235803]: 2025-10-02 12:53:40.349 2 DEBUG nova.network.neutron [-] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:40 np0005466031 nova_compute[235803]: 2025-10-02 12:53:40.426 2 INFO nova.compute.manager [-] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Took 1.35 seconds to deallocate network for instance.#033[00m
Oct  2 08:53:40 np0005466031 nova_compute[235803]: 2025-10-02 12:53:40.571 2 DEBUG oslo_concurrency.lockutils [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:40 np0005466031 nova_compute[235803]: 2025-10-02 12:53:40.571 2 DEBUG oslo_concurrency.lockutils [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:40.639 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:53:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:40.640 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:53:40 np0005466031 nova_compute[235803]: 2025-10-02 12:53:40.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:40 np0005466031 nova_compute[235803]: 2025-10-02 12:53:40.646 2 DEBUG oslo_concurrency.processutils [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:40 np0005466031 nova_compute[235803]: 2025-10-02 12:53:40.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.017 2 DEBUG nova.compute.manager [req-907515ee-6c55-46ed-a88e-98ac55912d6d req-eb55e929-f8d4-4364-a522-0679c954eb36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-vif-deleted-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.038 2 DEBUG nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-vif-unplugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.038 2 DEBUG oslo_concurrency.lockutils [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.038 2 DEBUG oslo_concurrency.lockutils [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.039 2 DEBUG oslo_concurrency.lockutils [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.039 2 DEBUG nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] No waiting events found dispatching network-vif-unplugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.039 2 WARNING nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received unexpected event network-vif-unplugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.039 2 DEBUG nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.040 2 DEBUG oslo_concurrency.lockutils [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.040 2 DEBUG oslo_concurrency.lockutils [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.040 2 DEBUG oslo_concurrency.lockutils [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.040 2 DEBUG nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] No waiting events found dispatching network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.041 2 WARNING nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Received unexpected event network-vif-plugged-3b7a0e63-af58-4d73-8bc7-684e63bb5e96 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:53:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:53:41 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1513231900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.079 2 DEBUG oslo_concurrency.processutils [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.085 2 DEBUG nova.compute.provider_tree [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.104 2 DEBUG nova.scheduler.client.report [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.145 2 DEBUG oslo_concurrency.lockutils [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.171 2 INFO nova.scheduler.client.report [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Deleted allocations for instance 063c7d3e-98b4-46a4-a75e-de10a2135604#033[00m
Oct  2 08:53:41 np0005466031 nova_compute[235803]: 2025-10-02 12:53:41.234 2 DEBUG oslo_concurrency.lockutils [None req-85b0eb64-6fdc-4eb3-91ec-e65ef51c95df 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "063c7d3e-98b4-46a4-a75e-de10a2135604" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:41.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:41.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:43.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:53:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:43.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:53:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e333 e333: 3 total, 3 up, 3 in
Oct  2 08:53:44 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:44Z|00525|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:53:44 np0005466031 nova_compute[235803]: 2025-10-02 12:53:44.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:45 np0005466031 nova_compute[235803]: 2025-10-02 12:53:45.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:45.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:45 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:45Z|00526|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:53:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:53:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:45.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:53:45 np0005466031 nova_compute[235803]: 2025-10-02 12:53:45.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.572377) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625572446, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1373, "num_deletes": 255, "total_data_size": 2910736, "memory_usage": 2962768, "flush_reason": "Manual Compaction"}
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625584327, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 1908022, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56613, "largest_seqno": 57981, "table_properties": {"data_size": 1901941, "index_size": 3348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13366, "raw_average_key_size": 20, "raw_value_size": 1889629, "raw_average_value_size": 2902, "num_data_blocks": 147, "num_entries": 651, "num_filter_entries": 651, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409528, "oldest_key_time": 1759409528, "file_creation_time": 1759409625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 11996 microseconds, and 4558 cpu microseconds.
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.584378) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 1908022 bytes OK
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.584399) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.585677) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.585695) EVENT_LOG_v1 {"time_micros": 1759409625585689, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.585713) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2904182, prev total WAL file size 2904182, number of live WAL files 2.
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.586919) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(1863KB)], [111(10MB)]
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625586977, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 12718517, "oldest_snapshot_seqno": -1}
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 8165 keys, 10743186 bytes, temperature: kUnknown
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625643213, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 10743186, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10690228, "index_size": 31470, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20421, "raw_key_size": 211921, "raw_average_key_size": 25, "raw_value_size": 10546455, "raw_average_value_size": 1291, "num_data_blocks": 1228, "num_entries": 8165, "num_filter_entries": 8165, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759409625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.643454) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10743186 bytes
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.645346) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.9 rd, 190.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(12.3) write-amplify(5.6) OK, records in: 8689, records dropped: 524 output_compression: NoCompression
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.645416) EVENT_LOG_v1 {"time_micros": 1759409625645392, "job": 70, "event": "compaction_finished", "compaction_time_micros": 56307, "compaction_time_cpu_micros": 28603, "output_level": 6, "num_output_files": 1, "total_output_size": 10743186, "num_input_records": 8689, "num_output_records": 8165, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625646144, "job": 70, "event": "table_file_deletion", "file_number": 113}
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625649240, "job": 70, "event": "table_file_deletion", "file_number": 111}
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.586869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.649333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.649339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.649342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.649345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:53:45 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:53:45.649347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:53:45 np0005466031 nova_compute[235803]: 2025-10-02 12:53:45.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:47.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:47.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e334 e334: 3 total, 3 up, 3 in
Oct  2 08:53:48 np0005466031 nova_compute[235803]: 2025-10-02 12:53:48.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:48 np0005466031 podman[299466]: 2025-10-02 12:53:48.697778579 +0000 UTC m=+0.069957677 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:53:48 np0005466031 podman[299467]: 2025-10-02 12:53:48.752452964 +0000 UTC m=+0.131401817 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:53:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:49.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:49.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:49.642 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:50 np0005466031 nova_compute[235803]: 2025-10-02 12:53:50.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:50 np0005466031 nova_compute[235803]: 2025-10-02 12:53:50.627 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409615.6262877, 063c7d3e-98b4-46a4-a75e-de10a2135604 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:53:50 np0005466031 nova_compute[235803]: 2025-10-02 12:53:50.628 2 INFO nova.compute.manager [-] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:53:50 np0005466031 nova_compute[235803]: 2025-10-02 12:53:50.648 2 DEBUG nova.compute.manager [None req-8cc2d82e-ee6e-47e5-b51e-3dbac78d52b8 - - - - - -] [instance: 063c7d3e-98b4-46a4-a75e-de10a2135604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:53:50 np0005466031 nova_compute[235803]: 2025-10-02 12:53:50.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:51.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e335 e335: 3 total, 3 up, 3 in
Oct  2 08:53:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:51.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:52 np0005466031 nova_compute[235803]: 2025-10-02 12:53:52.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e336 e336: 3 total, 3 up, 3 in
Oct  2 08:53:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:53.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:53:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:53.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:53:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:54 np0005466031 nova_compute[235803]: 2025-10-02 12:53:54.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e337 e337: 3 total, 3 up, 3 in
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.087 2 DEBUG oslo_concurrency.lockutils [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "184f3992-03ad-4908-aeb5-b14e562fa846" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.088 2 DEBUG oslo_concurrency.lockutils [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.088 2 DEBUG oslo_concurrency.lockutils [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.088 2 DEBUG oslo_concurrency.lockutils [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.088 2 DEBUG oslo_concurrency.lockutils [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.089 2 INFO nova.compute.manager [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Terminating instance#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.090 2 DEBUG nova.compute.manager [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:53:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:55.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:55.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:55 np0005466031 kernel: tapdc6c1baa-6e (unregistering): left promiscuous mode
Oct  2 08:53:55 np0005466031 NetworkManager[44907]: <info>  [1759409635.6772] device (tapdc6c1baa-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:55Z|00527|binding|INFO|Releasing lport dc6c1baa-6ec8-4649-bfbc-c6720e954f7b from this chassis (sb_readonly=0)
Oct  2 08:53:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:55Z|00528|binding|INFO|Setting lport dc6c1baa-6ec8-4649-bfbc-c6720e954f7b down in Southbound
Oct  2 08:53:55 np0005466031 ovn_controller[132413]: 2025-10-02T12:53:55Z|00529|binding|INFO|Removing iface tapdc6c1baa-6e ovn-installed in OVS
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:53:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:55.705 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:bf:36 10.100.0.5'], port_security=['fa:16:3e:1e:bf:36 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '184f3992-03ad-4908-aeb5-b14e562fa846', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7c38185a-c389-4d04-8fc6-53a62e6c5352', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=dc6c1baa-6ec8-4649-bfbc-c6720e954f7b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:53:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:55.706 141898 INFO neutron.agent.ovn.metadata.agent [-] Port dc6c1baa-6ec8-4649-bfbc-c6720e954f7b in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 unbound from our chassis#033[00m
Oct  2 08:53:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:55.708 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:53:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:55.709 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[82747b6d-924a-4bee-b85a-aca89cd42a0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:55.709 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace which is not needed anymore#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:55 np0005466031 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Oct  2 08:53:55 np0005466031 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000007f.scope: Consumed 28.263s CPU time.
Oct  2 08:53:55 np0005466031 systemd-machined[192227]: Machine qemu-55-instance-0000007f terminated.
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.937 2 INFO nova.virt.libvirt.driver [-] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Instance destroyed successfully.#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.938 2 DEBUG nova.objects.instance [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'resources' on Instance uuid 184f3992-03ad-4908-aeb5-b14e562fa846 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:55 np0005466031 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[289273]: [NOTICE]   (289277) : haproxy version is 2.8.14-c23fe91
Oct  2 08:53:55 np0005466031 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[289273]: [NOTICE]   (289277) : path to executable is /usr/sbin/haproxy
Oct  2 08:53:55 np0005466031 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[289273]: [WARNING]  (289277) : Exiting Master process...
Oct  2 08:53:55 np0005466031 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[289273]: [ALERT]    (289277) : Current worker (289279) exited with code 143 (Terminated)
Oct  2 08:53:55 np0005466031 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[289273]: [WARNING]  (289277) : All workers exited. Exiting... (0)
Oct  2 08:53:55 np0005466031 systemd[1]: libpod-97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62.scope: Deactivated successfully.
Oct  2 08:53:55 np0005466031 podman[299541]: 2025-10-02 12:53:55.95911411 +0000 UTC m=+0.143336261 container died 97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.974 2 DEBUG nova.virt.libvirt.vif [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:47:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1510354407',display_name='tempest-ServerActionsTestOtherB-server-1510354407',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1510354407',id=127,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:47:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-qwaoankd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',imag
e_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:47:42Z,user_data=None,user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=184f3992-03ad-4908-aeb5-b14e562fa846,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "address": "fa:16:3e:1e:bf:36", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc6c1baa-6e", "ovs_interfaceid": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.975 2 DEBUG nova.network.os_vif_util [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "address": "fa:16:3e:1e:bf:36", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc6c1baa-6e", "ovs_interfaceid": "dc6c1baa-6ec8-4649-bfbc-c6720e954f7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.976 2 DEBUG nova.network.os_vif_util [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:bf:36,bridge_name='br-int',has_traffic_filtering=True,id=dc6c1baa-6ec8-4649-bfbc-c6720e954f7b,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc6c1baa-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.977 2 DEBUG os_vif [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:bf:36,bridge_name='br-int',has_traffic_filtering=True,id=dc6c1baa-6ec8-4649-bfbc-c6720e954f7b,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc6c1baa-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.979 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc6c1baa-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e338 e338: 3 total, 3 up, 3 in
Oct  2 08:53:55 np0005466031 nova_compute[235803]: 2025-10-02 12:53:55.986 2 INFO os_vif [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:bf:36,bridge_name='br-int',has_traffic_filtering=True,id=dc6c1baa-6ec8-4649-bfbc-c6720e954f7b,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc6c1baa-6e')#033[00m
Oct  2 08:53:56 np0005466031 nova_compute[235803]: 2025-10-02 12:53:56.080 2 DEBUG nova.compute.manager [req-b1b9fd0f-adec-4a67-a521-1012433509e5 req-fe674b35-ec26-43d5-862b-e9ef468bda43 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Received event network-vif-unplugged-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:56 np0005466031 nova_compute[235803]: 2025-10-02 12:53:56.080 2 DEBUG oslo_concurrency.lockutils [req-b1b9fd0f-adec-4a67-a521-1012433509e5 req-fe674b35-ec26-43d5-862b-e9ef468bda43 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:56 np0005466031 nova_compute[235803]: 2025-10-02 12:53:56.081 2 DEBUG oslo_concurrency.lockutils [req-b1b9fd0f-adec-4a67-a521-1012433509e5 req-fe674b35-ec26-43d5-862b-e9ef468bda43 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:56 np0005466031 nova_compute[235803]: 2025-10-02 12:53:56.081 2 DEBUG oslo_concurrency.lockutils [req-b1b9fd0f-adec-4a67-a521-1012433509e5 req-fe674b35-ec26-43d5-862b-e9ef468bda43 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:56 np0005466031 nova_compute[235803]: 2025-10-02 12:53:56.081 2 DEBUG nova.compute.manager [req-b1b9fd0f-adec-4a67-a521-1012433509e5 req-fe674b35-ec26-43d5-862b-e9ef468bda43 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] No waiting events found dispatching network-vif-unplugged-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:56 np0005466031 nova_compute[235803]: 2025-10-02 12:53:56.081 2 DEBUG nova.compute.manager [req-b1b9fd0f-adec-4a67-a521-1012433509e5 req-fe674b35-ec26-43d5-862b-e9ef468bda43 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Received event network-vif-unplugged-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:53:56 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62-userdata-shm.mount: Deactivated successfully.
Oct  2 08:53:56 np0005466031 systemd[1]: var-lib-containers-storage-overlay-8285ef4d4a4c90555f83bec9ecd25c0dc17f1bcb9685e07e079368d14f519e56-merged.mount: Deactivated successfully.
Oct  2 08:53:56 np0005466031 podman[299575]: 2025-10-02 12:53:56.107712792 +0000 UTC m=+0.120379990 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:53:56 np0005466031 podman[299541]: 2025-10-02 12:53:56.218919296 +0000 UTC m=+0.403141447 container cleanup 97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 08:53:56 np0005466031 podman[299569]: 2025-10-02 12:53:56.221944463 +0000 UTC m=+0.238693088 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct  2 08:53:56 np0005466031 systemd[1]: libpod-conmon-97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62.scope: Deactivated successfully.
Oct  2 08:53:56 np0005466031 podman[299641]: 2025-10-02 12:53:56.374972772 +0000 UTC m=+0.127165645 container remove 97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:53:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:56.381 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c38a6e18-91b4-41f0-92a6-888c6df670a6]: (4, ('Thu Oct  2 12:53:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62)\n97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62\nThu Oct  2 12:53:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62)\n97b85aa7ba16408bb08a341adc2d623057445e4e8c3026f8edd60bd8b0831b62\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:56.382 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5c44565a-0275-4cfe-b990-d30fa0bcf85a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:56.383 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:56 np0005466031 kernel: tap9266ebd7-30: left promiscuous mode
Oct  2 08:53:56 np0005466031 nova_compute[235803]: 2025-10-02 12:53:56.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:56 np0005466031 nova_compute[235803]: 2025-10-02 12:53:56.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:56.405 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[abf4dd64-85dc-432c-8b81-87f5438ec1fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:56.428 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[897321fe-510b-472f-a9e4-3c41fe7a46b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:56.429 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2279ccd0-846b-4705-afc3-6ff82653a368]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:56.446 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[009b6a7b-9484-4e5b-a16f-ab792e09a476]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701961, 'reachable_time': 25641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299653, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:56 np0005466031 systemd[1]: run-netns-ovnmeta\x2d9266ebd7\x2d321c\x2d4fc7\x2da6c8\x2dc1c304634bb4.mount: Deactivated successfully.
Oct  2 08:53:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:56.449 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:53:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:53:56.450 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5b02cf-4074-417d-87a6-75405808cb30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:57.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:57 np0005466031 nova_compute[235803]: 2025-10-02 12:53:57.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:57.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:58 np0005466031 nova_compute[235803]: 2025-10-02 12:53:58.188 2 DEBUG nova.compute.manager [req-4609e244-94ee-41fe-8b63-fa90b91f4159 req-b6ef8c59-7973-49fd-a913-f26e6a10f703 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Received event network-vif-plugged-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:58 np0005466031 nova_compute[235803]: 2025-10-02 12:53:58.188 2 DEBUG oslo_concurrency.lockutils [req-4609e244-94ee-41fe-8b63-fa90b91f4159 req-b6ef8c59-7973-49fd-a913-f26e6a10f703 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:58 np0005466031 nova_compute[235803]: 2025-10-02 12:53:58.188 2 DEBUG oslo_concurrency.lockutils [req-4609e244-94ee-41fe-8b63-fa90b91f4159 req-b6ef8c59-7973-49fd-a913-f26e6a10f703 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:58 np0005466031 nova_compute[235803]: 2025-10-02 12:53:58.189 2 DEBUG oslo_concurrency.lockutils [req-4609e244-94ee-41fe-8b63-fa90b91f4159 req-b6ef8c59-7973-49fd-a913-f26e6a10f703 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:58 np0005466031 nova_compute[235803]: 2025-10-02 12:53:58.189 2 DEBUG nova.compute.manager [req-4609e244-94ee-41fe-8b63-fa90b91f4159 req-b6ef8c59-7973-49fd-a913-f26e6a10f703 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] No waiting events found dispatching network-vif-plugged-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:58 np0005466031 nova_compute[235803]: 2025-10-02 12:53:58.189 2 WARNING nova.compute.manager [req-4609e244-94ee-41fe-8b63-fa90b91f4159 req-b6ef8c59-7973-49fd-a913-f26e6a10f703 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Received unexpected event network-vif-plugged-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:53:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:59.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:53:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:59.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:59 np0005466031 nova_compute[235803]: 2025-10-02 12:53:59.664 2 INFO nova.virt.libvirt.driver [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Deleting instance files /var/lib/nova/instances/184f3992-03ad-4908-aeb5-b14e562fa846_del#033[00m
Oct  2 08:53:59 np0005466031 nova_compute[235803]: 2025-10-02 12:53:59.665 2 INFO nova.virt.libvirt.driver [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Deletion of /var/lib/nova/instances/184f3992-03ad-4908-aeb5-b14e562fa846_del complete#033[00m
Oct  2 08:53:59 np0005466031 nova_compute[235803]: 2025-10-02 12:53:59.736 2 INFO nova.compute.manager [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Took 4.65 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:53:59 np0005466031 nova_compute[235803]: 2025-10-02 12:53:59.736 2 DEBUG oslo.service.loopingcall [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:53:59 np0005466031 nova_compute[235803]: 2025-10-02 12:53:59.736 2 DEBUG nova.compute.manager [-] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:53:59 np0005466031 nova_compute[235803]: 2025-10-02 12:53:59.737 2 DEBUG nova.network.neutron [-] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:54:00 np0005466031 nova_compute[235803]: 2025-10-02 12:54:00.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:00 np0005466031 nova_compute[235803]: 2025-10-02 12:54:00.584 2 DEBUG nova.network.neutron [-] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:00 np0005466031 nova_compute[235803]: 2025-10-02 12:54:00.605 2 INFO nova.compute.manager [-] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Took 0.87 seconds to deallocate network for instance.#033[00m
Oct  2 08:54:00 np0005466031 nova_compute[235803]: 2025-10-02 12:54:00.670 2 DEBUG oslo_concurrency.lockutils [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:00 np0005466031 nova_compute[235803]: 2025-10-02 12:54:00.670 2 DEBUG oslo_concurrency.lockutils [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:00 np0005466031 nova_compute[235803]: 2025-10-02 12:54:00.683 2 DEBUG nova.compute.manager [req-7e33bc4a-5c52-4832-9fd5-a7bfedff39d8 req-ea035376-5912-4176-b9a8-52082f1d3209 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Received event network-vif-deleted-dc6c1baa-6ec8-4649-bfbc-c6720e954f7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:00 np0005466031 nova_compute[235803]: 2025-10-02 12:54:00.732 2 DEBUG oslo_concurrency.processutils [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:00 np0005466031 nova_compute[235803]: 2025-10-02 12:54:00.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2879670665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:01 np0005466031 nova_compute[235803]: 2025-10-02 12:54:01.224 2 DEBUG oslo_concurrency.processutils [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:01 np0005466031 nova_compute[235803]: 2025-10-02 12:54:01.235 2 DEBUG nova.compute.provider_tree [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:54:01 np0005466031 nova_compute[235803]: 2025-10-02 12:54:01.271 2 DEBUG nova.scheduler.client.report [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:54:01 np0005466031 nova_compute[235803]: 2025-10-02 12:54:01.298 2 DEBUG oslo_concurrency.lockutils [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:01.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:01 np0005466031 nova_compute[235803]: 2025-10-02 12:54:01.343 2 INFO nova.scheduler.client.report [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Deleted allocations for instance 184f3992-03ad-4908-aeb5-b14e562fa846#033[00m
Oct  2 08:54:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:54:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:01.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:54:01 np0005466031 nova_compute[235803]: 2025-10-02 12:54:01.470 2 DEBUG oslo_concurrency.lockutils [None req-bd45bb0c-3828-4d73-8030-bd0ce2fabda9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "184f3992-03ad-4908-aeb5-b14e562fa846" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:02 np0005466031 nova_compute[235803]: 2025-10-02 12:54:02.318 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:02 np0005466031 nova_compute[235803]: 2025-10-02 12:54:02.318 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:02 np0005466031 nova_compute[235803]: 2025-10-02 12:54:02.336 2 DEBUG nova.compute.manager [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:54:02 np0005466031 nova_compute[235803]: 2025-10-02 12:54:02.397 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:02 np0005466031 nova_compute[235803]: 2025-10-02 12:54:02.397 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:02 np0005466031 nova_compute[235803]: 2025-10-02 12:54:02.404 2 DEBUG nova.virt.hardware [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:54:02 np0005466031 nova_compute[235803]: 2025-10-02 12:54:02.404 2 INFO nova.compute.claims [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:54:02 np0005466031 nova_compute[235803]: 2025-10-02 12:54:02.534 2 DEBUG oslo_concurrency.processutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:03 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2957687789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.308 2 DEBUG oslo_concurrency.processutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.774s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.315 2 DEBUG nova.compute.provider_tree [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:54:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:03.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.348 2 DEBUG nova.scheduler.client.report [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:54:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e339 e339: 3 total, 3 up, 3 in
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.383 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.385 2 DEBUG nova.compute.manager [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:54:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:54:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:03.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.445 2 DEBUG nova.compute.manager [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.446 2 DEBUG nova.network.neutron [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.473 2 INFO nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.500 2 DEBUG nova.compute.manager [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.557 2 INFO nova.virt.block_device [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Booting with blank volume at /dev/vda#033[00m
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:03 np0005466031 nova_compute[235803]: 2025-10-02 12:54:03.684 2 DEBUG nova.policy [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00be63ea13c84e3d9419078865524099', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:54:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:04 np0005466031 nova_compute[235803]: 2025-10-02 12:54:04.514 2 DEBUG nova.network.neutron [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Successfully created port: 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:54:05 np0005466031 nova_compute[235803]: 2025-10-02 12:54:05.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:54:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2143275758' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:54:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:54:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2143275758' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:54:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:05.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:54:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:05.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:54:05 np0005466031 nova_compute[235803]: 2025-10-02 12:54:05.985 2 DEBUG nova.network.neutron [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Successfully updated port: 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:54:05 np0005466031 nova_compute[235803]: 2025-10-02 12:54:05.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:06 np0005466031 nova_compute[235803]: 2025-10-02 12:54:06.006 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:06 np0005466031 nova_compute[235803]: 2025-10-02 12:54:06.007 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquired lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:06 np0005466031 nova_compute[235803]: 2025-10-02 12:54:06.007 2 DEBUG nova.network.neutron [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:54:06 np0005466031 nova_compute[235803]: 2025-10-02 12:54:06.161 2 DEBUG nova.network.neutron [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:54:06 np0005466031 nova_compute[235803]: 2025-10-02 12:54:06.341 2 DEBUG nova.compute.manager [req-a99ac6bf-1005-4769-9479-2463f2cacfd8 req-06882b4a-7d71-4b4a-a75f-91667fd097ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-changed-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:06 np0005466031 nova_compute[235803]: 2025-10-02 12:54:06.342 2 DEBUG nova.compute.manager [req-a99ac6bf-1005-4769-9479-2463f2cacfd8 req-06882b4a-7d71-4b4a-a75f-91667fd097ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Refreshing instance network info cache due to event network-changed-5b3c9d9a-c3cd-49a5-b917-49aefaefd249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:54:06 np0005466031 nova_compute[235803]: 2025-10-02 12:54:06.342 2 DEBUG oslo_concurrency.lockutils [req-a99ac6bf-1005-4769-9479-2463f2cacfd8 req-06882b4a-7d71-4b4a-a75f-91667fd097ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:07.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:07.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.620 2 DEBUG nova.network.neutron [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Updating instance_info_cache with network_info: [{"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.653 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Releasing lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.654 2 DEBUG nova.compute.manager [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance network_info: |[{"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.656 2 DEBUG oslo_concurrency.lockutils [req-a99ac6bf-1005-4769-9479-2463f2cacfd8 req-06882b4a-7d71-4b4a-a75f-91667fd097ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.656 2 DEBUG nova.network.neutron [req-a99ac6bf-1005-4769-9479-2463f2cacfd8 req-06882b4a-7d71-4b4a-a75f-91667fd097ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Refreshing network info cache for port 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.831 2 DEBUG os_brick.utils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.833 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.844 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.844 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[de640726-5ab9-4631-b856-69d80129d617]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.846 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.853 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.853 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[c926068d-64f0-4c42-8953-a40249d31866]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.854 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.861 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.862 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[529920ae-833c-4891-bb94-63241312345d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.863 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[22943a25-dfbb-4262-bfd3-12fbca599d37]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.863 2 DEBUG oslo_concurrency.processutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.901 2 DEBUG oslo_concurrency.processutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.903 2 DEBUG os_brick.initiator.connectors.lightos [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.904 2 DEBUG os_brick.initiator.connectors.lightos [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.904 2 DEBUG os_brick.initiator.connectors.lightos [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.904 2 DEBUG os_brick.utils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:54:07 np0005466031 nova_compute[235803]: 2025-10-02 12:54:07.905 2 DEBUG nova.virt.block_device [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Updating existing volume attachment record: 370ada91-e18a-4022-9d41-8c5758765ce2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:54:08 np0005466031 nova_compute[235803]: 2025-10-02 12:54:08.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:08 np0005466031 nova_compute[235803]: 2025-10-02 12:54:08.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:08 np0005466031 nova_compute[235803]: 2025-10-02 12:54:08.657 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:08 np0005466031 nova_compute[235803]: 2025-10-02 12:54:08.658 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:08 np0005466031 nova_compute[235803]: 2025-10-02 12:54:08.658 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:08 np0005466031 nova_compute[235803]: 2025-10-02 12:54:08.658 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:54:08 np0005466031 nova_compute[235803]: 2025-10-02 12:54:08.658 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.011 2 DEBUG nova.compute.manager [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.013 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.014 2 INFO nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Creating image(s)#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.014 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.015 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Ensure instance console log exists: /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.015 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.015 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.016 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.018 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Start _get_guest_xml network_info=[{"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f80c06cb-0550-4a66-a7bd-bba5ed3d622f', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f80c06cb-0550-4a66-a7bd-bba5ed3d622f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '58e2a72f-a2b9-41a0-9c67-607e978d8b88', 'attached_at': '', 'detached_at': '', 'volume_id': 'f80c06cb-0550-4a66-a7bd-bba5ed3d622f', 'serial': 'f80c06cb-0550-4a66-a7bd-bba5ed3d622f'}, 'attachment_id': '370ada91-e18a-4022-9d41-8c5758765ce2', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.023 2 WARNING nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.027 2 DEBUG nova.virt.libvirt.host [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.027 2 DEBUG nova.virt.libvirt.host [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.030 2 DEBUG nova.virt.libvirt.host [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.031 2 DEBUG nova.virt.libvirt.host [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.032 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.032 2 DEBUG nova.virt.hardware [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.032 2 DEBUG nova.virt.hardware [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.032 2 DEBUG nova.virt.hardware [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.033 2 DEBUG nova.virt.hardware [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.033 2 DEBUG nova.virt.hardware [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.033 2 DEBUG nova.virt.hardware [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.033 2 DEBUG nova.virt.hardware [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.034 2 DEBUG nova.virt.hardware [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.034 2 DEBUG nova.virt.hardware [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.034 2 DEBUG nova.virt.hardware [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.034 2 DEBUG nova.virt.hardware [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.063 2 DEBUG nova.storage.rbd_utils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.067 2 DEBUG oslo_concurrency.processutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3621289055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.113 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.286 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.288 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4251MB free_disk=20.90142822265625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.288 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.289 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:09.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.372 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 58e2a72f-a2b9-41a0-9c67-607e978d8b88 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.373 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.373 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.395 2 DEBUG nova.network.neutron [req-a99ac6bf-1005-4769-9479-2463f2cacfd8 req-06882b4a-7d71-4b4a-a75f-91667fd097ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Updated VIF entry in instance network info cache for port 5b3c9d9a-c3cd-49a5-b917-49aefaefd249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.397 2 DEBUG nova.network.neutron [req-a99ac6bf-1005-4769-9479-2463f2cacfd8 req-06882b4a-7d71-4b4a-a75f-91667fd097ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Updating instance_info_cache with network_info: [{"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.415 2 DEBUG oslo_concurrency.lockutils [req-a99ac6bf-1005-4769-9479-2463f2cacfd8 req-06882b4a-7d71-4b4a-a75f-91667fd097ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.422 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:09.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:54:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2341730290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.586 2 DEBUG oslo_concurrency.processutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.658 2 DEBUG nova.virt.libvirt.vif [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:54:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-996317369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-996317369',id=147,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-dhn09nvd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVolumeStable
RescueTest-1641553658-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:03Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=58e2a72f-a2b9-41a0-9c67-607e978d8b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.658 2 DEBUG nova.network.os_vif_util [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.659 2 DEBUG nova.network.os_vif_util [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:eb:77,bridge_name='br-int',has_traffic_filtering=True,id=5b3c9d9a-c3cd-49a5-b917-49aefaefd249,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b3c9d9a-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.660 2 DEBUG nova.objects.instance [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.691 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  <uuid>58e2a72f-a2b9-41a0-9c67-607e978d8b88</uuid>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  <name>instance-00000093</name>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-996317369</nova:name>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:54:09</nova:creationTime>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <nova:user uuid="00be63ea13c84e3d9419078865524099">tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member</nova:user>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <nova:project uuid="cb2da64acac041cb8d38c3b43fe4dbe9">tempest-ServerBootFromVolumeStableRescueTest-1641553658</nova:project>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <nova:port uuid="5b3c9d9a-c3cd-49a5-b917-49aefaefd249">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <entry name="serial">58e2a72f-a2b9-41a0-9c67-607e978d8b88</entry>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <entry name="uuid">58e2a72f-a2b9-41a0-9c67-607e978d8b88</entry>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.config">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-f80c06cb-0550-4a66-a7bd-bba5ed3d622f">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <serial>f80c06cb-0550-4a66-a7bd-bba5ed3d622f</serial>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:ea:eb:77"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <target dev="tap5b3c9d9a-c3"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/console.log" append="off"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:54:09 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:54:09 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:54:09 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:54:09 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.692 2 DEBUG nova.compute.manager [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Preparing to wait for external event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.693 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.693 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.693 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.694 2 DEBUG nova.virt.libvirt.vif [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:54:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-996317369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-996317369',id=147,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-dhn09nvd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:03Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=58e2a72f-a2b9-41a0-9c67-607e978d8b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.694 2 DEBUG nova.network.os_vif_util [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.695 2 DEBUG nova.network.os_vif_util [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:eb:77,bridge_name='br-int',has_traffic_filtering=True,id=5b3c9d9a-c3cd-49a5-b917-49aefaefd249,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b3c9d9a-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.695 2 DEBUG os_vif [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:eb:77,bridge_name='br-int',has_traffic_filtering=True,id=5b3c9d9a-c3cd-49a5-b917-49aefaefd249,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b3c9d9a-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.697 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.697 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.700 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b3c9d9a-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.700 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b3c9d9a-c3, col_values=(('external_ids', {'iface-id': '5b3c9d9a-c3cd-49a5-b917-49aefaefd249', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:eb:77', 'vm-uuid': '58e2a72f-a2b9-41a0-9c67-607e978d8b88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:09 np0005466031 NetworkManager[44907]: <info>  [1759409649.7039] manager: (tap5b3c9d9a-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.711 2 INFO os_vif [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:eb:77,bridge_name='br-int',has_traffic_filtering=True,id=5b3c9d9a-c3cd-49a5-b917-49aefaefd249,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b3c9d9a-c3')#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.775 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.775 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.776 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No VIF found with MAC fa:16:3e:ea:eb:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.776 2 INFO nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Using config drive#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.810 2 DEBUG nova.storage.rbd_utils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4040131513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.904 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.913 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.941 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.967 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:54:09 np0005466031 nova_compute[235803]: 2025-10-02 12:54:09.967 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.353 2 INFO nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Creating config drive at /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/disk.config#033[00m
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.360 2 DEBUG oslo_concurrency.processutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvzzxpids execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.499 2 DEBUG oslo_concurrency.processutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvzzxpids" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.528 2 DEBUG nova.storage.rbd_utils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.532 2 DEBUG oslo_concurrency.processutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/disk.config 58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.698 2 DEBUG oslo_concurrency.processutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/disk.config 58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.699 2 INFO nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Deleting local config drive /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/disk.config because it was imported into RBD.#033[00m
Oct  2 08:54:10 np0005466031 kernel: tap5b3c9d9a-c3: entered promiscuous mode
Oct  2 08:54:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:10Z|00530|binding|INFO|Claiming lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for this chassis.
Oct  2 08:54:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:10Z|00531|binding|INFO|5b3c9d9a-c3cd-49a5-b917-49aefaefd249: Claiming fa:16:3e:ea:eb:77 10.100.0.11
Oct  2 08:54:10 np0005466031 NetworkManager[44907]: <info>  [1759409650.7469] manager: (tap5b3c9d9a-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.756 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:eb:77 10.100.0.11'], port_security=['fa:16:3e:ea:eb:77 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '58e2a72f-a2b9-41a0-9c67-607e978d8b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=5b3c9d9a-c3cd-49a5-b917-49aefaefd249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.758 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 bound to our chassis#033[00m
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.759 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2#033[00m
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.772 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2c306bcf-2f04-46f8-a7ab-acc8d4beb68d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.773 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7b0ec11e-01 in ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.775 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7b0ec11e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.775 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[52cf4bc6-74e3-4f51-bf37-5f25e6fa9a4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.777 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4a92f0a3-5ff0-44c3-8ad6-db1b3456c133]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:10 np0005466031 systemd-machined[192227]: New machine qemu-62-instance-00000093.
Oct  2 08:54:10 np0005466031 systemd-udevd[299923]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.789 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[d7201ca5-6223-419e-87f4-bab2d9957de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:10 np0005466031 NetworkManager[44907]: <info>  [1759409650.7928] device (tap5b3c9d9a-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:54:10 np0005466031 NetworkManager[44907]: <info>  [1759409650.7936] device (tap5b3c9d9a-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.811 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0165b8eb-d14a-463a-8c26-8a965f3ce92b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:10 np0005466031 systemd[1]: Started Virtual Machine qemu-62-instance-00000093.
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:10Z|00532|binding|INFO|Setting lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 ovn-installed in OVS
Oct  2 08:54:10 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:10Z|00533|binding|INFO|Setting lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 up in Southbound
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.847 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1b5927-384d-47ad-8439-352a289545ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:10 np0005466031 NetworkManager[44907]: <info>  [1759409650.8536] manager: (tap7b0ec11e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/243)
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.853 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[95506997-7c67-4637-bb02-45209cb5142d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:10 np0005466031 systemd-udevd[299926]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.883 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d45f41bc-81b4-43aa-a8b2-40c8cead7149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.887 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ccc282-ea07-4b81-ac84-f3c0b134ed8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:10 np0005466031 NetworkManager[44907]: <info>  [1759409650.9132] device (tap7b0ec11e-00): carrier: link connected
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.918 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[54fbce12-de2d-4733-9b7a-b13ecf5f6238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.936 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409635.9330842, 184f3992-03ad-4908-aeb5-b14e562fa846 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.937 2 INFO nova.compute.manager [-] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.940 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb46f42-8cbe-421a-ac8b-d03e384e64e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750650, 'reachable_time': 26936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299955, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.957 2 DEBUG nova.compute.manager [None req-f3466664-1d71-4855-bee2-db30f40609b6 - - - - - -] [instance: 184f3992-03ad-4908-aeb5-b14e562fa846] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.964 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3db29a-8072-486d-8d30-18b5e11904b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:f76'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750650, 'tstamp': 750650}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299956, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:10 np0005466031 nova_compute[235803]: 2025-10-02 12:54:10.968 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:10.984 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b229016f-a840-4dac-b9a4-172662d4efff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750650, 'reachable_time': 26936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299957, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:11.023 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3bb8cf-0cb7-464a-a73d-05289dcd3fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.063 2 DEBUG nova.compute.manager [req-f9b56ff9-8d4c-4f5d-a12c-593fb704df97 req-9df5ca79-5c97-450a-bc4e-6d6b0dc72161 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.063 2 DEBUG oslo_concurrency.lockutils [req-f9b56ff9-8d4c-4f5d-a12c-593fb704df97 req-9df5ca79-5c97-450a-bc4e-6d6b0dc72161 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.064 2 DEBUG oslo_concurrency.lockutils [req-f9b56ff9-8d4c-4f5d-a12c-593fb704df97 req-9df5ca79-5c97-450a-bc4e-6d6b0dc72161 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.064 2 DEBUG oslo_concurrency.lockutils [req-f9b56ff9-8d4c-4f5d-a12c-593fb704df97 req-9df5ca79-5c97-450a-bc4e-6d6b0dc72161 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.064 2 DEBUG nova.compute.manager [req-f9b56ff9-8d4c-4f5d-a12c-593fb704df97 req-9df5ca79-5c97-450a-bc4e-6d6b0dc72161 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Processing event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:11.092 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cc477d86-5c96-4f87-b7e6-9a2f78686214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:11.093 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:11.093 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:11.094 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0ec11e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:11 np0005466031 NetworkManager[44907]: <info>  [1759409651.0961] manager: (tap7b0ec11e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Oct  2 08:54:11 np0005466031 kernel: tap7b0ec11e-00: entered promiscuous mode
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:11.097 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0ec11e-00, col_values=(('external_ids', {'iface-id': '4551f74b-5a9c-4479-827a-bb210e8a0b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:11 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:11Z|00534|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:11.121 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:11.123 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4ad892-f6a9-4887-9e0a-3592dfbaed5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:11.123 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-7b0ec11e-03f1-4b98-ac7a-50b364660bd2
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.pid.haproxy
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 7b0ec11e-03f1-4b98-ac7a-50b364660bd2
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:54:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:11.124 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'env', 'PROCESS_TAG=haproxy-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:54:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:54:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:11.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:54:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:54:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:11.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:54:11 np0005466031 podman[300029]: 2025-10-02 12:54:11.532792184 +0000 UTC m=+0.065105847 container create 705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:54:11 np0005466031 systemd[1]: Started libpod-conmon-705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997.scope.
Oct  2 08:54:11 np0005466031 podman[300029]: 2025-10-02 12:54:11.493857792 +0000 UTC m=+0.026171485 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:54:11 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:54:11 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7345e51521a488e7c614c740c1c02e9484c862a55c031bae125252b5b527b3b4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.657 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:54:11 np0005466031 podman[300029]: 2025-10-02 12:54:11.729336576 +0000 UTC m=+0.261650249 container init 705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:54:11 np0005466031 podman[300029]: 2025-10-02 12:54:11.73539858 +0000 UTC m=+0.267712243 container start 705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:54:11 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[300044]: [NOTICE]   (300048) : New worker (300050) forked
Oct  2 08:54:11 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[300044]: [NOTICE]   (300048) : Loading success.
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.873 2 DEBUG nova.compute.manager [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.875 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409651.872794, 58e2a72f-a2b9-41a0-9c67-607e978d8b88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.876 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] VM Started (Lifecycle Event)#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.878 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.881 2 INFO nova.virt.libvirt.driver [-] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance spawned successfully.#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.882 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.903 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.907 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.907 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.907 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.908 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.908 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.909 2 DEBUG nova.virt.libvirt.driver [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.913 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.943 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.943 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409651.8743901, 58e2a72f-a2b9-41a0-9c67-607e978d8b88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.943 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.971 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.975 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409651.878119, 58e2a72f-a2b9-41a0-9c67-607e978d8b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.976 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:54:11 np0005466031 nova_compute[235803]: 2025-10-02 12:54:11.997 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:12 np0005466031 nova_compute[235803]: 2025-10-02 12:54:12.000 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:54:12 np0005466031 nova_compute[235803]: 2025-10-02 12:54:12.005 2 INFO nova.compute.manager [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Took 2.99 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:54:12 np0005466031 nova_compute[235803]: 2025-10-02 12:54:12.006 2 DEBUG nova.compute.manager [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:12 np0005466031 nova_compute[235803]: 2025-10-02 12:54:12.037 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:54:12 np0005466031 nova_compute[235803]: 2025-10-02 12:54:12.069 2 INFO nova.compute.manager [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Took 9.69 seconds to build instance.#033[00m
Oct  2 08:54:12 np0005466031 nova_compute[235803]: 2025-10-02 12:54:12.084 2 DEBUG oslo_concurrency.lockutils [None req-a944cabf-1fd4-484a-ab3d-e30a56325285 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:13 np0005466031 nova_compute[235803]: 2025-10-02 12:54:13.267 2 DEBUG nova.compute.manager [req-3889dc29-a092-475c-8ed5-3bfdf9b50b0e req-e2003b08-95f8-44f1-a421-8becd2875b00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:13 np0005466031 nova_compute[235803]: 2025-10-02 12:54:13.268 2 DEBUG oslo_concurrency.lockutils [req-3889dc29-a092-475c-8ed5-3bfdf9b50b0e req-e2003b08-95f8-44f1-a421-8becd2875b00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:13 np0005466031 nova_compute[235803]: 2025-10-02 12:54:13.268 2 DEBUG oslo_concurrency.lockutils [req-3889dc29-a092-475c-8ed5-3bfdf9b50b0e req-e2003b08-95f8-44f1-a421-8becd2875b00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:13 np0005466031 nova_compute[235803]: 2025-10-02 12:54:13.268 2 DEBUG oslo_concurrency.lockutils [req-3889dc29-a092-475c-8ed5-3bfdf9b50b0e req-e2003b08-95f8-44f1-a421-8becd2875b00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:13 np0005466031 nova_compute[235803]: 2025-10-02 12:54:13.268 2 DEBUG nova.compute.manager [req-3889dc29-a092-475c-8ed5-3bfdf9b50b0e req-e2003b08-95f8-44f1-a421-8becd2875b00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] No waiting events found dispatching network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:13 np0005466031 nova_compute[235803]: 2025-10-02 12:54:13.268 2 WARNING nova.compute.manager [req-3889dc29-a092-475c-8ed5-3bfdf9b50b0e req-e2003b08-95f8-44f1-a421-8becd2875b00 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received unexpected event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:54:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:13.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:13.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:13 np0005466031 nova_compute[235803]: 2025-10-02 12:54:13.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:13 np0005466031 nova_compute[235803]: 2025-10-02 12:54:13.950 2 INFO nova.compute.manager [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Rescuing#033[00m
Oct  2 08:54:13 np0005466031 nova_compute[235803]: 2025-10-02 12:54:13.950 2 DEBUG oslo_concurrency.lockutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:13 np0005466031 nova_compute[235803]: 2025-10-02 12:54:13.951 2 DEBUG oslo_concurrency.lockutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquired lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:13 np0005466031 nova_compute[235803]: 2025-10-02 12:54:13.951 2 DEBUG nova.network.neutron [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:54:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:14 np0005466031 nova_compute[235803]: 2025-10-02 12:54:14.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:15 np0005466031 nova_compute[235803]: 2025-10-02 12:54:15.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:15.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:15.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:15 np0005466031 nova_compute[235803]: 2025-10-02 12:54:15.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:15 np0005466031 nova_compute[235803]: 2025-10-02 12:54:15.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:54:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:17.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:17.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:18 np0005466031 nova_compute[235803]: 2025-10-02 12:54:18.131 2 DEBUG nova.network.neutron [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Updating instance_info_cache with network_info: [{"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:18 np0005466031 nova_compute[235803]: 2025-10-02 12:54:18.161 2 DEBUG oslo_concurrency.lockutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Releasing lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:18 np0005466031 nova_compute[235803]: 2025-10-02 12:54:18.439 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:54:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:19.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:19.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:19 np0005466031 podman[300063]: 2025-10-02 12:54:19.642345982 +0000 UTC m=+0.067550167 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:54:19 np0005466031 podman[300064]: 2025-10-02 12:54:19.681013586 +0000 UTC m=+0.106977943 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:54:19 np0005466031 nova_compute[235803]: 2025-10-02 12:54:19.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:20 np0005466031 nova_compute[235803]: 2025-10-02 12:54:20.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:54:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:21.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:54:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:21.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:23.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:23.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:54:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:54:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:54:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:24 np0005466031 nova_compute[235803]: 2025-10-02 12:54:24.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:25 np0005466031 nova_compute[235803]: 2025-10-02 12:54:25.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:25.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:54:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:25.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:54:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:25.864 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:25.864 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:25.865 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:26 np0005466031 podman[300270]: 2025-10-02 12:54:26.320398027 +0000 UTC m=+0.051674169 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:54:26 np0005466031 podman[300269]: 2025-10-02 12:54:26.330613402 +0000 UTC m=+0.065840438 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:54:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:27.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:54:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:27.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:54:28 np0005466031 nova_compute[235803]: 2025-10-02 12:54:28.485 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:54:28 np0005466031 nova_compute[235803]: 2025-10-02 12:54:28.632 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:29.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:54:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:29.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:54:29 np0005466031 nova_compute[235803]: 2025-10-02 12:54:29.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:30 np0005466031 nova_compute[235803]: 2025-10-02 12:54:30.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:31.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:31.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:33.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:33.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:54:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:54:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:34 np0005466031 nova_compute[235803]: 2025-10-02 12:54:34.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:35 np0005466031 nova_compute[235803]: 2025-10-02 12:54:35.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:35.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:35.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:54:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:37.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:54:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:37.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:38 np0005466031 nova_compute[235803]: 2025-10-02 12:54:38.491 2 DEBUG nova.compute.manager [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Oct  2 08:54:38 np0005466031 nova_compute[235803]: 2025-10-02 12:54:38.576 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:38 np0005466031 nova_compute[235803]: 2025-10-02 12:54:38.577 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:38 np0005466031 nova_compute[235803]: 2025-10-02 12:54:38.601 2 DEBUG nova.objects.instance [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'pci_requests' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:38 np0005466031 nova_compute[235803]: 2025-10-02 12:54:38.617 2 DEBUG nova.virt.hardware [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:54:38 np0005466031 nova_compute[235803]: 2025-10-02 12:54:38.618 2 INFO nova.compute.claims [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:54:38 np0005466031 nova_compute[235803]: 2025-10-02 12:54:38.618 2 DEBUG nova.objects.instance [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'resources' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:38 np0005466031 nova_compute[235803]: 2025-10-02 12:54:38.631 2 DEBUG nova.objects.instance [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'numa_topology' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:38 np0005466031 nova_compute[235803]: 2025-10-02 12:54:38.645 2 DEBUG nova.objects.instance [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:38 np0005466031 nova_compute[235803]: 2025-10-02 12:54:38.697 2 INFO nova.compute.resource_tracker [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating resource usage from migration 967e2439-1b81-4fe0-baf4-48b7e3d12a87#033[00m
Oct  2 08:54:38 np0005466031 nova_compute[235803]: 2025-10-02 12:54:38.697 2 DEBUG nova.compute.resource_tracker [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Starting to track incoming migration 967e2439-1b81-4fe0-baf4-48b7e3d12a87 with flavor 99c52872-4e37-4be3-86cc-757b8f375aa8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct  2 08:54:38 np0005466031 nova_compute[235803]: 2025-10-02 12:54:38.771 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1167518381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:39 np0005466031 nova_compute[235803]: 2025-10-02 12:54:39.237 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:39 np0005466031 nova_compute[235803]: 2025-10-02 12:54:39.243 2 DEBUG nova.compute.provider_tree [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:54:39 np0005466031 nova_compute[235803]: 2025-10-02 12:54:39.257 2 DEBUG nova.scheduler.client.report [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:54:39 np0005466031 nova_compute[235803]: 2025-10-02 12:54:39.280 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:39 np0005466031 nova_compute[235803]: 2025-10-02 12:54:39.280 2 INFO nova.compute.manager [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Migrating#033[00m
Oct  2 08:54:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:39.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:39.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:39 np0005466031 nova_compute[235803]: 2025-10-02 12:54:39.529 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:54:39 np0005466031 nova_compute[235803]: 2025-10-02 12:54:39.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:40 np0005466031 nova_compute[235803]: 2025-10-02 12:54:40.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:41.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:41.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:41 np0005466031 systemd[1]: Created slice User Slice of UID 42436.
Oct  2 08:54:41 np0005466031 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct  2 08:54:41 np0005466031 systemd-logind[786]: New session 62 of user nova.
Oct  2 08:54:41 np0005466031 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct  2 08:54:41 np0005466031 systemd[1]: Starting User Manager for UID 42436...
Oct  2 08:54:41 np0005466031 systemd[300415]: Queued start job for default target Main User Target.
Oct  2 08:54:41 np0005466031 systemd[300415]: Created slice User Application Slice.
Oct  2 08:54:41 np0005466031 systemd[300415]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:54:41 np0005466031 systemd[300415]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 08:54:41 np0005466031 systemd[300415]: Reached target Paths.
Oct  2 08:54:41 np0005466031 systemd[300415]: Reached target Timers.
Oct  2 08:54:41 np0005466031 systemd[300415]: Starting D-Bus User Message Bus Socket...
Oct  2 08:54:41 np0005466031 systemd[300415]: Starting Create User's Volatile Files and Directories...
Oct  2 08:54:41 np0005466031 systemd[300415]: Finished Create User's Volatile Files and Directories.
Oct  2 08:54:41 np0005466031 systemd[300415]: Listening on D-Bus User Message Bus Socket.
Oct  2 08:54:41 np0005466031 systemd[300415]: Reached target Sockets.
Oct  2 08:54:41 np0005466031 systemd[300415]: Reached target Basic System.
Oct  2 08:54:41 np0005466031 systemd[300415]: Reached target Main User Target.
Oct  2 08:54:41 np0005466031 systemd[300415]: Startup finished in 146ms.
Oct  2 08:54:41 np0005466031 systemd[1]: Started User Manager for UID 42436.
Oct  2 08:54:41 np0005466031 systemd[1]: Started Session 62 of User nova.
Oct  2 08:54:42 np0005466031 systemd[1]: session-62.scope: Deactivated successfully.
Oct  2 08:54:42 np0005466031 systemd-logind[786]: Session 62 logged out. Waiting for processes to exit.
Oct  2 08:54:42 np0005466031 systemd-logind[786]: Removed session 62.
Oct  2 08:54:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:42.120 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:54:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:42.122 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:54:42 np0005466031 nova_compute[235803]: 2025-10-02 12:54:42.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:42 np0005466031 systemd-logind[786]: New session 64 of user nova.
Oct  2 08:54:42 np0005466031 systemd[1]: Started Session 64 of User nova.
Oct  2 08:54:42 np0005466031 systemd[1]: session-64.scope: Deactivated successfully.
Oct  2 08:54:42 np0005466031 systemd-logind[786]: Session 64 logged out. Waiting for processes to exit.
Oct  2 08:54:42 np0005466031 systemd-logind[786]: Removed session 64.
Oct  2 08:54:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:54:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:43.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:54:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:54:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:43.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:54:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:44 np0005466031 nova_compute[235803]: 2025-10-02 12:54:44.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:44 np0005466031 nova_compute[235803]: 2025-10-02 12:54:44.877 2 DEBUG nova.compute.manager [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:44 np0005466031 nova_compute[235803]: 2025-10-02 12:54:44.878 2 DEBUG oslo_concurrency.lockutils [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:44 np0005466031 nova_compute[235803]: 2025-10-02 12:54:44.878 2 DEBUG oslo_concurrency.lockutils [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:44 np0005466031 nova_compute[235803]: 2025-10-02 12:54:44.878 2 DEBUG oslo_concurrency.lockutils [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:44 np0005466031 nova_compute[235803]: 2025-10-02 12:54:44.878 2 DEBUG nova.compute.manager [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:44 np0005466031 nova_compute[235803]: 2025-10-02 12:54:44.878 2 WARNING nova.compute.manager [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state resize_migrating.#033[00m
Oct  2 08:54:45 np0005466031 nova_compute[235803]: 2025-10-02 12:54:45.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:45.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:54:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:45.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:54:46 np0005466031 nova_compute[235803]: 2025-10-02 12:54:46.208 2 INFO nova.network.neutron [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating port 20204810-ff47-450e-80e5-23d03b435455 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 08:54:46 np0005466031 nova_compute[235803]: 2025-10-02 12:54:46.720 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:46 np0005466031 nova_compute[235803]: 2025-10-02 12:54:46.720 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:46 np0005466031 nova_compute[235803]: 2025-10-02 12:54:46.721 2 DEBUG nova.network.neutron [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:54:47 np0005466031 nova_compute[235803]: 2025-10-02 12:54:47.026 2 DEBUG nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:47 np0005466031 nova_compute[235803]: 2025-10-02 12:54:47.027 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:47 np0005466031 nova_compute[235803]: 2025-10-02 12:54:47.027 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:47 np0005466031 nova_compute[235803]: 2025-10-02 12:54:47.027 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:47 np0005466031 nova_compute[235803]: 2025-10-02 12:54:47.028 2 DEBUG nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:47 np0005466031 nova_compute[235803]: 2025-10-02 12:54:47.028 2 WARNING nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:54:47 np0005466031 nova_compute[235803]: 2025-10-02 12:54:47.028 2 DEBUG nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-changed-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:47 np0005466031 nova_compute[235803]: 2025-10-02 12:54:47.028 2 DEBUG nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing instance network info cache due to event network-changed-20204810-ff47-450e-80e5-23d03b435455. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:54:47 np0005466031 nova_compute[235803]: 2025-10-02 12:54:47.029 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:47.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:47.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:48 np0005466031 nova_compute[235803]: 2025-10-02 12:54:48.154 2 DEBUG nova.network.neutron [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:48 np0005466031 nova_compute[235803]: 2025-10-02 12:54:48.180 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:48 np0005466031 nova_compute[235803]: 2025-10-02 12:54:48.185 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:48 np0005466031 nova_compute[235803]: 2025-10-02 12:54:48.185 2 DEBUG nova.network.neutron [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing network info cache for port 20204810-ff47-450e-80e5-23d03b435455 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:54:48 np0005466031 nova_compute[235803]: 2025-10-02 12:54:48.289 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct  2 08:54:48 np0005466031 nova_compute[235803]: 2025-10-02 12:54:48.290 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:54:48 np0005466031 nova_compute[235803]: 2025-10-02 12:54:48.290 2 INFO nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Creating image(s)#033[00m
Oct  2 08:54:48 np0005466031 nova_compute[235803]: 2025-10-02 12:54:48.326 2 DEBUG nova.storage.rbd_utils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] creating snapshot(nova-resize) on rbd image(a1e0932b-16b6-46b9-8192-b89b91e91802_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:54:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:49.124 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e340 e340: 3 total, 3 up, 3 in
Oct  2 08:54:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:49.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.452 2 DEBUG nova.network.neutron [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated VIF entry in instance network info cache for port 20204810-ff47-450e-80e5-23d03b435455. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.453 2 DEBUG nova.network.neutron [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.475 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:49.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.722 2 DEBUG nova.objects.instance [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.822 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.822 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Ensure instance console log exists: /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.822 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.823 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.823 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.825 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Start _get_guest_xml network_info=[{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1767599231", "vif_mac": "fa:16:3e:5b:41:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.828 2 WARNING nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.832 2 DEBUG nova.virt.libvirt.host [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.832 2 DEBUG nova.virt.libvirt.host [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.835 2 DEBUG nova.virt.libvirt.host [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.835 2 DEBUG nova.virt.libvirt.host [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.836 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.836 2 DEBUG nova.virt.hardware [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.837 2 DEBUG nova.virt.hardware [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.837 2 DEBUG nova.virt.hardware [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.837 2 DEBUG nova.virt.hardware [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.837 2 DEBUG nova.virt.hardware [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.838 2 DEBUG nova.virt.hardware [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.838 2 DEBUG nova.virt.hardware [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.838 2 DEBUG nova.virt.hardware [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.838 2 DEBUG nova.virt.hardware [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.838 2 DEBUG nova.virt.hardware [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.839 2 DEBUG nova.virt.hardware [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.839 2 DEBUG nova.objects.instance [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:49 np0005466031 nova_compute[235803]: 2025-10-02 12:54:49.854 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:54:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3158757264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.352 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.390 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.580 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:54:50 np0005466031 podman[300624]: 2025-10-02 12:54:50.623394293 +0000 UTC m=+0.054648556 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 08:54:50 np0005466031 podman[300625]: 2025-10-02 12:54:50.651329528 +0000 UTC m=+0.081393576 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:54:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:54:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/121809429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.819 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.821 2 DEBUG nova.virt.libvirt.vif [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:45Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1767599231", "vif_mac": "fa:16:3e:5b:41:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.821 2 DEBUG nova.network.os_vif_util [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1767599231", "vif_mac": "fa:16:3e:5b:41:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.822 2 DEBUG nova.network.os_vif_util [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.825 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  <uuid>a1e0932b-16b6-46b9-8192-b89b91e91802</uuid>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  <name>instance-00000092</name>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1723654799</nova:name>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:54:49</nova:creationTime>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <nova:port uuid="20204810-ff47-450e-80e5-23d03b435455">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <entry name="serial">a1e0932b-16b6-46b9-8192-b89b91e91802</entry>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <entry name="uuid">a1e0932b-16b6-46b9-8192-b89b91e91802</entry>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/a1e0932b-16b6-46b9-8192-b89b91e91802_disk">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:5b:41:1c"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <target dev="tap20204810-ff"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/console.log" append="off"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:54:50 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:54:50 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:54:50 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:54:50 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.826 2 DEBUG nova.virt.libvirt.vif [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:45Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1767599231", "vif_mac": "fa:16:3e:5b:41:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.827 2 DEBUG nova.network.os_vif_util [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1767599231", "vif_mac": "fa:16:3e:5b:41:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.828 2 DEBUG nova.network.os_vif_util [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.828 2 DEBUG os_vif [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.829 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.830 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20204810-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20204810-ff, col_values=(('external_ids', {'iface-id': '20204810-ff47-450e-80e5-23d03b435455', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:41:1c', 'vm-uuid': 'a1e0932b-16b6-46b9-8192-b89b91e91802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:50 np0005466031 NetworkManager[44907]: <info>  [1759409690.8362] manager: (tap20204810-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:50 np0005466031 nova_compute[235803]: 2025-10-02 12:54:50.841 2 INFO os_vif [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff')#033[00m
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.028 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.028 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.029 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] No VIF found with MAC fa:16:3e:5b:41:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.029 2 INFO nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Using config drive#033[00m
Oct  2 08:54:51 np0005466031 kernel: tap20204810-ff: entered promiscuous mode
Oct  2 08:54:51 np0005466031 NetworkManager[44907]: <info>  [1759409691.1085] manager: (tap20204810-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:51Z|00535|binding|INFO|Claiming lport 20204810-ff47-450e-80e5-23d03b435455 for this chassis.
Oct  2 08:54:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:51Z|00536|binding|INFO|20204810-ff47-450e-80e5-23d03b435455: Claiming fa:16:3e:5b:41:1c 10.100.0.7
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:51 np0005466031 NetworkManager[44907]: <info>  [1759409691.1208] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Oct  2 08:54:51 np0005466031 NetworkManager[44907]: <info>  [1759409691.1213] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.125 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:41:1c 10.100.0.7'], port_security=['fa:16:3e:5b:41:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a1e0932b-16b6-46b9-8192-b89b91e91802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a39243cb-5286-4429-8879-7b4d535de128', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd1b59c5d-0681-456e-a8d1-b3629344f9b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf84aebf-21d8-4569-891a-417406561224, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=20204810-ff47-450e-80e5-23d03b435455) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.126 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 20204810-ff47-450e-80e5-23d03b435455 in datapath a39243cb-5286-4429-8879-7b4d535de128 bound to our chassis#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.127 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a39243cb-5286-4429-8879-7b4d535de128#033[00m
Oct  2 08:54:51 np0005466031 systemd-udevd[300704]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.139 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5d09cc8b-cd91-446c-b1ba-31e9e080b2bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.140 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa39243cb-51 in ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:54:51 np0005466031 systemd-machined[192227]: New machine qemu-63-instance-00000092.
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.142 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa39243cb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.143 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e1adbe6a-f41c-4f59-9c96-a67bd084b128]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.144 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9b84bf-d90b-4ce9-a2c3-63ce5fdc05a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 NetworkManager[44907]: <info>  [1759409691.1510] device (tap20204810-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:54:51 np0005466031 NetworkManager[44907]: <info>  [1759409691.1522] device (tap20204810-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.158 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb366c7-d4d6-4781-b5c3-f8dfdf947d46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 systemd[1]: Started Virtual Machine qemu-63-instance-00000092.
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.181 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc40c94-25aa-4ae2-bddc-7d06ea0c7f0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.211 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8d24af-2ba9-4c39-80a7-b94202ae3e26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.219 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d610dd-1fc1-4ea1-837d-89e2d2644cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 NetworkManager[44907]: <info>  [1759409691.2225] manager: (tapa39243cb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/249)
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:51Z|00537|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.249 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ca365b8a-e5ea-4d21-acf6-a3ea93d29665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.252 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[07b048d3-85ba-44c2-9730-6e6b9105140d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:51Z|00538|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 ovn-installed in OVS
Oct  2 08:54:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:51Z|00539|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 up in Southbound
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:51 np0005466031 NetworkManager[44907]: <info>  [1759409691.2757] device (tapa39243cb-50): carrier: link connected
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.282 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5cbc62-ad45-4925-b571-c9fc44eee3f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.298 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cec5b364-7fd0-4746-8163-f6bac1ba0bef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa39243cb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6a:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754686, 'reachable_time': 23432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300736, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.313 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8d88e1-6f8b-4b17-afc2-01dc72c0e237]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:6ac8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 754686, 'tstamp': 754686}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300737, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.328 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e441c3-062f-4375-beac-98cfc7fa2114]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa39243cb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6a:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754686, 'reachable_time': 23432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300738, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.355 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc2d1c0-32dd-4045-bc0c-5efea933358f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:54:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:51.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.413 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d41ff59b-1024-4c07-b912-755ee4612b7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.414 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa39243cb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.414 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.415 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa39243cb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:51 np0005466031 NetworkManager[44907]: <info>  [1759409691.4173] manager: (tapa39243cb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Oct  2 08:54:51 np0005466031 kernel: tapa39243cb-50: entered promiscuous mode
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.421 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa39243cb-50, col_values=(('external_ids', {'iface-id': '75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:51 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:51Z|00540|binding|INFO|Releasing lport 75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1 from this chassis (sb_readonly=0)
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.436 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.437 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[96f2d288-0fca-46fe-ac92-16bf7d14ac42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.438 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-a39243cb-5286-4429-8879-7b4d535de128
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID a39243cb-5286-4429-8879-7b4d535de128
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:54:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:51.438 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'env', 'PROCESS_TAG=haproxy-a39243cb-5286-4429-8879-7b4d535de128', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a39243cb-5286-4429-8879-7b4d535de128.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:54:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:54:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:51.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.617 2 DEBUG nova.compute.manager [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.617 2 DEBUG oslo_concurrency.lockutils [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.618 2 DEBUG oslo_concurrency.lockutils [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.618 2 DEBUG oslo_concurrency.lockutils [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.618 2 DEBUG nova.compute.manager [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:51 np0005466031 nova_compute[235803]: 2025-10-02 12:54:51.618 2 WARNING nova.compute.manager [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state resize_finish.#033[00m
Oct  2 08:54:51 np0005466031 podman[300795]: 2025-10-02 12:54:51.774552959 +0000 UTC m=+0.020998846 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:54:52 np0005466031 systemd[1]: Stopping User Manager for UID 42436...
Oct  2 08:54:52 np0005466031 systemd[300415]: Activating special unit Exit the Session...
Oct  2 08:54:52 np0005466031 systemd[300415]: Stopped target Main User Target.
Oct  2 08:54:52 np0005466031 systemd[300415]: Stopped target Basic System.
Oct  2 08:54:52 np0005466031 systemd[300415]: Stopped target Paths.
Oct  2 08:54:52 np0005466031 systemd[300415]: Stopped target Sockets.
Oct  2 08:54:52 np0005466031 systemd[300415]: Stopped target Timers.
Oct  2 08:54:52 np0005466031 systemd[300415]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:54:52 np0005466031 systemd[300415]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 08:54:52 np0005466031 systemd[300415]: Closed D-Bus User Message Bus Socket.
Oct  2 08:54:52 np0005466031 systemd[300415]: Stopped Create User's Volatile Files and Directories.
Oct  2 08:54:52 np0005466031 systemd[300415]: Removed slice User Application Slice.
Oct  2 08:54:52 np0005466031 systemd[300415]: Reached target Shutdown.
Oct  2 08:54:52 np0005466031 systemd[300415]: Finished Exit the Session.
Oct  2 08:54:52 np0005466031 systemd[300415]: Reached target Exit the Session.
Oct  2 08:54:52 np0005466031 systemd[1]: user@42436.service: Deactivated successfully.
Oct  2 08:54:52 np0005466031 systemd[1]: Stopped User Manager for UID 42436.
Oct  2 08:54:52 np0005466031 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct  2 08:54:52 np0005466031 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct  2 08:54:52 np0005466031 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct  2 08:54:52 np0005466031 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct  2 08:54:52 np0005466031 systemd[1]: Removed slice User Slice of UID 42436.
Oct  2 08:54:52 np0005466031 podman[300795]: 2025-10-02 12:54:52.403293184 +0000 UTC m=+0.649739051 container create 9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:54:52 np0005466031 systemd[1]: Started libpod-conmon-9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990.scope.
Oct  2 08:54:52 np0005466031 nova_compute[235803]: 2025-10-02 12:54:52.470 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409692.4694817, a1e0932b-16b6-46b9-8192-b89b91e91802 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:52 np0005466031 nova_compute[235803]: 2025-10-02 12:54:52.471 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:54:52 np0005466031 nova_compute[235803]: 2025-10-02 12:54:52.473 2 DEBUG nova.compute.manager [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:54:52 np0005466031 nova_compute[235803]: 2025-10-02 12:54:52.476 2 INFO nova.virt.libvirt.driver [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance running successfully.#033[00m
Oct  2 08:54:52 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:54:52 np0005466031 virtqemud[235323]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:54:52 np0005466031 nova_compute[235803]: 2025-10-02 12:54:52.478 2 DEBUG nova.virt.libvirt.guest [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 08:54:52 np0005466031 nova_compute[235803]: 2025-10-02 12:54:52.479 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Oct  2 08:54:52 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc795577c4f7d70a7aac7d54b95d0340a6cf197c39d0816e6756c9fa1634ac01/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:54:52 np0005466031 nova_compute[235803]: 2025-10-02 12:54:52.494 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:52 np0005466031 nova_compute[235803]: 2025-10-02 12:54:52.497 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:54:52 np0005466031 nova_compute[235803]: 2025-10-02 12:54:52.540 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 08:54:52 np0005466031 nova_compute[235803]: 2025-10-02 12:54:52.540 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409692.4736726, a1e0932b-16b6-46b9-8192-b89b91e91802 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:52 np0005466031 nova_compute[235803]: 2025-10-02 12:54:52.541 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Started (Lifecycle Event)#033[00m
Oct  2 08:54:52 np0005466031 nova_compute[235803]: 2025-10-02 12:54:52.582 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:52 np0005466031 nova_compute[235803]: 2025-10-02 12:54:52.588 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:54:52 np0005466031 podman[300795]: 2025-10-02 12:54:52.671540883 +0000 UTC m=+0.917986760 container init 9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:54:52 np0005466031 podman[300795]: 2025-10-02 12:54:52.677529306 +0000 UTC m=+0.923975183 container start 9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:54:52 np0005466031 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[300829]: [NOTICE]   (300833) : New worker (300835) forked
Oct  2 08:54:52 np0005466031 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[300829]: [NOTICE]   (300833) : Loading success.
Oct  2 08:54:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:53.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:53.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:53 np0005466031 nova_compute[235803]: 2025-10-02 12:54:53.931 2 DEBUG nova.compute.manager [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:53 np0005466031 nova_compute[235803]: 2025-10-02 12:54:53.931 2 DEBUG oslo_concurrency.lockutils [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:53 np0005466031 nova_compute[235803]: 2025-10-02 12:54:53.932 2 DEBUG oslo_concurrency.lockutils [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:53 np0005466031 nova_compute[235803]: 2025-10-02 12:54:53.932 2 DEBUG oslo_concurrency.lockutils [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:53 np0005466031 nova_compute[235803]: 2025-10-02 12:54:53.932 2 DEBUG nova.compute.manager [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:53 np0005466031 nova_compute[235803]: 2025-10-02 12:54:53.932 2 WARNING nova.compute.manager [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:54:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:54 np0005466031 nova_compute[235803]: 2025-10-02 12:54:54.694 2 DEBUG nova.network.neutron [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Port 20204810-ff47-450e-80e5-23d03b435455 binding to destination host compute-2.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Oct  2 08:54:54 np0005466031 nova_compute[235803]: 2025-10-02 12:54:54.695 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:54 np0005466031 nova_compute[235803]: 2025-10-02 12:54:54.695 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:54 np0005466031 nova_compute[235803]: 2025-10-02 12:54:54.696 2 DEBUG nova.network.neutron [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:54:55 np0005466031 nova_compute[235803]: 2025-10-02 12:54:55.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:55.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:55.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:55 np0005466031 nova_compute[235803]: 2025-10-02 12:54:55.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:55 np0005466031 nova_compute[235803]: 2025-10-02 12:54:55.956 2 DEBUG nova.network.neutron [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:55 np0005466031 nova_compute[235803]: 2025-10-02 12:54:55.988 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:56 np0005466031 kernel: tap20204810-ff (unregistering): left promiscuous mode
Oct  2 08:54:56 np0005466031 NetworkManager[44907]: <info>  [1759409696.0516] device (tap20204810-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:54:56 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:56Z|00541|binding|INFO|Releasing lport 20204810-ff47-450e-80e5-23d03b435455 from this chassis (sb_readonly=0)
Oct  2 08:54:56 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:56Z|00542|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 down in Southbound
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:56 np0005466031 ovn_controller[132413]: 2025-10-02T12:54:56Z|00543|binding|INFO|Removing iface tap20204810-ff ovn-installed in OVS
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.076 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:41:1c 10.100.0.7'], port_security=['fa:16:3e:5b:41:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a1e0932b-16b6-46b9-8192-b89b91e91802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a39243cb-5286-4429-8879-7b4d535de128', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd1b59c5d-0681-456e-a8d1-b3629344f9b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf84aebf-21d8-4569-891a-417406561224, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=20204810-ff47-450e-80e5-23d03b435455) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.077 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 20204810-ff47-450e-80e5-23d03b435455 in datapath a39243cb-5286-4429-8879-7b4d535de128 unbound from our chassis#033[00m
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.078 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a39243cb-5286-4429-8879-7b4d535de128, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.079 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4cd0ef-b4ad-46c7-9a95-6de506a29d5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.080 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 namespace which is not needed anymore#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:56 np0005466031 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct  2 08:54:56 np0005466031 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000092.scope: Consumed 4.208s CPU time.
Oct  2 08:54:56 np0005466031 systemd-machined[192227]: Machine qemu-63-instance-00000092 terminated.
Oct  2 08:54:56 np0005466031 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[300829]: [NOTICE]   (300833) : haproxy version is 2.8.14-c23fe91
Oct  2 08:54:56 np0005466031 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[300829]: [NOTICE]   (300833) : path to executable is /usr/sbin/haproxy
Oct  2 08:54:56 np0005466031 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[300829]: [WARNING]  (300833) : Exiting Master process...
Oct  2 08:54:56 np0005466031 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[300829]: [ALERT]    (300833) : Current worker (300835) exited with code 143 (Terminated)
Oct  2 08:54:56 np0005466031 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[300829]: [WARNING]  (300833) : All workers exited. Exiting... (0)
Oct  2 08:54:56 np0005466031 systemd[1]: libpod-9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990.scope: Deactivated successfully.
Oct  2 08:54:56 np0005466031 NetworkManager[44907]: <info>  [1759409696.2297] manager: (tap20204810-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/251)
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:56 np0005466031 podman[300869]: 2025-10-02 12:54:56.234536509 +0000 UTC m=+0.051347450 container died 9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.250 2 INFO nova.virt.libvirt.driver [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance destroyed successfully.#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.251 2 DEBUG nova.objects.instance [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:56 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990-userdata-shm.mount: Deactivated successfully.
Oct  2 08:54:56 np0005466031 systemd[1]: var-lib-containers-storage-overlay-cc795577c4f7d70a7aac7d54b95d0340a6cf197c39d0816e6756c9fa1634ac01-merged.mount: Deactivated successfully.
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.267 2 DEBUG nova.virt.libvirt.vif [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:54:52Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.268 2 DEBUG nova.network.os_vif_util [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.268 2 DEBUG nova.network.os_vif_util [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.269 2 DEBUG os_vif [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.271 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20204810-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.277 2 INFO os_vif [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff')#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.282 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.282 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:56 np0005466031 podman[300869]: 2025-10-02 12:54:56.28697372 +0000 UTC m=+0.103784651 container cleanup 9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:54:56 np0005466031 systemd[1]: libpod-conmon-9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990.scope: Deactivated successfully.
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.306 2 DEBUG nova.objects.instance [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.338 2 DEBUG nova.compute.manager [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.338 2 DEBUG oslo_concurrency.lockutils [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.338 2 DEBUG oslo_concurrency.lockutils [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.340 2 DEBUG oslo_concurrency.lockutils [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.340 2 DEBUG nova.compute.manager [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.340 2 WARNING nova.compute.manager [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:54:56 np0005466031 podman[300901]: 2025-10-02 12:54:56.352721995 +0000 UTC m=+0.044449862 container remove 9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.359 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[70ca8113-d552-44b1-bac2-92bf4c03f23a]: (4, ('Thu Oct  2 12:54:56 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 (9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990)\n9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990\nThu Oct  2 12:54:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 (9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990)\n9419979cc768150dcc8dd2bb5012e4e82e5626c05c15108e8d84d4948e026990\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.361 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[276cd025-5b87-4a2a-8af2-a36bf96bab53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.362 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa39243cb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:56 np0005466031 kernel: tapa39243cb-50: left promiscuous mode
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.381 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[efba263c-37cc-463d-b208-7b1c0752701c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.402 2 DEBUG oslo_concurrency.processutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.404 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9c6939-89a2-4362-81a1-167e3a78289e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.405 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[83642b0e-1b09-4dc3-adfb-b2866b509218]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.423 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3777fd17-847b-409f-b6bb-24447319047d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754679, 'reachable_time': 41550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300930, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:56 np0005466031 systemd[1]: run-netns-ovnmeta\x2da39243cb\x2d5286\x2d4429\x2d8879\x2d7b4d535de128.mount: Deactivated successfully.
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.430 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:54:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:54:56.430 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[309fff4e-58a5-457b-ab7b-83e2f7173a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:56 np0005466031 podman[300915]: 2025-10-02 12:54:56.473476274 +0000 UTC m=+0.074692753 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 08:54:56 np0005466031 podman[300916]: 2025-10-02 12:54:56.493783819 +0000 UTC m=+0.088829701 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:54:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:56 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/141177385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.877 2 DEBUG oslo_concurrency.processutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.883 2 DEBUG nova.compute.provider_tree [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.900 2 DEBUG nova.scheduler.client.report [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:54:56 np0005466031 nova_compute[235803]: 2025-10-02 12:54:56.960 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:57.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:57.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:58 np0005466031 nova_compute[235803]: 2025-10-02 12:54:58.447 2 DEBUG nova.compute.manager [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:58 np0005466031 nova_compute[235803]: 2025-10-02 12:54:58.447 2 DEBUG oslo_concurrency.lockutils [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:58 np0005466031 nova_compute[235803]: 2025-10-02 12:54:58.448 2 DEBUG oslo_concurrency.lockutils [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:58 np0005466031 nova_compute[235803]: 2025-10-02 12:54:58.448 2 DEBUG oslo_concurrency.lockutils [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:58 np0005466031 nova_compute[235803]: 2025-10-02 12:54:58.448 2 DEBUG nova.compute.manager [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:58 np0005466031 nova_compute[235803]: 2025-10-02 12:54:58.448 2 WARNING nova.compute.manager [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:54:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:59 np0005466031 nova_compute[235803]: 2025-10-02 12:54:59.188 2 DEBUG nova.compute.manager [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-changed-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:59 np0005466031 nova_compute[235803]: 2025-10-02 12:54:59.188 2 DEBUG nova.compute.manager [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing instance network info cache due to event network-changed-20204810-ff47-450e-80e5-23d03b435455. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:54:59 np0005466031 nova_compute[235803]: 2025-10-02 12:54:59.189 2 DEBUG oslo_concurrency.lockutils [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:59 np0005466031 nova_compute[235803]: 2025-10-02 12:54:59.189 2 DEBUG oslo_concurrency.lockutils [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:59 np0005466031 nova_compute[235803]: 2025-10-02 12:54:59.189 2 DEBUG nova.network.neutron [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing network info cache for port 20204810-ff47-450e-80e5-23d03b435455 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:54:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:59.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:59 np0005466031 nova_compute[235803]: 2025-10-02 12:54:59.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:54:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:59.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:00 np0005466031 nova_compute[235803]: 2025-10-02 12:55:00.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:00 np0005466031 nova_compute[235803]: 2025-10-02 12:55:00.730 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:01 np0005466031 nova_compute[235803]: 2025-10-02 12:55:01.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:01.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:01.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:01 np0005466031 nova_compute[235803]: 2025-10-02 12:55:01.625 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:55:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e341 e341: 3 total, 3 up, 3 in
Oct  2 08:55:02 np0005466031 nova_compute[235803]: 2025-10-02 12:55:02.410 2 DEBUG nova.network.neutron [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated VIF entry in instance network info cache for port 20204810-ff47-450e-80e5-23d03b435455. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:55:02 np0005466031 nova_compute[235803]: 2025-10-02 12:55:02.410 2 DEBUG nova.network.neutron [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:02 np0005466031 nova_compute[235803]: 2025-10-02 12:55:02.428 2 DEBUG oslo_concurrency.lockutils [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:55:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:03.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:55:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:03.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:55:03 np0005466031 nova_compute[235803]: 2025-10-02 12:55:03.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:04 np0005466031 nova_compute[235803]: 2025-10-02 12:55:04.523 2 DEBUG nova.compute.manager [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:04 np0005466031 nova_compute[235803]: 2025-10-02 12:55:04.524 2 DEBUG oslo_concurrency.lockutils [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:04 np0005466031 nova_compute[235803]: 2025-10-02 12:55:04.525 2 DEBUG oslo_concurrency.lockutils [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:04 np0005466031 nova_compute[235803]: 2025-10-02 12:55:04.525 2 DEBUG oslo_concurrency.lockutils [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:04 np0005466031 nova_compute[235803]: 2025-10-02 12:55:04.525 2 DEBUG nova.compute.manager [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:04 np0005466031 nova_compute[235803]: 2025-10-02 12:55:04.526 2 WARNING nova.compute.manager [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:55:04 np0005466031 nova_compute[235803]: 2025-10-02 12:55:04.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:05 np0005466031 nova_compute[235803]: 2025-10-02 12:55:05.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:55:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182082830' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:55:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:55:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182082830' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:55:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:05.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:05.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:06 np0005466031 nova_compute[235803]: 2025-10-02 12:55:06.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:07.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:07.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:07 np0005466031 nova_compute[235803]: 2025-10-02 12:55:07.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:08 np0005466031 nova_compute[235803]: 2025-10-02 12:55:08.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:08 np0005466031 nova_compute[235803]: 2025-10-02 12:55:08.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:08 np0005466031 nova_compute[235803]: 2025-10-02 12:55:08.702 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:08 np0005466031 nova_compute[235803]: 2025-10-02 12:55:08.702 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:08 np0005466031 nova_compute[235803]: 2025-10-02 12:55:08.703 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:08 np0005466031 nova_compute[235803]: 2025-10-02 12:55:08.703 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:55:08 np0005466031 nova_compute[235803]: 2025-10-02 12:55:08.703 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:55:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3776642989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.238 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:09.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.443 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.443 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:55:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:09.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.606 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.607 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4206MB free_disk=20.87628173828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.607 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.608 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.841 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 58e2a72f-a2b9-41a0-9c67-607e978d8b88 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.842 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.843 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.858 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.875 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.876 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.891 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.913 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:55:09 np0005466031 nova_compute[235803]: 2025-10-02 12:55:09.944 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:10 np0005466031 nova_compute[235803]: 2025-10-02 12:55:10.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:55:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3005431859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:55:10 np0005466031 nova_compute[235803]: 2025-10-02 12:55:10.361 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:10 np0005466031 nova_compute[235803]: 2025-10-02 12:55:10.367 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:55:10 np0005466031 nova_compute[235803]: 2025-10-02 12:55:10.447 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:55:10 np0005466031 nova_compute[235803]: 2025-10-02 12:55:10.624 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:55:10 np0005466031 nova_compute[235803]: 2025-10-02 12:55:10.624 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:11 np0005466031 nova_compute[235803]: 2025-10-02 12:55:11.247 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409696.2451296, a1e0932b-16b6-46b9-8192-b89b91e91802 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:11 np0005466031 nova_compute[235803]: 2025-10-02 12:55:11.248 2 INFO nova.compute.manager [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:55:11 np0005466031 nova_compute[235803]: 2025-10-02 12:55:11.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:11 np0005466031 nova_compute[235803]: 2025-10-02 12:55:11.295 2 DEBUG nova.compute.manager [None req-27b2d4f4-fd67-4da5-8999-e2da95b14540 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:11.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:55:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:11.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:55:12 np0005466031 nova_compute[235803]: 2025-10-02 12:55:12.674 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:55:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e342 e342: 3 total, 3 up, 3 in
Oct  2 08:55:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:13.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:13.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:13 np0005466031 nova_compute[235803]: 2025-10-02 12:55:13.624 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:13 np0005466031 nova_compute[235803]: 2025-10-02 12:55:13.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:13 np0005466031 nova_compute[235803]: 2025-10-02 12:55:13.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:55:13 np0005466031 nova_compute[235803]: 2025-10-02 12:55:13.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:55:13 np0005466031 nova_compute[235803]: 2025-10-02 12:55:13.651 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:55:13 np0005466031 nova_compute[235803]: 2025-10-02 12:55:13.651 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:55:13 np0005466031 nova_compute[235803]: 2025-10-02 12:55:13.652 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:55:13 np0005466031 nova_compute[235803]: 2025-10-02 12:55:13.652 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:15 np0005466031 nova_compute[235803]: 2025-10-02 12:55:15.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:15.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:15.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:16 np0005466031 nova_compute[235803]: 2025-10-02 12:55:16.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:16 np0005466031 nova_compute[235803]: 2025-10-02 12:55:16.830 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Updating instance_info_cache with network_info: [{"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:16 np0005466031 nova_compute[235803]: 2025-10-02 12:55:16.849 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:55:16 np0005466031 nova_compute[235803]: 2025-10-02 12:55:16.850 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:55:16 np0005466031 nova_compute[235803]: 2025-10-02 12:55:16.850 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:17.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:17.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:17 np0005466031 nova_compute[235803]: 2025-10-02 12:55:17.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:17 np0005466031 nova_compute[235803]: 2025-10-02 12:55:17.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:55:18 np0005466031 nova_compute[235803]: 2025-10-02 12:55:18.694 2 INFO nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance failed to shutdown in 60 seconds.#033[00m
Oct  2 08:55:18 np0005466031 kernel: tap5b3c9d9a-c3 (unregistering): left promiscuous mode
Oct  2 08:55:18 np0005466031 NetworkManager[44907]: <info>  [1759409718.7360] device (tap5b3c9d9a-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:55:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:18Z|00544|binding|INFO|Releasing lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 from this chassis (sb_readonly=0)
Oct  2 08:55:18 np0005466031 nova_compute[235803]: 2025-10-02 12:55:18.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:18Z|00545|binding|INFO|Setting lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 down in Southbound
Oct  2 08:55:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:18Z|00546|binding|INFO|Removing iface tap5b3c9d9a-c3 ovn-installed in OVS
Oct  2 08:55:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:18.754 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:eb:77 10.100.0.11'], port_security=['fa:16:3e:ea:eb:77 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '58e2a72f-a2b9-41a0-9c67-607e978d8b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=5b3c9d9a-c3cd-49a5-b917-49aefaefd249) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:55:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:18.755 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 unbound from our chassis#033[00m
Oct  2 08:55:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:18.756 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:55:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:18.757 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fab03344-284c-4d4d-a2b4-5b200760e453]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:18.758 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 namespace which is not needed anymore#033[00m
Oct  2 08:55:18 np0005466031 nova_compute[235803]: 2025-10-02 12:55:18.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:18 np0005466031 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000093.scope: Deactivated successfully.
Oct  2 08:55:18 np0005466031 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000093.scope: Consumed 1.772s CPU time.
Oct  2 08:55:18 np0005466031 systemd-machined[192227]: Machine qemu-62-instance-00000093 terminated.
Oct  2 08:55:18 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[300044]: [NOTICE]   (300048) : haproxy version is 2.8.14-c23fe91
Oct  2 08:55:18 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[300044]: [NOTICE]   (300048) : path to executable is /usr/sbin/haproxy
Oct  2 08:55:18 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[300044]: [WARNING]  (300048) : Exiting Master process...
Oct  2 08:55:18 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[300044]: [ALERT]    (300048) : Current worker (300050) exited with code 143 (Terminated)
Oct  2 08:55:18 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[300044]: [WARNING]  (300048) : All workers exited. Exiting... (0)
Oct  2 08:55:18 np0005466031 systemd[1]: libpod-705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997.scope: Deactivated successfully.
Oct  2 08:55:18 np0005466031 podman[301107]: 2025-10-02 12:55:18.905066935 +0000 UTC m=+0.044725730 container died 705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:55:18 np0005466031 nova_compute[235803]: 2025-10-02 12:55:18.933 2 INFO nova.virt.libvirt.driver [-] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance destroyed successfully.#033[00m
Oct  2 08:55:18 np0005466031 nova_compute[235803]: 2025-10-02 12:55:18.935 2 DEBUG nova.objects.instance [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:18 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997-userdata-shm.mount: Deactivated successfully.
Oct  2 08:55:18 np0005466031 systemd[1]: var-lib-containers-storage-overlay-7345e51521a488e7c614c740c1c02e9484c862a55c031bae125252b5b527b3b4-merged.mount: Deactivated successfully.
Oct  2 08:55:18 np0005466031 podman[301107]: 2025-10-02 12:55:18.958484244 +0000 UTC m=+0.098143039 container cleanup 705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:55:18 np0005466031 nova_compute[235803]: 2025-10-02 12:55:18.960 2 INFO nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Attempting a stable device rescue#033[00m
Oct  2 08:55:18 np0005466031 systemd[1]: libpod-conmon-705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997.scope: Deactivated successfully.
Oct  2 08:55:19 np0005466031 podman[301148]: 2025-10-02 12:55:19.027716199 +0000 UTC m=+0.042563888 container remove 705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:55:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:19.034 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9c962ba8-6b6e-4429-a3ac-a85b8efaac93]: (4, ('Thu Oct  2 12:55:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 (705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997)\n705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997\nThu Oct  2 12:55:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 (705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997)\n705d0f7ff2e64066c940de9729c803ade88fd3843a4af872c8b14677b0857997\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:19.035 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8379d888-7d54-4477-8536-e4d6f09f6915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:19.036 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:19 np0005466031 kernel: tap7b0ec11e-00: left promiscuous mode
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:19.098 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c59f1baf-b380-40c4-afa5-46fb820ad4a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:19.127 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3320b4bb-31d0-46a0-a3e0-92a2516a1543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:19.128 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5b226339-aa9a-4913-a52f-5708f06603e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:19.143 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[20d706d4-ae53-4e64-a271-533eaab95fd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750643, 'reachable_time': 17271, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301165, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:19.146 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:55:19 np0005466031 systemd[1]: run-netns-ovnmeta\x2d7b0ec11e\x2d03f1\x2d4b98\x2dac7a\x2d50b364660bd2.mount: Deactivated successfully.
Oct  2 08:55:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:19.146 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[4f8cd6e2-4fe9-45ba-b0b5-3ab7cbfe9b4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.244 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.248 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.249 2 INFO nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Creating image(s)#033[00m
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.273 2 DEBUG nova.storage.rbd_utils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.276 2 DEBUG nova.objects.instance [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.314 2 DEBUG nova.storage.rbd_utils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.339 2 DEBUG nova.storage.rbd_utils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.343 2 DEBUG oslo_concurrency.lockutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "9a6e77cd014e8cd1817102a6e177c261e1b7c1d2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.344 2 DEBUG oslo_concurrency.lockutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "9a6e77cd014e8cd1817102a6e177c261e1b7c1d2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:19.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:55:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:19.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.617 2 DEBUG nova.virt.libvirt.imagebackend [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/38d5e3ab-8ce6-4053-8f4a-2a76356bf535/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/38d5e3ab-8ce6-4053-8f4a-2a76356bf535/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.687 2 DEBUG nova.virt.libvirt.imagebackend [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/38d5e3ab-8ce6-4053-8f4a-2a76356bf535/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 08:55:19 np0005466031 nova_compute[235803]: 2025-10-02 12:55:19.688 2 DEBUG nova.storage.rbd_utils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] cloning images/38d5e3ab-8ce6-4053-8f4a-2a76356bf535@snap to None/58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.526 2 DEBUG oslo_concurrency.lockutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "9a6e77cd014e8cd1817102a6e177c261e1b7c1d2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.575 2 DEBUG nova.objects.instance [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'migration_context' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.589 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.592 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Start _get_guest_xml network_info=[{"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "vif_mac": "fa:16:3e:ea:eb:77"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '38d5e3ab-8ce6-4053-8f4a-2a76356bf535', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f80c06cb-0550-4a66-a7bd-bba5ed3d622f', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f80c06cb-0550-4a66-a7bd-bba5ed3d622f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '58e2a72f-a2b9-41a0-9c67-607e978d8b88', 'attached_at': '', 'detached_at': '', 'volume_id': 'f80c06cb-0550-4a66-a7bd-bba5ed3d622f', 'serial': 'f80c06cb-0550-4a66-a7bd-bba5ed3d622f'}, 'attachment_id': '370ada91-e18a-4022-9d41-8c5758765ce2', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.592 2 DEBUG nova.objects.instance [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'resources' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.604 2 WARNING nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.610 2 DEBUG nova.virt.libvirt.host [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.610 2 DEBUG nova.virt.libvirt.host [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.614 2 DEBUG nova.virt.libvirt.host [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.614 2 DEBUG nova.virt.libvirt.host [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.615 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.615 2 DEBUG nova.virt.hardware [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.616 2 DEBUG nova.virt.hardware [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.616 2 DEBUG nova.virt.hardware [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.616 2 DEBUG nova.virt.hardware [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.617 2 DEBUG nova.virt.hardware [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.617 2 DEBUG nova.virt.hardware [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.617 2 DEBUG nova.virt.hardware [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.617 2 DEBUG nova.virt.hardware [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.618 2 DEBUG nova.virt.hardware [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.618 2 DEBUG nova.virt.hardware [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.618 2 DEBUG nova.virt.hardware [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.618 2 DEBUG nova.objects.instance [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.670 2 DEBUG oslo_concurrency.processutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.872 2 DEBUG nova.compute.manager [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-unplugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.873 2 DEBUG oslo_concurrency.lockutils [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.874 2 DEBUG oslo_concurrency.lockutils [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.874 2 DEBUG oslo_concurrency.lockutils [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.874 2 DEBUG nova.compute.manager [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] No waiting events found dispatching network-vif-unplugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.874 2 WARNING nova.compute.manager [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received unexpected event network-vif-unplugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.875 2 DEBUG nova.compute.manager [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.875 2 DEBUG oslo_concurrency.lockutils [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.875 2 DEBUG oslo_concurrency.lockutils [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.875 2 DEBUG oslo_concurrency.lockutils [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.876 2 DEBUG nova.compute.manager [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] No waiting events found dispatching network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:20 np0005466031 nova_compute[235803]: 2025-10-02 12:55:20.876 2 WARNING nova.compute.manager [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received unexpected event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:55:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:55:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2386159734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.212 2 DEBUG oslo_concurrency.processutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.240 2 DEBUG oslo_concurrency.processutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:21.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:21.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:21 np0005466031 podman[301368]: 2025-10-02 12:55:21.631271832 +0000 UTC m=+0.060471814 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 08:55:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:55:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4102663619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:55:21 np0005466031 podman[301369]: 2025-10-02 12:55:21.689769957 +0000 UTC m=+0.117347452 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.702 2 DEBUG oslo_concurrency.processutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.704 2 DEBUG nova.virt.libvirt.vif [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:54:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-996317369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-996317369',id=147,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:12Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-dhn09nvd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:12Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=58e2a72f-a2b9-41a0-9c67-607e978d8b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "vif_mac": "fa:16:3e:ea:eb:77"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.705 2 DEBUG nova.network.os_vif_util [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "vif_mac": "fa:16:3e:ea:eb:77"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.706 2 DEBUG nova.network.os_vif_util [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ea:eb:77,bridge_name='br-int',has_traffic_filtering=True,id=5b3c9d9a-c3cd-49a5-b917-49aefaefd249,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b3c9d9a-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.708 2 DEBUG nova.objects.instance [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.725 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  <uuid>58e2a72f-a2b9-41a0-9c67-607e978d8b88</uuid>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  <name>instance-00000093</name>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-996317369</nova:name>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:55:20</nova:creationTime>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <nova:user uuid="00be63ea13c84e3d9419078865524099">tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member</nova:user>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <nova:project uuid="cb2da64acac041cb8d38c3b43fe4dbe9">tempest-ServerBootFromVolumeStableRescueTest-1641553658</nova:project>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <nova:port uuid="5b3c9d9a-c3cd-49a5-b917-49aefaefd249">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <entry name="serial">58e2a72f-a2b9-41a0-9c67-607e978d8b88</entry>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <entry name="uuid">58e2a72f-a2b9-41a0-9c67-607e978d8b88</entry>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.config">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-f80c06cb-0550-4a66-a7bd-bba5ed3d622f">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <serial>f80c06cb-0550-4a66-a7bd-bba5ed3d622f</serial>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.rescue">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <boot order="1"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:ea:eb:77"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <target dev="tap5b3c9d9a-c3"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/console.log" append="off"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:55:21 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:55:21 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:55:21 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:55:21 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.733 2 INFO nova.virt.libvirt.driver [-] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance destroyed successfully.#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.866 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.867 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.867 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.868 2 DEBUG nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No VIF found with MAC fa:16:3e:ea:eb:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.868 2 INFO nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Using config drive#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.901 2 DEBUG nova.storage.rbd_utils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.934 2 DEBUG nova.objects.instance [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:21 np0005466031 nova_compute[235803]: 2025-10-02 12:55:21.978 2 DEBUG nova.objects.instance [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'keypairs' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:23.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:23.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:24 np0005466031 nova_compute[235803]: 2025-10-02 12:55:24.344 2 INFO nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Creating config drive at /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/disk.config.rescue#033[00m
Oct  2 08:55:24 np0005466031 nova_compute[235803]: 2025-10-02 12:55:24.349 2 DEBUG oslo_concurrency.processutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf5471252 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:24 np0005466031 nova_compute[235803]: 2025-10-02 12:55:24.486 2 DEBUG oslo_concurrency.processutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf5471252" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:24 np0005466031 nova_compute[235803]: 2025-10-02 12:55:24.514 2 DEBUG nova.storage.rbd_utils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:24 np0005466031 nova_compute[235803]: 2025-10-02 12:55:24.518 2 DEBUG oslo_concurrency.processutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/disk.config.rescue 58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:25 np0005466031 nova_compute[235803]: 2025-10-02 12:55:25.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:25.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:25 np0005466031 nova_compute[235803]: 2025-10-02 12:55:25.511 2 DEBUG oslo_concurrency.processutils [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/disk.config.rescue 58e2a72f-a2b9-41a0-9c67-607e978d8b88_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.993s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:25 np0005466031 nova_compute[235803]: 2025-10-02 12:55:25.512 2 INFO nova.virt.libvirt.driver [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Deleting local config drive /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88/disk.config.rescue because it was imported into RBD.#033[00m
Oct  2 08:55:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:25.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:25 np0005466031 kernel: tap5b3c9d9a-c3: entered promiscuous mode
Oct  2 08:55:25 np0005466031 NetworkManager[44907]: <info>  [1759409725.5552] manager: (tap5b3c9d9a-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/252)
Oct  2 08:55:25 np0005466031 nova_compute[235803]: 2025-10-02 12:55:25.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:25 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:25Z|00547|binding|INFO|Claiming lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for this chassis.
Oct  2 08:55:25 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:25Z|00548|binding|INFO|5b3c9d9a-c3cd-49a5-b917-49aefaefd249: Claiming fa:16:3e:ea:eb:77 10.100.0.11
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.564 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:eb:77 10.100.0.11'], port_security=['fa:16:3e:ea:eb:77 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '58e2a72f-a2b9-41a0-9c67-607e978d8b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=5b3c9d9a-c3cd-49a5-b917-49aefaefd249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.565 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 bound to our chassis#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.567 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2#033[00m
Oct  2 08:55:25 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:25Z|00549|binding|INFO|Setting lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 ovn-installed in OVS
Oct  2 08:55:25 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:25Z|00550|binding|INFO|Setting lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 up in Southbound
Oct  2 08:55:25 np0005466031 nova_compute[235803]: 2025-10-02 12:55:25.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:25 np0005466031 nova_compute[235803]: 2025-10-02 12:55:25.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:25 np0005466031 systemd-udevd[301490]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.578 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[217c4e90-2e46-4495-8904-74e3555ee889]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.579 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7b0ec11e-01 in ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.581 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7b0ec11e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.581 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4b65b404-ad0b-43f7-bcac-5ebe92e95a7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.581 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9d85d356-604d-4664-9768-c91a29be7d1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 systemd-machined[192227]: New machine qemu-64-instance-00000093.
Oct  2 08:55:25 np0005466031 NetworkManager[44907]: <info>  [1759409725.5972] device (tap5b3c9d9a-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.595 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[730be36c-9aca-474f-9760-2bf55509d420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 NetworkManager[44907]: <info>  [1759409725.5979] device (tap5b3c9d9a-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.611 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[dec7a0f6-77fe-475b-8de7-e5f0d9a3cc82]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 systemd[1]: Started Virtual Machine qemu-64-instance-00000093.
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.641 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[6cca20ea-21d8-497f-811d-54dce3a7b76e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.646 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5d350c7b-3553-4e9d-9893-9e9b9be776c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 NetworkManager[44907]: <info>  [1759409725.6476] manager: (tap7b0ec11e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/253)
Oct  2 08:55:25 np0005466031 systemd-udevd[301494]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.677 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[15a0068e-99c1-4fd0-8a9f-f78334d09f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.680 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[3f103c81-a917-4b6b-8314-1eea50944260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 NetworkManager[44907]: <info>  [1759409725.7021] device (tap7b0ec11e-00): carrier: link connected
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.708 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c440bb-2a6c-4fae-a3b6-8eb392b08adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.727 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d399490b-341c-4a49-8cb5-157f23258f5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758129, 'reachable_time': 23250, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301523, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.742 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ba5ea082-254c-429f-b94b-e326b2e9cd7d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:f76'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758129, 'tstamp': 758129}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301524, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.761 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[481d5701-13df-481b-be14-4bfb2f5ec750]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758129, 'reachable_time': 23250, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301525, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.792 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[85eaa243-b093-4bac-b2c4-f33e72a26068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.850 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[010d7c47-8c0c-43b2-b830-3ece764e2f51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.852 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.852 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.852 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0ec11e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.865 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.865 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.865 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:25 np0005466031 nova_compute[235803]: 2025-10-02 12:55:25.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:25 np0005466031 NetworkManager[44907]: <info>  [1759409725.8769] manager: (tap7b0ec11e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Oct  2 08:55:25 np0005466031 kernel: tap7b0ec11e-00: entered promiscuous mode
Oct  2 08:55:25 np0005466031 nova_compute[235803]: 2025-10-02 12:55:25.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.884 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0ec11e-00, col_values=(('external_ids', {'iface-id': '4551f74b-5a9c-4479-827a-bb210e8a0b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:25 np0005466031 nova_compute[235803]: 2025-10-02 12:55:25.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:25 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:25Z|00551|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.889 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.890 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[044c12b4-0894-4d4b-8457-45035e304b2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.891 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-7b0ec11e-03f1-4b98-ac7a-50b364660bd2
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.pid.haproxy
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 7b0ec11e-03f1-4b98-ac7a-50b364660bd2
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:55:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:25.892 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'env', 'PROCESS_TAG=haproxy-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:55:25 np0005466031 nova_compute[235803]: 2025-10-02 12:55:25.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:26 np0005466031 podman[301617]: 2025-10-02 12:55:26.270383983 +0000 UTC m=+0.046678066 container create 57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:55:26 np0005466031 systemd[1]: Started libpod-conmon-57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45.scope.
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:26 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:55:26 np0005466031 podman[301617]: 2025-10-02 12:55:26.247578806 +0000 UTC m=+0.023872939 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:55:26 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01adc6a45466114a1a9ba0bf2d2eee59e6ca6c98311fc23f68d5f49dfb4dccd5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:55:26 np0005466031 podman[301617]: 2025-10-02 12:55:26.361076566 +0000 UTC m=+0.137370679 container init 57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:55:26 np0005466031 podman[301617]: 2025-10-02 12:55:26.367224123 +0000 UTC m=+0.143518206 container start 57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:55:26 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301633]: [NOTICE]   (301637) : New worker (301639) forked
Oct  2 08:55:26 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301633]: [NOTICE]   (301637) : Loading success.
Oct  2 08:55:26 np0005466031 podman[301648]: 2025-10-02 12:55:26.620758238 +0000 UTC m=+0.051784813 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:55:26 np0005466031 podman[301649]: 2025-10-02 12:55:26.648618031 +0000 UTC m=+0.077794242 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.678 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for 58e2a72f-a2b9-41a0-9c67-607e978d8b88 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.679 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409726.6782157, 58e2a72f-a2b9-41a0-9c67-607e978d8b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.680 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.685 2 DEBUG nova.compute.manager [None req-a91e03b5-c2ce-40d9-9a5e-6ccc953712ca 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.702 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.706 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.725 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.726 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409726.6824827, 58e2a72f-a2b9-41a0-9c67-607e978d8b88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.726 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] VM Started (Lifecycle Event)#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.756 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.761 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.952 2 DEBUG nova.compute.manager [req-be1aa9a2-12bd-4616-9b07-41a160dcbcdc req-7c07307d-e097-4277-a289-e3b24f0ef71c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.953 2 DEBUG oslo_concurrency.lockutils [req-be1aa9a2-12bd-4616-9b07-41a160dcbcdc req-7c07307d-e097-4277-a289-e3b24f0ef71c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.953 2 DEBUG oslo_concurrency.lockutils [req-be1aa9a2-12bd-4616-9b07-41a160dcbcdc req-7c07307d-e097-4277-a289-e3b24f0ef71c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.953 2 DEBUG oslo_concurrency.lockutils [req-be1aa9a2-12bd-4616-9b07-41a160dcbcdc req-7c07307d-e097-4277-a289-e3b24f0ef71c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.954 2 DEBUG nova.compute.manager [req-be1aa9a2-12bd-4616-9b07-41a160dcbcdc req-7c07307d-e097-4277-a289-e3b24f0ef71c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] No waiting events found dispatching network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:26 np0005466031 nova_compute[235803]: 2025-10-02 12:55:26.954 2 WARNING nova.compute.manager [req-be1aa9a2-12bd-4616-9b07-41a160dcbcdc req-7c07307d-e097-4277-a289-e3b24f0ef71c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received unexpected event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for instance with vm_state rescued and task_state None.#033[00m
Oct  2 08:55:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:27.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:27.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:29 np0005466031 nova_compute[235803]: 2025-10-02 12:55:29.052 2 DEBUG nova.compute.manager [req-89e4566d-3fdc-458b-912e-1e70c465d26c req-27e614b6-f5f4-4e9f-8b93-6b06e0bd06be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:29 np0005466031 nova_compute[235803]: 2025-10-02 12:55:29.052 2 DEBUG oslo_concurrency.lockutils [req-89e4566d-3fdc-458b-912e-1e70c465d26c req-27e614b6-f5f4-4e9f-8b93-6b06e0bd06be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:29 np0005466031 nova_compute[235803]: 2025-10-02 12:55:29.053 2 DEBUG oslo_concurrency.lockutils [req-89e4566d-3fdc-458b-912e-1e70c465d26c req-27e614b6-f5f4-4e9f-8b93-6b06e0bd06be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:29 np0005466031 nova_compute[235803]: 2025-10-02 12:55:29.053 2 DEBUG oslo_concurrency.lockutils [req-89e4566d-3fdc-458b-912e-1e70c465d26c req-27e614b6-f5f4-4e9f-8b93-6b06e0bd06be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:29 np0005466031 nova_compute[235803]: 2025-10-02 12:55:29.053 2 DEBUG nova.compute.manager [req-89e4566d-3fdc-458b-912e-1e70c465d26c req-27e614b6-f5f4-4e9f-8b93-6b06e0bd06be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] No waiting events found dispatching network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:29 np0005466031 nova_compute[235803]: 2025-10-02 12:55:29.053 2 WARNING nova.compute.manager [req-89e4566d-3fdc-458b-912e-1e70c465d26c req-27e614b6-f5f4-4e9f-8b93-6b06e0bd06be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received unexpected event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for instance with vm_state rescued and task_state None.#033[00m
Oct  2 08:55:29 np0005466031 nova_compute[235803]: 2025-10-02 12:55:29.125 2 INFO nova.compute.manager [None req-b109d996-40f1-4ee7-a07a-6f74cf7bafbe 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Unrescuing#033[00m
Oct  2 08:55:29 np0005466031 nova_compute[235803]: 2025-10-02 12:55:29.125 2 DEBUG oslo_concurrency.lockutils [None req-b109d996-40f1-4ee7-a07a-6f74cf7bafbe 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:55:29 np0005466031 nova_compute[235803]: 2025-10-02 12:55:29.125 2 DEBUG oslo_concurrency.lockutils [None req-b109d996-40f1-4ee7-a07a-6f74cf7bafbe 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquired lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:55:29 np0005466031 nova_compute[235803]: 2025-10-02 12:55:29.126 2 DEBUG nova.network.neutron [None req-b109d996-40f1-4ee7-a07a-6f74cf7bafbe 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:55:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:55:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:29.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:55:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:29.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:30 np0005466031 nova_compute[235803]: 2025-10-02 12:55:30.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005466031 nova_compute[235803]: 2025-10-02 12:55:30.400 2 DEBUG nova.network.neutron [None req-b109d996-40f1-4ee7-a07a-6f74cf7bafbe 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Updating instance_info_cache with network_info: [{"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:30 np0005466031 nova_compute[235803]: 2025-10-02 12:55:30.442 2 DEBUG oslo_concurrency.lockutils [None req-b109d996-40f1-4ee7-a07a-6f74cf7bafbe 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Releasing lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:55:30 np0005466031 nova_compute[235803]: 2025-10-02 12:55:30.443 2 DEBUG nova.objects.instance [None req-b109d996-40f1-4ee7-a07a-6f74cf7bafbe 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'flavor' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:30 np0005466031 kernel: tap5b3c9d9a-c3 (unregistering): left promiscuous mode
Oct  2 08:55:30 np0005466031 NetworkManager[44907]: <info>  [1759409730.7231] device (tap5b3c9d9a-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:55:30 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:30Z|00552|binding|INFO|Releasing lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 from this chassis (sb_readonly=0)
Oct  2 08:55:30 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:30Z|00553|binding|INFO|Setting lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 down in Southbound
Oct  2 08:55:30 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:30Z|00554|binding|INFO|Removing iface tap5b3c9d9a-c3 ovn-installed in OVS
Oct  2 08:55:30 np0005466031 nova_compute[235803]: 2025-10-02 12:55:30.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005466031 nova_compute[235803]: 2025-10-02 12:55:30.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005466031 nova_compute[235803]: 2025-10-02 12:55:30.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:30.767 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:eb:77 10.100.0.11'], port_security=['fa:16:3e:ea:eb:77 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '58e2a72f-a2b9-41a0-9c67-607e978d8b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=5b3c9d9a-c3cd-49a5-b917-49aefaefd249) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:55:30 np0005466031 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000093.scope: Deactivated successfully.
Oct  2 08:55:30 np0005466031 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000093.scope: Consumed 4.868s CPU time.
Oct  2 08:55:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:30.769 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 unbound from our chassis#033[00m
Oct  2 08:55:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:30.770 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:55:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:30.771 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0c77d217-78a7-4cdc-b513-5c5bdd8331da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:30.772 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 namespace which is not needed anymore#033[00m
Oct  2 08:55:30 np0005466031 systemd-machined[192227]: Machine qemu-64-instance-00000093 terminated.
Oct  2 08:55:30 np0005466031 nova_compute[235803]: 2025-10-02 12:55:30.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005466031 nova_compute[235803]: 2025-10-02 12:55:30.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005466031 nova_compute[235803]: 2025-10-02 12:55:30.905 2 INFO nova.virt.libvirt.driver [-] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance destroyed successfully.#033[00m
Oct  2 08:55:30 np0005466031 nova_compute[235803]: 2025-10-02 12:55:30.905 2 DEBUG nova.objects.instance [None req-b109d996-40f1-4ee7-a07a-6f74cf7bafbe 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:31 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301633]: [NOTICE]   (301637) : haproxy version is 2.8.14-c23fe91
Oct  2 08:55:31 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301633]: [NOTICE]   (301637) : path to executable is /usr/sbin/haproxy
Oct  2 08:55:31 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301633]: [WARNING]  (301637) : Exiting Master process...
Oct  2 08:55:31 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301633]: [ALERT]    (301637) : Current worker (301639) exited with code 143 (Terminated)
Oct  2 08:55:31 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301633]: [WARNING]  (301637) : All workers exited. Exiting... (0)
Oct  2 08:55:31 np0005466031 systemd[1]: libpod-57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45.scope: Deactivated successfully.
Oct  2 08:55:31 np0005466031 podman[301763]: 2025-10-02 12:55:31.038209243 +0000 UTC m=+0.181616834 container died 57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:55:31 np0005466031 kernel: tap5b3c9d9a-c3: entered promiscuous mode
Oct  2 08:55:31 np0005466031 systemd-udevd[301742]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:55:31 np0005466031 NetworkManager[44907]: <info>  [1759409731.0768] manager: (tap5b3c9d9a-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/255)
Oct  2 08:55:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:31Z|00555|binding|INFO|Claiming lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for this chassis.
Oct  2 08:55:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:31Z|00556|binding|INFO|5b3c9d9a-c3cd-49a5-b917-49aefaefd249: Claiming fa:16:3e:ea:eb:77 10.100.0.11
Oct  2 08:55:31 np0005466031 nova_compute[235803]: 2025-10-02 12:55:31.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:31 np0005466031 NetworkManager[44907]: <info>  [1759409731.0890] device (tap5b3c9d9a-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:55:31 np0005466031 NetworkManager[44907]: <info>  [1759409731.0909] device (tap5b3c9d9a-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:55:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:31Z|00557|binding|INFO|Setting lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 ovn-installed in OVS
Oct  2 08:55:31 np0005466031 nova_compute[235803]: 2025-10-02 12:55:31.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:31 np0005466031 systemd-machined[192227]: New machine qemu-65-instance-00000093.
Oct  2 08:55:31 np0005466031 systemd[1]: Started Virtual Machine qemu-65-instance-00000093.
Oct  2 08:55:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:31Z|00558|binding|INFO|Setting lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 up in Southbound
Oct  2 08:55:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:31.197 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:eb:77 10.100.0.11'], port_security=['fa:16:3e:ea:eb:77 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '58e2a72f-a2b9-41a0-9c67-607e978d8b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=5b3c9d9a-c3cd-49a5-b917-49aefaefd249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:55:31 np0005466031 nova_compute[235803]: 2025-10-02 12:55:31.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:31 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45-userdata-shm.mount: Deactivated successfully.
Oct  2 08:55:31 np0005466031 systemd[1]: var-lib-containers-storage-overlay-01adc6a45466114a1a9ba0bf2d2eee59e6ca6c98311fc23f68d5f49dfb4dccd5-merged.mount: Deactivated successfully.
Oct  2 08:55:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:55:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:31.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:55:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:31.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:31 np0005466031 nova_compute[235803]: 2025-10-02 12:55:31.707 2 DEBUG nova.compute.manager [req-6691a774-08b4-49d7-97b5-89f3ca3aca06 req-e370c8cb-a2e7-413d-a3f8-cf6ce9457d19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-unplugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:31 np0005466031 nova_compute[235803]: 2025-10-02 12:55:31.708 2 DEBUG oslo_concurrency.lockutils [req-6691a774-08b4-49d7-97b5-89f3ca3aca06 req-e370c8cb-a2e7-413d-a3f8-cf6ce9457d19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:31 np0005466031 nova_compute[235803]: 2025-10-02 12:55:31.708 2 DEBUG oslo_concurrency.lockutils [req-6691a774-08b4-49d7-97b5-89f3ca3aca06 req-e370c8cb-a2e7-413d-a3f8-cf6ce9457d19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:31 np0005466031 nova_compute[235803]: 2025-10-02 12:55:31.709 2 DEBUG oslo_concurrency.lockutils [req-6691a774-08b4-49d7-97b5-89f3ca3aca06 req-e370c8cb-a2e7-413d-a3f8-cf6ce9457d19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:31 np0005466031 nova_compute[235803]: 2025-10-02 12:55:31.709 2 DEBUG nova.compute.manager [req-6691a774-08b4-49d7-97b5-89f3ca3aca06 req-e370c8cb-a2e7-413d-a3f8-cf6ce9457d19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] No waiting events found dispatching network-vif-unplugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:31 np0005466031 nova_compute[235803]: 2025-10-02 12:55:31.709 2 WARNING nova.compute.manager [req-6691a774-08b4-49d7-97b5-89f3ca3aca06 req-e370c8cb-a2e7-413d-a3f8-cf6ce9457d19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received unexpected event network-vif-unplugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:55:31 np0005466031 podman[301763]: 2025-10-02 12:55:31.77218433 +0000 UTC m=+0.915591921 container cleanup 57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:55:31 np0005466031 systemd[1]: libpod-conmon-57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45.scope: Deactivated successfully.
Oct  2 08:55:32 np0005466031 podman[301838]: 2025-10-02 12:55:32.10990618 +0000 UTC m=+0.308141709 container remove 57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.118 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[213707f5-e32a-4a5f-bbb0-9f133fca2158]: (4, ('Thu Oct  2 12:55:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 (57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45)\n57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45\nThu Oct  2 12:55:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 (57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45)\n57037aa41e99e025c72ad76ae80fa52a54d19a9bc1ad05a3e5bacc9000bb2a45\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.122 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4a660b-adf6-4185-92f2-a87673315132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.124 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:32 np0005466031 kernel: tap7b0ec11e-00: left promiscuous mode
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.133 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[796badd4-cddf-4e38-9e55-eb4db131ffa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.165 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bf504fe1-14c6-42ff-85a5-8d9a251aa38b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.167 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4aae3578-402e-47fa-82fa-1dc47f173f14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.187 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[31e57156-24e2-4914-8991-a6e75b60e7d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758122, 'reachable_time': 40265, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301879, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 systemd[1]: run-netns-ovnmeta\x2d7b0ec11e\x2d03f1\x2d4b98\x2dac7a\x2d50b364660bd2.mount: Deactivated successfully.
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.190 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.191 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[3da26344-7f1c-4cc7-bfc3-6a68771a5933]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.192 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 unbound from our chassis#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.194 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.205 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[82118466-3b2b-4849-aa7e-5c5960755774]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.206 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7b0ec11e-01 in ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.208 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7b0ec11e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.209 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c5e029-8c4f-4dd3-a826-43eb160d05e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.209 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc67cd4-4ea5-4d06-92a6-88ada54e0472]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.225 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[e145439e-017f-44bf-9aed-64f907392e51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.240 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a630017f-3449-4baf-aeca-1781dedffa99]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.270 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a9bf4e39-95b4-46b7-9917-c9092b55f91c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.280 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cde7df28-2f87-4d8b-80c5-6ac7be2107d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 NetworkManager[44907]: <info>  [1759409732.2817] manager: (tap7b0ec11e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/256)
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.317 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d37785b0-7557-4432-afd8-43ffab649ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.320 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ae50fb4b-12aa-4411-b81f-6b9910e410f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 NetworkManager[44907]: <info>  [1759409732.3448] device (tap7b0ec11e-00): carrier: link connected
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.346 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[7237fcca-8424-4360-8397-3ccfe79d0397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.361 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9810cae7-bf3f-4516-b317-8c633dfeb00d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758793, 'reachable_time': 37682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301904, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.377 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[269c368a-5d41-4acc-bc71-b54a3d7c60a5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:f76'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758793, 'tstamp': 758793}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301905, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.390 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e3fe6afc-160a-4b88-8c93-88b146cb5c0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758793, 'reachable_time': 37682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301906, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.415 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9f43ba02-afd2-43d9-816f-8c8a20e4e3d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.474 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0edd89-088d-4671-a548-8af684dbf83c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.476 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.476 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.477 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0ec11e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:32 np0005466031 kernel: tap7b0ec11e-00: entered promiscuous mode
Oct  2 08:55:32 np0005466031 NetworkManager[44907]: <info>  [1759409732.4802] manager: (tap7b0ec11e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.484 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0ec11e-00, col_values=(('external_ids', {'iface-id': '4551f74b-5a9c-4479-827a-bb210e8a0b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:32 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:32Z|00559|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.499 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.500 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[edf1cad5-0be5-4a8e-b0d4-92fa2b9b0610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.501 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-7b0ec11e-03f1-4b98-ac7a-50b364660bd2
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.pid.haproxy
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 7b0ec11e-03f1-4b98-ac7a-50b364660bd2
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:55:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:32.502 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'env', 'PROCESS_TAG=haproxy-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7b0ec11e-03f1-4b98-ac7a-50b364660bd2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.532 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for 58e2a72f-a2b9-41a0-9c67-607e978d8b88 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.533 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409732.5322597, 58e2a72f-a2b9-41a0-9c67-607e978d8b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.533 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.564 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.570 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.591 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.591 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409732.533053, 58e2a72f-a2b9-41a0-9c67-607e978d8b88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.592 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] VM Started (Lifecycle Event)#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.619 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.621 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:55:32 np0005466031 nova_compute[235803]: 2025-10-02 12:55:32.649 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:55:32 np0005466031 podman[301956]: 2025-10-02 12:55:32.862475173 +0000 UTC m=+0.049704473 container create cd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:55:32 np0005466031 systemd[1]: Started libpod-conmon-cd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448.scope.
Oct  2 08:55:32 np0005466031 podman[301956]: 2025-10-02 12:55:32.836497805 +0000 UTC m=+0.023727115 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:55:32 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:55:32 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2dd532bcc295f20f645557125912523aebac5679e2b79f8f23823467ba461a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:55:32 np0005466031 podman[301956]: 2025-10-02 12:55:32.962650989 +0000 UTC m=+0.149880329 container init cd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:55:32 np0005466031 podman[301956]: 2025-10-02 12:55:32.969419744 +0000 UTC m=+0.156649054 container start cd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:55:32 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301971]: [NOTICE]   (301975) : New worker (301977) forked
Oct  2 08:55:33 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301971]: [NOTICE]   (301975) : Loading success.
Oct  2 08:55:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:33.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:33.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:33 np0005466031 podman[302159]: 2025-10-02 12:55:33.779516225 +0000 UTC m=+0.059132105 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.867 2 DEBUG nova.compute.manager [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.868 2 DEBUG oslo_concurrency.lockutils [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.868 2 DEBUG oslo_concurrency.lockutils [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.868 2 DEBUG oslo_concurrency.lockutils [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.868 2 DEBUG nova.compute.manager [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] No waiting events found dispatching network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.868 2 WARNING nova.compute.manager [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received unexpected event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.868 2 DEBUG nova.compute.manager [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.869 2 DEBUG oslo_concurrency.lockutils [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.869 2 DEBUG oslo_concurrency.lockutils [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.869 2 DEBUG oslo_concurrency.lockutils [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.869 2 DEBUG nova.compute.manager [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] No waiting events found dispatching network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.869 2 WARNING nova.compute.manager [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received unexpected event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.869 2 DEBUG nova.compute.manager [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.869 2 DEBUG oslo_concurrency.lockutils [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.870 2 DEBUG oslo_concurrency.lockutils [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.870 2 DEBUG oslo_concurrency.lockutils [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.870 2 DEBUG nova.compute.manager [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] No waiting events found dispatching network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:33 np0005466031 nova_compute[235803]: 2025-10-02 12:55:33.870 2 WARNING nova.compute.manager [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received unexpected event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:55:33 np0005466031 podman[302159]: 2025-10-02 12:55:33.881995938 +0000 UTC m=+0.161611828 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 08:55:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:34 np0005466031 podman[302298]: 2025-10-02 12:55:34.441131717 +0000 UTC m=+0.117318172 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 08:55:34 np0005466031 podman[302298]: 2025-10-02 12:55:34.480090679 +0000 UTC m=+0.156277034 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 08:55:34 np0005466031 podman[302364]: 2025-10-02 12:55:34.679376471 +0000 UTC m=+0.048626032 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-type=git, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4)
Oct  2 08:55:34 np0005466031 podman[302364]: 2025-10-02 12:55:34.692810438 +0000 UTC m=+0.062059999 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, name=keepalived, io.openshift.expose-services=, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container)
Oct  2 08:55:35 np0005466031 nova_compute[235803]: 2025-10-02 12:55:35.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:55:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:35.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:55:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:35.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:55:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:55:36 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:55:36 np0005466031 nova_compute[235803]: 2025-10-02 12:55:36.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:37 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:55:37 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:55:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:55:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:37.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:55:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:37.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:39.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:39.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:39 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:39Z|00560|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct  2 08:55:39 np0005466031 nova_compute[235803]: 2025-10-02 12:55:39.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:39 np0005466031 ovn_controller[132413]: 2025-10-02T12:55:39Z|00561|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct  2 08:55:39 np0005466031 nova_compute[235803]: 2025-10-02 12:55:39.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:40 np0005466031 nova_compute[235803]: 2025-10-02 12:55:40.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:41 np0005466031 nova_compute[235803]: 2025-10-02 12:55:41.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:41.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:41.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:41 np0005466031 nova_compute[235803]: 2025-10-02 12:55:41.627 2 DEBUG nova.compute.manager [None req-b109d996-40f1-4ee7-a07a-6f74cf7bafbe 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:55:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:43.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:55:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:43.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:55:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:45 np0005466031 nova_compute[235803]: 2025-10-02 12:55:45.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 08:55:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:45.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 08:55:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:45.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:55:46 np0005466031 nova_compute[235803]: 2025-10-02 12:55:46.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:47.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:55:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:47.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:55:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:47.573 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:55:47 np0005466031 nova_compute[235803]: 2025-10-02 12:55:47.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:47.574 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:55:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:49.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:55:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:49.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:55:50 np0005466031 nova_compute[235803]: 2025-10-02 12:55:50.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:55:50.576 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:51 np0005466031 nova_compute[235803]: 2025-10-02 12:55:51.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:55:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:51.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:55:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:55:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:51.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:55:52 np0005466031 podman[302659]: 2025-10-02 12:55:52.661248192 +0000 UTC m=+0.071263645 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:55:52 np0005466031 podman[302660]: 2025-10-02 12:55:52.707825173 +0000 UTC m=+0.125735053 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:55:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:53.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:55:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:53.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:55:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:55 np0005466031 nova_compute[235803]: 2025-10-02 12:55:55.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:55.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:55:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:55.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:55:56 np0005466031 nova_compute[235803]: 2025-10-02 12:55:56.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:55:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:57.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:55:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:57.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:57 np0005466031 podman[302707]: 2025-10-02 12:55:57.621467316 +0000 UTC m=+0.051776372 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 08:55:57 np0005466031 podman[302706]: 2025-10-02 12:55:57.621723034 +0000 UTC m=+0.056675774 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Oct  2 08:55:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:55:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:59.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:55:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:55:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:59.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:00 np0005466031 nova_compute[235803]: 2025-10-02 12:56:00.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:01 np0005466031 nova_compute[235803]: 2025-10-02 12:56:01.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:01.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:01.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:01 np0005466031 nova_compute[235803]: 2025-10-02 12:56:01.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e343 e343: 3 total, 3 up, 3 in
Oct  2 08:56:02 np0005466031 nova_compute[235803]: 2025-10-02 12:56:02.975 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Acquiring lock "f7f84b92-e128-4f0a-9040-aebd8234e953" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:02 np0005466031 nova_compute[235803]: 2025-10-02 12:56:02.976 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.005 2 DEBUG nova.compute.manager [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.150 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.151 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.160 2 DEBUG nova.virt.hardware [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.160 2 INFO nova.compute.claims [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.325 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:03.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:03.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:56:03 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3883695521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.768 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.773 2 DEBUG nova.compute.provider_tree [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.815 2 DEBUG nova.scheduler.client.report [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.845 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.846 2 DEBUG nova.compute.manager [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.932 2 DEBUG nova.compute.manager [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.933 2 DEBUG nova.network.neutron [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.959 2 INFO nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:56:03 np0005466031 nova_compute[235803]: 2025-10-02 12:56:03.984 2 DEBUG nova.compute.manager [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.140 2 DEBUG nova.compute.manager [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.141 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.142 2 INFO nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Creating image(s)#033[00m
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.176 2 DEBUG nova.storage.rbd_utils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] rbd image f7f84b92-e128-4f0a-9040-aebd8234e953_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.208 2 DEBUG nova.storage.rbd_utils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] rbd image f7f84b92-e128-4f0a-9040-aebd8234e953_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.235 2 DEBUG nova.storage.rbd_utils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] rbd image f7f84b92-e128-4f0a-9040-aebd8234e953_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.238 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.312 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.313 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.314 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.314 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.340 2 DEBUG nova.storage.rbd_utils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] rbd image f7f84b92-e128-4f0a-9040-aebd8234e953_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.344 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f7f84b92-e128-4f0a-9040-aebd8234e953_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:04 np0005466031 nova_compute[235803]: 2025-10-02 12:56:04.377 2 DEBUG nova.policy [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cd676cdd850145d89e214075074d1c8a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '054571901053487f96bb43a2cd1d5537', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:56:05 np0005466031 nova_compute[235803]: 2025-10-02 12:56:05.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:05 np0005466031 nova_compute[235803]: 2025-10-02 12:56:05.422 2 DEBUG nova.network.neutron [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Successfully created port: b249132c-7480-4fc3-aef7-30e357ec1a4e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:56:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:56:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:05.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:56:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:05.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:05 np0005466031 nova_compute[235803]: 2025-10-02 12:56:05.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.473 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f7f84b92-e128-4f0a-9040-aebd8234e953_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.545 2 DEBUG nova.storage.rbd_utils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] resizing rbd image f7f84b92-e128-4f0a-9040-aebd8234e953_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.685 2 DEBUG nova.network.neutron [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Successfully updated port: b249132c-7480-4fc3-aef7-30e357ec1a4e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.691 2 DEBUG nova.objects.instance [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lazy-loading 'migration_context' on Instance uuid f7f84b92-e128-4f0a-9040-aebd8234e953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.782 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Acquiring lock "refresh_cache-f7f84b92-e128-4f0a-9040-aebd8234e953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.783 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Acquired lock "refresh_cache-f7f84b92-e128-4f0a-9040-aebd8234e953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.783 2 DEBUG nova.network.neutron [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.835 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.836 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Ensure instance console log exists: /var/lib/nova/instances/f7f84b92-e128-4f0a-9040-aebd8234e953/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.836 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.836 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.837 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.954 2 DEBUG nova.compute.manager [req-02db13d0-d8ee-4da4-9aed-5c54f78ca39c req-8d2c0ada-3729-4606-8580-6b95b4d73283 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Received event network-changed-b249132c-7480-4fc3-aef7-30e357ec1a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.954 2 DEBUG nova.compute.manager [req-02db13d0-d8ee-4da4-9aed-5c54f78ca39c req-8d2c0ada-3729-4606-8580-6b95b4d73283 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Refreshing instance network info cache due to event network-changed-b249132c-7480-4fc3-aef7-30e357ec1a4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:56:06 np0005466031 nova_compute[235803]: 2025-10-02 12:56:06.954 2 DEBUG oslo_concurrency.lockutils [req-02db13d0-d8ee-4da4-9aed-5c54f78ca39c req-8d2c0ada-3729-4606-8580-6b95b4d73283 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f7f84b92-e128-4f0a-9040-aebd8234e953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:07 np0005466031 nova_compute[235803]: 2025-10-02 12:56:07.284 2 DEBUG nova.network.neutron [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:56:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:07.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:07.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.343 2 DEBUG nova.network.neutron [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Updating instance_info_cache with network_info: [{"id": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "address": "fa:16:3e:b7:bf:aa", "network": {"id": "c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-312209042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "054571901053487f96bb43a2cd1d5537", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb249132c-74", "ovs_interfaceid": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:08 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Oct  2 08:56:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:56:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.0 total, 600.0 interval
Cumulative writes: 11K writes, 59K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1697 writes, 8180 keys, 1697 commit groups, 1.0 writes per commit group, ingest: 16.98 MB, 0.03 MB/s
Interval WAL: 1697 writes, 1697 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     76.1      0.94              0.20        35    0.027       0      0       0.0       0.0
  L6      1/0   10.25 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6    113.3     95.5      3.44              0.95        34    0.101    215K    18K       0.0       0.0
 Sum      1/0   10.25 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6     89.0     91.4      4.38              1.14        69    0.064    215K    18K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.6     48.8     49.9      1.35              0.18        10    0.135     42K   2601       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    113.3     95.5      3.44              0.95        34    0.101    215K    18K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     76.3      0.93              0.20        34    0.027       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4200.0 total, 600.0 interval
Flush(GB): cumulative 0.070, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.39 GB write, 0.10 MB/s write, 0.38 GB read, 0.09 MB/s read, 4.4 seconds
Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 1.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5570d4fad1f0#2 capacity: 304.00 MB usage: 43.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000245 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2552,42.17 MB,13.8726%) FilterBlock(69,603.55 KB,0.193882%) IndexBlock(69,1.02 MB,0.336341%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.685 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Releasing lock "refresh_cache-f7f84b92-e128-4f0a-9040-aebd8234e953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.686 2 DEBUG nova.compute.manager [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Instance network_info: |[{"id": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "address": "fa:16:3e:b7:bf:aa", "network": {"id": "c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-312209042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "054571901053487f96bb43a2cd1d5537", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb249132c-74", "ovs_interfaceid": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.686 2 DEBUG oslo_concurrency.lockutils [req-02db13d0-d8ee-4da4-9aed-5c54f78ca39c req-8d2c0ada-3729-4606-8580-6b95b4d73283 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f7f84b92-e128-4f0a-9040-aebd8234e953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.687 2 DEBUG nova.network.neutron [req-02db13d0-d8ee-4da4-9aed-5c54f78ca39c req-8d2c0ada-3729-4606-8580-6b95b4d73283 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Refreshing network info cache for port b249132c-7480-4fc3-aef7-30e357ec1a4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.690 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Start _get_guest_xml network_info=[{"id": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "address": "fa:16:3e:b7:bf:aa", "network": {"id": "c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-312209042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "054571901053487f96bb43a2cd1d5537", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb249132c-74", "ovs_interfaceid": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.694 2 WARNING nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.701 2 DEBUG nova.virt.libvirt.host [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.701 2 DEBUG nova.virt.libvirt.host [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.717 2 DEBUG nova.virt.libvirt.host [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.718 2 DEBUG nova.virt.libvirt.host [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.720 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.720 2 DEBUG nova.virt.hardware [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.721 2 DEBUG nova.virt.hardware [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.721 2 DEBUG nova.virt.hardware [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.722 2 DEBUG nova.virt.hardware [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.722 2 DEBUG nova.virt.hardware [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.722 2 DEBUG nova.virt.hardware [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.723 2 DEBUG nova.virt.hardware [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.723 2 DEBUG nova.virt.hardware [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.723 2 DEBUG nova.virt.hardware [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.724 2 DEBUG nova.virt.hardware [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.724 2 DEBUG nova.virt.hardware [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:56:08 np0005466031 nova_compute[235803]: 2025-10-02 12:56:08.729 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:56:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2440780115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:56:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:09 np0005466031 nova_compute[235803]: 2025-10-02 12:56:09.423 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:56:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:09.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:56:09 np0005466031 nova_compute[235803]: 2025-10-02 12:56:09.547 2 DEBUG nova.storage.rbd_utils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] rbd image f7f84b92-e128-4f0a-9040-aebd8234e953_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:09 np0005466031 nova_compute[235803]: 2025-10-02 12:56:09.550 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:09.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:56:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1041890516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.004 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.006 2 DEBUG nova.virt.libvirt.vif [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:56:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1276848269',display_name='tempest-ServerPasswordTestJSON-server-1276848269',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1276848269',id=154,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='054571901053487f96bb43a2cd1d5537',ramdisk_id='',reservation_id='r-11a435ik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1882177483',owner_user_name='tempest-ServerPasswordTestJ
SON-1882177483-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:56:04Z,user_data=None,user_id='cd676cdd850145d89e214075074d1c8a',uuid=f7f84b92-e128-4f0a-9040-aebd8234e953,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "address": "fa:16:3e:b7:bf:aa", "network": {"id": "c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-312209042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "054571901053487f96bb43a2cd1d5537", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb249132c-74", "ovs_interfaceid": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.007 2 DEBUG nova.network.os_vif_util [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Converting VIF {"id": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "address": "fa:16:3e:b7:bf:aa", "network": {"id": "c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-312209042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "054571901053487f96bb43a2cd1d5537", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb249132c-74", "ovs_interfaceid": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.007 2 DEBUG nova.network.os_vif_util [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=b249132c-7480-4fc3-aef7-30e357ec1a4e,network=Network(c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb249132c-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.009 2 DEBUG nova.objects.instance [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lazy-loading 'pci_devices' on Instance uuid f7f84b92-e128-4f0a-9040-aebd8234e953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.437 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  <uuid>f7f84b92-e128-4f0a-9040-aebd8234e953</uuid>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  <name>instance-0000009a</name>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerPasswordTestJSON-server-1276848269</nova:name>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:56:08</nova:creationTime>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <nova:user uuid="cd676cdd850145d89e214075074d1c8a">tempest-ServerPasswordTestJSON-1882177483-project-member</nova:user>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <nova:project uuid="054571901053487f96bb43a2cd1d5537">tempest-ServerPasswordTestJSON-1882177483</nova:project>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <nova:port uuid="b249132c-7480-4fc3-aef7-30e357ec1a4e">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <entry name="serial">f7f84b92-e128-4f0a-9040-aebd8234e953</entry>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <entry name="uuid">f7f84b92-e128-4f0a-9040-aebd8234e953</entry>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/f7f84b92-e128-4f0a-9040-aebd8234e953_disk">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/f7f84b92-e128-4f0a-9040-aebd8234e953_disk.config">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:b7:bf:aa"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <target dev="tapb249132c-74"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/f7f84b92-e128-4f0a-9040-aebd8234e953/console.log" append="off"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:56:10 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:56:10 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:56:10 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:56:10 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.438 2 DEBUG nova.compute.manager [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Preparing to wait for external event network-vif-plugged-b249132c-7480-4fc3-aef7-30e357ec1a4e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.439 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Acquiring lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.439 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.439 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.440 2 DEBUG nova.virt.libvirt.vif [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:56:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1276848269',display_name='tempest-ServerPasswordTestJSON-server-1276848269',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1276848269',id=154,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='054571901053487f96bb43a2cd1d5537',ramdisk_id='',reservation_id='r-11a435ik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1882177483',owner_user_name='tempest-ServerPasswordTestJSON-1882177483-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:56:04Z,user_data=None,user_id='cd676cdd850145d89e214075074d1c8a',uuid=f7f84b92-e128-4f0a-9040-aebd8234e953,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "address": "fa:16:3e:b7:bf:aa", "network": {"id": "c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-312209042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "054571901053487f96bb43a2cd1d5537", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb249132c-74", "ovs_interfaceid": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.440 2 DEBUG nova.network.os_vif_util [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Converting VIF {"id": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "address": "fa:16:3e:b7:bf:aa", "network": {"id": "c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-312209042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "054571901053487f96bb43a2cd1d5537", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb249132c-74", "ovs_interfaceid": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.441 2 DEBUG nova.network.os_vif_util [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=b249132c-7480-4fc3-aef7-30e357ec1a4e,network=Network(c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb249132c-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.441 2 DEBUG os_vif [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=b249132c-7480-4fc3-aef7-30e357ec1a4e,network=Network(c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb249132c-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.442 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.445 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb249132c-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb249132c-74, col_values=(('external_ids', {'iface-id': 'b249132c-7480-4fc3-aef7-30e357ec1a4e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:bf:aa', 'vm-uuid': 'f7f84b92-e128-4f0a-9040-aebd8234e953'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:10 np0005466031 NetworkManager[44907]: <info>  [1759409770.4478] manager: (tapb249132c-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.454 2 INFO os_vif [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=b249132c-7480-4fc3-aef7-30e357ec1a4e,network=Network(c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb249132c-74')#033[00m
Oct  2 08:56:10 np0005466031 nova_compute[235803]: 2025-10-02 12:56:10.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.033 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.033 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.033 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.033 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.034 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.070 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.070 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.071 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] No VIF found with MAC fa:16:3e:b7:bf:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.071 2 INFO nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Using config drive#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.098 2 DEBUG nova.storage.rbd_utils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] rbd image f7f84b92-e128-4f0a-9040-aebd8234e953_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:56:11 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4169217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.467 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:11.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:11.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.621 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.621 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.624 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.624 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.770 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.771 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4131MB free_disk=20.795391082763672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.772 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:11 np0005466031 nova_compute[235803]: 2025-10-02 12:56:11.772 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.119 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 58e2a72f-a2b9-41a0-9c67-607e978d8b88 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.119 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance f7f84b92-e128-4f0a-9040-aebd8234e953 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.120 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.120 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.217 2 INFO nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Creating config drive at /var/lib/nova/instances/f7f84b92-e128-4f0a-9040-aebd8234e953/disk.config#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.222 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7f84b92-e128-4f0a-9040-aebd8234e953/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpikisy9gw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.294 2 DEBUG nova.network.neutron [req-02db13d0-d8ee-4da4-9aed-5c54f78ca39c req-8d2c0ada-3729-4606-8580-6b95b4d73283 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Updated VIF entry in instance network info cache for port b249132c-7480-4fc3-aef7-30e357ec1a4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.296 2 DEBUG nova.network.neutron [req-02db13d0-d8ee-4da4-9aed-5c54f78ca39c req-8d2c0ada-3729-4606-8580-6b95b4d73283 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Updating instance_info_cache with network_info: [{"id": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "address": "fa:16:3e:b7:bf:aa", "network": {"id": "c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-312209042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "054571901053487f96bb43a2cd1d5537", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb249132c-74", "ovs_interfaceid": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.322 2 DEBUG oslo_concurrency.lockutils [req-02db13d0-d8ee-4da4-9aed-5c54f78ca39c req-8d2c0ada-3729-4606-8580-6b95b4d73283 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f7f84b92-e128-4f0a-9040-aebd8234e953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.335 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.365 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7f84b92-e128-4f0a-9040-aebd8234e953/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpikisy9gw" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.396 2 DEBUG nova.storage.rbd_utils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] rbd image f7f84b92-e128-4f0a-9040-aebd8234e953_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.400 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7f84b92-e128-4f0a-9040-aebd8234e953/disk.config f7f84b92-e128-4f0a-9040-aebd8234e953_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:56:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1941349339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.772 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.779 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.868 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.905 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.906 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:12 np0005466031 nova_compute[235803]: 2025-10-02 12:56:12.907 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:13.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:13.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:13 np0005466031 nova_compute[235803]: 2025-10-02 12:56:13.924 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:13 np0005466031 nova_compute[235803]: 2025-10-02 12:56:13.925 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:56:13 np0005466031 nova_compute[235803]: 2025-10-02 12:56:13.925 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:56:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:15 np0005466031 nova_compute[235803]: 2025-10-02 12:56:15.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:15 np0005466031 nova_compute[235803]: 2025-10-02 12:56:15.306 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:56:15 np0005466031 nova_compute[235803]: 2025-10-02 12:56:15.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:56:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:15.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:56:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:15.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:15 np0005466031 nova_compute[235803]: 2025-10-02 12:56:15.647 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:15 np0005466031 nova_compute[235803]: 2025-10-02 12:56:15.648 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:15 np0005466031 nova_compute[235803]: 2025-10-02 12:56:15.648 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:56:15 np0005466031 nova_compute[235803]: 2025-10-02 12:56:15.648 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:56:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:17.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:17.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:18 np0005466031 nova_compute[235803]: 2025-10-02 12:56:18.263 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Updating instance_info_cache with network_info: [{"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:18 np0005466031 nova_compute[235803]: 2025-10-02 12:56:18.303 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:18 np0005466031 nova_compute[235803]: 2025-10-02 12:56:18.303 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:56:18 np0005466031 nova_compute[235803]: 2025-10-02 12:56:18.304 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:18 np0005466031 nova_compute[235803]: 2025-10-02 12:56:18.304 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:18 np0005466031 nova_compute[235803]: 2025-10-02 12:56:18.573 2 DEBUG oslo_concurrency.processutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7f84b92-e128-4f0a-9040-aebd8234e953/disk.config f7f84b92-e128-4f0a-9040-aebd8234e953_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:18 np0005466031 nova_compute[235803]: 2025-10-02 12:56:18.573 2 INFO nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Deleting local config drive /var/lib/nova/instances/f7f84b92-e128-4f0a-9040-aebd8234e953/disk.config because it was imported into RBD.#033[00m
Oct  2 08:56:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e344 e344: 3 total, 3 up, 3 in
Oct  2 08:56:18 np0005466031 kernel: tapb249132c-74: entered promiscuous mode
Oct  2 08:56:18 np0005466031 NetworkManager[44907]: <info>  [1759409778.6284] manager: (tapb249132c-74): new Tun device (/org/freedesktop/NetworkManager/Devices/259)
Oct  2 08:56:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:56:18Z|00562|binding|INFO|Claiming lport b249132c-7480-4fc3-aef7-30e357ec1a4e for this chassis.
Oct  2 08:56:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:56:18Z|00563|binding|INFO|b249132c-7480-4fc3-aef7-30e357ec1a4e: Claiming fa:16:3e:b7:bf:aa 10.100.0.6
Oct  2 08:56:18 np0005466031 nova_compute[235803]: 2025-10-02 12:56:18.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:18 np0005466031 systemd-machined[192227]: New machine qemu-66-instance-0000009a.
Oct  2 08:56:18 np0005466031 systemd[1]: Started Virtual Machine qemu-66-instance-0000009a.
Oct  2 08:56:18 np0005466031 systemd-udevd[303173]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:56:18 np0005466031 nova_compute[235803]: 2025-10-02 12:56:18.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:18 np0005466031 NetworkManager[44907]: <info>  [1759409778.6968] device (tapb249132c-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:56:18 np0005466031 NetworkManager[44907]: <info>  [1759409778.6976] device (tapb249132c-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:56:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:56:18Z|00564|binding|INFO|Setting lport b249132c-7480-4fc3-aef7-30e357ec1a4e ovn-installed in OVS
Oct  2 08:56:18 np0005466031 nova_compute[235803]: 2025-10-02 12:56:18.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005466031 ovn_controller[132413]: 2025-10-02T12:56:19Z|00565|binding|INFO|Setting lport b249132c-7480-4fc3-aef7-30e357ec1a4e up in Southbound
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.120 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:bf:aa 10.100.0.6'], port_security=['fa:16:3e:b7:bf:aa 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f7f84b92-e128-4f0a-9040-aebd8234e953', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '054571901053487f96bb43a2cd1d5537', 'neutron:revision_number': '2', 'neutron:security_group_ids': '65bacda5-0ba4-46f4-8758-f4d8ffc3c27d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e4b3b0f4-76fb-43d3-b06a-7aed3b711fb5, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=b249132c-7480-4fc3-aef7-30e357ec1a4e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.121 141898 INFO neutron.agent.ovn.metadata.agent [-] Port b249132c-7480-4fc3-aef7-30e357ec1a4e in datapath c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc bound to our chassis#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.123 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.136 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7abf08ca-4368-47f0-9f87-c212aaec1637]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.137 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc71ca1a4-f1 in ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.139 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc71ca1a4-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.139 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e9980d6b-1c08-4da8-b66a-f0ce2329d512]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.139 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[84b185de-f00c-4c72-a5a3-54b5ade85a97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.154 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[4de25479-0d61-42c4-bd81-66622cbacf6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.169 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[878288de-4f31-4b16-83d2-98fecd7f4532]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.207 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0de2a53b-17ac-4502-9bd9-833e11a3506b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.215 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[658329cf-f8df-4be3-a63e-96b5e795d2d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 NetworkManager[44907]: <info>  [1759409779.2160] manager: (tapc71ca1a4-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/260)
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.252 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c43e8d9a-5c46-49aa-8ebc-b935b46b2041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.255 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a23c695c-33de-497a-81ad-9ba71149fc78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:19 np0005466031 NetworkManager[44907]: <info>  [1759409779.2760] device (tapc71ca1a4-f0): carrier: link connected
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.280 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[84518b3b-6d61-4d08-8d13-910ddc643944]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.296 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cd45b832-da8a-448e-b716-69d5b62f46ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc71ca1a4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:b1:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763486, 'reachable_time': 39587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303208, 'error': None, 'target': 'ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.309 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b28db0-179d-4275-bf5d-fd9414b9e0ca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:b14c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763486, 'tstamp': 763486}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303209, 'error': None, 'target': 'ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.326 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d68dfb74-3774-4ca8-ba7e-66d00eb7ff42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc71ca1a4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:b1:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763486, 'reachable_time': 39587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303210, 'error': None, 'target': 'ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.355 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[68748088-8c4e-4b49-aaf2-612b9fa97066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.410 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8067d673-5f7d-4c2a-aa0b-c5ab802a0e95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.411 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc71ca1a4-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.412 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.412 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc71ca1a4-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:19 np0005466031 nova_compute[235803]: 2025-10-02 12:56:19.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005466031 NetworkManager[44907]: <info>  [1759409779.4368] manager: (tapc71ca1a4-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Oct  2 08:56:19 np0005466031 kernel: tapc71ca1a4-f0: entered promiscuous mode
Oct  2 08:56:19 np0005466031 nova_compute[235803]: 2025-10-02 12:56:19.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.439 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc71ca1a4-f0, col_values=(('external_ids', {'iface-id': 'ef2d4422-b47a-48a0-ae59-a1d6d371c5a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:19 np0005466031 nova_compute[235803]: 2025-10-02 12:56:19.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005466031 ovn_controller[132413]: 2025-10-02T12:56:19Z|00566|binding|INFO|Releasing lport ef2d4422-b47a-48a0-ae59-a1d6d371c5a6 from this chassis (sb_readonly=0)
Oct  2 08:56:19 np0005466031 nova_compute[235803]: 2025-10-02 12:56:19.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.458 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.459 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[28ed5e0a-0f1c-4802-b911-c30cd809e970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.460 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc.pid.haproxy
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:56:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:19.461 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc', 'env', 'PROCESS_TAG=haproxy-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:56:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:19.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:19.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:19 np0005466031 nova_compute[235803]: 2025-10-02 12:56:19.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:19 np0005466031 nova_compute[235803]: 2025-10-02 12:56:19.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:56:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e345 e345: 3 total, 3 up, 3 in
Oct  2 08:56:19 np0005466031 podman[303243]: 2025-10-02 12:56:19.857820272 +0000 UTC m=+0.095363099 container create e63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 08:56:19 np0005466031 podman[303243]: 2025-10-02 12:56:19.78590377 +0000 UTC m=+0.023446607 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:56:19 np0005466031 systemd[1]: Started libpod-conmon-e63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e.scope.
Oct  2 08:56:19 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:56:19 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88a274fd00e73b82a41af1cff4f63c6ae0398f489375e97f8f40aec97dca0f2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:56:19 np0005466031 podman[303243]: 2025-10-02 12:56:19.977690546 +0000 UTC m=+0.215233393 container init e63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:56:19 np0005466031 podman[303243]: 2025-10-02 12:56:19.982767192 +0000 UTC m=+0.220310009 container start e63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:56:20 np0005466031 neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc[303260]: [NOTICE]   (303264) : New worker (303266) forked
Oct  2 08:56:20 np0005466031 neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc[303260]: [NOTICE]   (303264) : Loading success.
Oct  2 08:56:20 np0005466031 nova_compute[235803]: 2025-10-02 12:56:20.132 2 DEBUG nova.compute.manager [req-4029a22c-297a-4156-9dca-3a3ba41bd2ab req-465e63cf-d153-4573-a58d-2bd29b2beefe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Received event network-vif-plugged-b249132c-7480-4fc3-aef7-30e357ec1a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:20 np0005466031 nova_compute[235803]: 2025-10-02 12:56:20.133 2 DEBUG oslo_concurrency.lockutils [req-4029a22c-297a-4156-9dca-3a3ba41bd2ab req-465e63cf-d153-4573-a58d-2bd29b2beefe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:20 np0005466031 nova_compute[235803]: 2025-10-02 12:56:20.134 2 DEBUG oslo_concurrency.lockutils [req-4029a22c-297a-4156-9dca-3a3ba41bd2ab req-465e63cf-d153-4573-a58d-2bd29b2beefe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:20 np0005466031 nova_compute[235803]: 2025-10-02 12:56:20.134 2 DEBUG oslo_concurrency.lockutils [req-4029a22c-297a-4156-9dca-3a3ba41bd2ab req-465e63cf-d153-4573-a58d-2bd29b2beefe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:20 np0005466031 nova_compute[235803]: 2025-10-02 12:56:20.134 2 DEBUG nova.compute.manager [req-4029a22c-297a-4156-9dca-3a3ba41bd2ab req-465e63cf-d153-4573-a58d-2bd29b2beefe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Processing event network-vif-plugged-b249132c-7480-4fc3-aef7-30e357ec1a4e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:56:20 np0005466031 nova_compute[235803]: 2025-10-02 12:56:20.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:20 np0005466031 nova_compute[235803]: 2025-10-02 12:56:20.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.232 2 DEBUG nova.compute.manager [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.233 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409781.232409, f7f84b92-e128-4f0a-9040-aebd8234e953 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.234 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] VM Started (Lifecycle Event)#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.235 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.238 2 INFO nova.virt.libvirt.driver [-] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Instance spawned successfully.#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.238 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.276 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.278 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.286 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.286 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.287 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.287 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.287 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.287 2 DEBUG nova.virt.libvirt.driver [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.317 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.317 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409781.2333436, f7f84b92-e128-4f0a-9040-aebd8234e953 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.317 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.355 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.358 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409781.2354255, f7f84b92-e128-4f0a-9040-aebd8234e953 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.358 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.370 2 INFO nova.compute.manager [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Took 17.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.370 2 DEBUG nova.compute.manager [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.388 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.390 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.430 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.463 2 INFO nova.compute.manager [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Took 18.36 seconds to build instance.#033[00m
Oct  2 08:56:21 np0005466031 nova_compute[235803]: 2025-10-02 12:56:21.501 2 DEBUG oslo_concurrency.lockutils [None req-d2dad3a9-475f-4bf7-a820-45fe2e63f61d cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:21.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:21.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.330 2 DEBUG nova.compute.manager [req-5907a4e0-45d7-453d-96f9-beef9ca395f3 req-03dcb8c2-d209-41e9-a802-464175d2a9c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Received event network-vif-plugged-b249132c-7480-4fc3-aef7-30e357ec1a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.331 2 DEBUG oslo_concurrency.lockutils [req-5907a4e0-45d7-453d-96f9-beef9ca395f3 req-03dcb8c2-d209-41e9-a802-464175d2a9c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.331 2 DEBUG oslo_concurrency.lockutils [req-5907a4e0-45d7-453d-96f9-beef9ca395f3 req-03dcb8c2-d209-41e9-a802-464175d2a9c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.331 2 DEBUG oslo_concurrency.lockutils [req-5907a4e0-45d7-453d-96f9-beef9ca395f3 req-03dcb8c2-d209-41e9-a802-464175d2a9c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.331 2 DEBUG nova.compute.manager [req-5907a4e0-45d7-453d-96f9-beef9ca395f3 req-03dcb8c2-d209-41e9-a802-464175d2a9c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] No waiting events found dispatching network-vif-plugged-b249132c-7480-4fc3-aef7-30e357ec1a4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.332 2 WARNING nova.compute.manager [req-5907a4e0-45d7-453d-96f9-beef9ca395f3 req-03dcb8c2-d209-41e9-a802-464175d2a9c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Received unexpected event network-vif-plugged-b249132c-7480-4fc3-aef7-30e357ec1a4e for instance with vm_state active and task_state None.#033[00m
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.771 2 DEBUG oslo_concurrency.lockutils [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Acquiring lock "f7f84b92-e128-4f0a-9040-aebd8234e953" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.771 2 DEBUG oslo_concurrency.lockutils [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.771 2 DEBUG oslo_concurrency.lockutils [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Acquiring lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.772 2 DEBUG oslo_concurrency.lockutils [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.772 2 DEBUG oslo_concurrency.lockutils [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.773 2 INFO nova.compute.manager [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Terminating instance#033[00m
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.773 2 DEBUG nova.compute.manager [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:56:22 np0005466031 kernel: tapb249132c-74 (unregistering): left promiscuous mode
Oct  2 08:56:22 np0005466031 NetworkManager[44907]: <info>  [1759409782.8108] device (tapb249132c-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:56:22 np0005466031 ovn_controller[132413]: 2025-10-02T12:56:22Z|00567|binding|INFO|Releasing lport b249132c-7480-4fc3-aef7-30e357ec1a4e from this chassis (sb_readonly=0)
Oct  2 08:56:22 np0005466031 ovn_controller[132413]: 2025-10-02T12:56:22Z|00568|binding|INFO|Setting lport b249132c-7480-4fc3-aef7-30e357ec1a4e down in Southbound
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:22 np0005466031 ovn_controller[132413]: 2025-10-02T12:56:22Z|00569|binding|INFO|Removing iface tapb249132c-74 ovn-installed in OVS
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:22 np0005466031 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Oct  2 08:56:22 np0005466031 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000009a.scope: Consumed 3.550s CPU time.
Oct  2 08:56:22 np0005466031 systemd-machined[192227]: Machine qemu-66-instance-0000009a terminated.
Oct  2 08:56:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:22.888 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:bf:aa 10.100.0.6'], port_security=['fa:16:3e:b7:bf:aa 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f7f84b92-e128-4f0a-9040-aebd8234e953', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '054571901053487f96bb43a2cd1d5537', 'neutron:revision_number': '4', 'neutron:security_group_ids': '65bacda5-0ba4-46f4-8758-f4d8ffc3c27d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e4b3b0f4-76fb-43d3-b06a-7aed3b711fb5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=b249132c-7480-4fc3-aef7-30e357ec1a4e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:56:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:22.890 141898 INFO neutron.agent.ovn.metadata.agent [-] Port b249132c-7480-4fc3-aef7-30e357ec1a4e in datapath c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc unbound from our chassis#033[00m
Oct  2 08:56:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:22.892 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:56:22 np0005466031 podman[303320]: 2025-10-02 12:56:22.893534327 +0000 UTC m=+0.052009110 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, 
managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:56:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:22.895 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2541dd1d-18cc-4017-a98b-715237362a18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:22.896 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc namespace which is not needed anymore#033[00m
Oct  2 08:56:22 np0005466031 podman[303321]: 2025-10-02 12:56:22.951320501 +0000 UTC m=+0.109395742 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:56:22 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:22.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:23.007 2 INFO nova.virt.libvirt.driver [-] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Instance destroyed successfully.#033[00m
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:23.007 2 DEBUG nova.objects.instance [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lazy-loading 'resources' on Instance uuid f7f84b92-e128-4f0a-9040-aebd8234e953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:56:23 np0005466031 neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc[303260]: [NOTICE]   (303264) : haproxy version is 2.8.14-c23fe91
Oct  2 08:56:23 np0005466031 neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc[303260]: [NOTICE]   (303264) : path to executable is /usr/sbin/haproxy
Oct  2 08:56:23 np0005466031 neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc[303260]: [WARNING]  (303264) : Exiting Master process...
Oct  2 08:56:23 np0005466031 neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc[303260]: [WARNING]  (303264) : Exiting Master process...
Oct  2 08:56:23 np0005466031 neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc[303260]: [ALERT]    (303264) : Current worker (303266) exited with code 143 (Terminated)
Oct  2 08:56:23 np0005466031 neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc[303260]: [WARNING]  (303264) : All workers exited. Exiting... (0)
Oct  2 08:56:23 np0005466031 systemd[1]: libpod-e63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e.scope: Deactivated successfully.
Oct  2 08:56:23 np0005466031 podman[303388]: 2025-10-02 12:56:23.021302618 +0000 UTC m=+0.045433900 container died e63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:56:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 e346: 3 total, 3 up, 3 in
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:23.044 2 DEBUG nova.virt.libvirt.vif [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1276848269',display_name='tempest-ServerPasswordTestJSON-server-1276848269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1276848269',id=154,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:56:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='054571901053487f96bb43a2cd1d5537',ramdisk_id='',reservation_id='r-11a435ik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1882177483',owner_user_name='tempest-ServerPasswordTestJSON-1882177483-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:56:22Z,user_data=None,user_id='cd676cdd850145d89e214075074d1c8a',uuid=f7f84b92-e128-4f0a-9040-aebd8234e953,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "address": "fa:16:3e:b7:bf:aa", "network": {"id": "c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-312209042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "054571901053487f96bb43a2cd1d5537", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb249132c-74", "ovs_interfaceid": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:23.044 2 DEBUG nova.network.os_vif_util [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Converting VIF {"id": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "address": "fa:16:3e:b7:bf:aa", "network": {"id": "c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-312209042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "054571901053487f96bb43a2cd1d5537", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb249132c-74", "ovs_interfaceid": "b249132c-7480-4fc3-aef7-30e357ec1a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:56:23 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e-userdata-shm.mount: Deactivated successfully.
Oct  2 08:56:23 np0005466031 systemd[1]: var-lib-containers-storage-overlay-a88a274fd00e73b82a41af1cff4f63c6ae0398f489375e97f8f40aec97dca0f2-merged.mount: Deactivated successfully.
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:23.045 2 DEBUG nova.network.os_vif_util [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=b249132c-7480-4fc3-aef7-30e357ec1a4e,network=Network(c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb249132c-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:23.050 2 DEBUG os_vif [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=b249132c-7480-4fc3-aef7-30e357ec1a4e,network=Network(c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb249132c-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:23.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:23.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb249132c-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:23.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:23.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:23.057 2 INFO os_vif [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=b249132c-7480-4fc3-aef7-30e357ec1a4e,network=Network(c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb249132c-74')#033[00m
Oct  2 08:56:23 np0005466031 podman[303388]: 2025-10-02 12:56:23.058346485 +0000 UTC m=+0.082477777 container cleanup e63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:56:23 np0005466031 systemd[1]: libpod-conmon-e63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e.scope: Deactivated successfully.
Oct  2 08:56:23 np0005466031 podman[303432]: 2025-10-02 12:56:23.135157268 +0000 UTC m=+0.058042773 container remove e63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:56:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:23.142 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7ddd2f32-6f2e-4d99-af6c-26953b7bf571]: (4, ('Thu Oct  2 12:56:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc (e63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e)\ne63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e\nThu Oct  2 12:56:23 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc (e63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e)\ne63fdf3c1d68e0a4d8abe36a30e6760b883faca53c3231f68540140429325b3e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:23.143 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8792c504-9ac6-4e6f-91fc-92c46fcfce00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:23.144 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc71ca1a4-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:23.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:23 np0005466031 kernel: tapc71ca1a4-f0: left promiscuous mode
Oct  2 08:56:23 np0005466031 nova_compute[235803]: 2025-10-02 12:56:23.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:23.163 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a3377a-d1bc-4468-b1da-5a5178c5cefe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:23.192 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1e493481-2670-48db-b035-8544d3f10371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:23.194 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[77ca41d9-febf-4a76-8dc1-2cd478d91ce9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:23.208 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4876fd01-ea7c-43c1-8a66-052edb894c10]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763479, 'reachable_time': 26717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303458, 'error': None, 'target': 'ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:23.210 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c71ca1a4-ff4b-4026-8cfe-4e698ffd3bfc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:56:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:23.211 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[be24ea12-b3d0-4819-ab82-8f559898c916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:23 np0005466031 systemd[1]: run-netns-ovnmeta\x2dc71ca1a4\x2dff4b\x2d4026\x2d8cfe\x2d4e698ffd3bfc.mount: Deactivated successfully.
Oct  2 08:56:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:23.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:23.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.269 2 INFO nova.virt.libvirt.driver [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Deleting instance files /var/lib/nova/instances/f7f84b92-e128-4f0a-9040-aebd8234e953_del#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.271 2 INFO nova.virt.libvirt.driver [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Deletion of /var/lib/nova/instances/f7f84b92-e128-4f0a-9040-aebd8234e953_del complete#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.638 2 INFO nova.compute.manager [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Took 1.86 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.639 2 DEBUG oslo.service.loopingcall [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.640 2 DEBUG nova.compute.manager [-] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.640 2 DEBUG nova.network.neutron [-] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.709 2 DEBUG nova.compute.manager [req-76153881-5f62-47fe-acfd-4e9888e3784d req-05f12232-4317-478a-bb6f-a66ca2a7da19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Received event network-vif-unplugged-b249132c-7480-4fc3-aef7-30e357ec1a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.709 2 DEBUG oslo_concurrency.lockutils [req-76153881-5f62-47fe-acfd-4e9888e3784d req-05f12232-4317-478a-bb6f-a66ca2a7da19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.710 2 DEBUG oslo_concurrency.lockutils [req-76153881-5f62-47fe-acfd-4e9888e3784d req-05f12232-4317-478a-bb6f-a66ca2a7da19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.711 2 DEBUG oslo_concurrency.lockutils [req-76153881-5f62-47fe-acfd-4e9888e3784d req-05f12232-4317-478a-bb6f-a66ca2a7da19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.712 2 DEBUG nova.compute.manager [req-76153881-5f62-47fe-acfd-4e9888e3784d req-05f12232-4317-478a-bb6f-a66ca2a7da19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] No waiting events found dispatching network-vif-unplugged-b249132c-7480-4fc3-aef7-30e357ec1a4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.712 2 DEBUG nova.compute.manager [req-76153881-5f62-47fe-acfd-4e9888e3784d req-05f12232-4317-478a-bb6f-a66ca2a7da19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Received event network-vif-unplugged-b249132c-7480-4fc3-aef7-30e357ec1a4e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.713 2 DEBUG nova.compute.manager [req-76153881-5f62-47fe-acfd-4e9888e3784d req-05f12232-4317-478a-bb6f-a66ca2a7da19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Received event network-vif-plugged-b249132c-7480-4fc3-aef7-30e357ec1a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.713 2 DEBUG oslo_concurrency.lockutils [req-76153881-5f62-47fe-acfd-4e9888e3784d req-05f12232-4317-478a-bb6f-a66ca2a7da19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.714 2 DEBUG oslo_concurrency.lockutils [req-76153881-5f62-47fe-acfd-4e9888e3784d req-05f12232-4317-478a-bb6f-a66ca2a7da19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.714 2 DEBUG oslo_concurrency.lockutils [req-76153881-5f62-47fe-acfd-4e9888e3784d req-05f12232-4317-478a-bb6f-a66ca2a7da19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.714 2 DEBUG nova.compute.manager [req-76153881-5f62-47fe-acfd-4e9888e3784d req-05f12232-4317-478a-bb6f-a66ca2a7da19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] No waiting events found dispatching network-vif-plugged-b249132c-7480-4fc3-aef7-30e357ec1a4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:56:24 np0005466031 nova_compute[235803]: 2025-10-02 12:56:24.715 2 WARNING nova.compute.manager [req-76153881-5f62-47fe-acfd-4e9888e3784d req-05f12232-4317-478a-bb6f-a66ca2a7da19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Received unexpected event network-vif-plugged-b249132c-7480-4fc3-aef7-30e357ec1a4e for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:56:25 np0005466031 nova_compute[235803]: 2025-10-02 12:56:25.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:56:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:25.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:56:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:25.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:25.866 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:25.866 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:25.867 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:26 np0005466031 nova_compute[235803]: 2025-10-02 12:56:26.342 2 DEBUG nova.network.neutron [-] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:26 np0005466031 nova_compute[235803]: 2025-10-02 12:56:26.371 2 INFO nova.compute.manager [-] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Took 1.73 seconds to deallocate network for instance.#033[00m
Oct  2 08:56:26 np0005466031 nova_compute[235803]: 2025-10-02 12:56:26.425 2 DEBUG nova.compute.manager [req-1d1d2b1e-0c61-4c84-8cf7-c14d17ac9a35 req-359bf7a9-b8f1-4dcf-b76d-8666acb7f7a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Received event network-vif-deleted-b249132c-7480-4fc3-aef7-30e357ec1a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:26 np0005466031 nova_compute[235803]: 2025-10-02 12:56:26.432 2 DEBUG oslo_concurrency.lockutils [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:26 np0005466031 nova_compute[235803]: 2025-10-02 12:56:26.433 2 DEBUG oslo_concurrency.lockutils [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:26 np0005466031 nova_compute[235803]: 2025-10-02 12:56:26.531 2 DEBUG oslo_concurrency.processutils [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:56:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3334793604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:56:26 np0005466031 nova_compute[235803]: 2025-10-02 12:56:26.967 2 DEBUG oslo_concurrency.processutils [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:26 np0005466031 nova_compute[235803]: 2025-10-02 12:56:26.974 2 DEBUG nova.compute.provider_tree [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:56:27 np0005466031 nova_compute[235803]: 2025-10-02 12:56:27.000 2 DEBUG nova.scheduler.client.report [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:56:27 np0005466031 nova_compute[235803]: 2025-10-02 12:56:27.029 2 DEBUG oslo_concurrency.lockutils [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:27 np0005466031 nova_compute[235803]: 2025-10-02 12:56:27.068 2 INFO nova.scheduler.client.report [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Deleted allocations for instance f7f84b92-e128-4f0a-9040-aebd8234e953#033[00m
Oct  2 08:56:27 np0005466031 nova_compute[235803]: 2025-10-02 12:56:27.133 2 DEBUG oslo_concurrency.lockutils [None req-7a1264fa-fbbc-4bab-a544-56f785180a53 cd676cdd850145d89e214075074d1c8a 054571901053487f96bb43a2cd1d5537 - - default default] Lock "f7f84b92-e128-4f0a-9040-aebd8234e953" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:56:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:27.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:56:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:27.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:28 np0005466031 nova_compute[235803]: 2025-10-02 12:56:28.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:28 np0005466031 podman[303536]: 2025-10-02 12:56:28.618278215 +0000 UTC m=+0.048404105 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:56:28 np0005466031 nova_compute[235803]: 2025-10-02 12:56:28.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:28 np0005466031 podman[303535]: 2025-10-02 12:56:28.651493172 +0000 UTC m=+0.082537169 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct  2 08:56:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:29.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:56:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:29.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:56:30 np0005466031 nova_compute[235803]: 2025-10-02 12:56:30.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:30 np0005466031 ovn_controller[132413]: 2025-10-02T12:56:30Z|00570|binding|INFO|Releasing lport 4551f74b-5a9c-4479-827a-bb210e8a0b52 from this chassis (sb_readonly=0)
Oct  2 08:56:30 np0005466031 nova_compute[235803]: 2025-10-02 12:56:30.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.042 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.043 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.059 2 DEBUG nova.compute.manager [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.136 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.137 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.144 2 DEBUG nova.virt.hardware [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.144 2 INFO nova.compute.claims [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.273 2 DEBUG oslo_concurrency.processutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:56:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:31.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:56:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:31.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:56:31 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1910028212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.706 2 DEBUG oslo_concurrency.processutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.711 2 DEBUG nova.compute.provider_tree [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.730 2 DEBUG nova.scheduler.client.report [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.762 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.762 2 DEBUG nova.compute.manager [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.823 2 DEBUG nova.compute.manager [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.824 2 DEBUG nova.network.neutron [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.851 2 INFO nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.877 2 DEBUG nova.compute.manager [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:56:31 np0005466031 nova_compute[235803]: 2025-10-02 12:56:31.957 2 INFO nova.virt.block_device [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Booting with volume-backed-image 423b8b5f-aab8-418b-8fad-d82c90818bdd at /dev/vda#033[00m
Oct  2 08:56:32 np0005466031 nova_compute[235803]: 2025-10-02 12:56:32.145 2 DEBUG nova.policy [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00be63ea13c84e3d9419078865524099', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:56:33 np0005466031 nova_compute[235803]: 2025-10-02 12:56:33.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:33 np0005466031 nova_compute[235803]: 2025-10-02 12:56:33.136 2 DEBUG nova.network.neutron [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Successfully created port: e507d143-8e0d-443a-ba92-defb6e0097f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:56:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:33.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:56:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:33.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:56:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:35 np0005466031 nova_compute[235803]: 2025-10-02 12:56:35.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:35 np0005466031 nova_compute[235803]: 2025-10-02 12:56:35.326 2 DEBUG nova.network.neutron [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Successfully updated port: e507d143-8e0d-443a-ba92-defb6e0097f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:56:35 np0005466031 nova_compute[235803]: 2025-10-02 12:56:35.350 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "refresh_cache-1a43d1f5-b1b2-488a-8660-f964ee219489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:35 np0005466031 nova_compute[235803]: 2025-10-02 12:56:35.350 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquired lock "refresh_cache-1a43d1f5-b1b2-488a-8660-f964ee219489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:35 np0005466031 nova_compute[235803]: 2025-10-02 12:56:35.350 2 DEBUG nova.network.neutron [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:56:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:56:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:35.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:56:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:35.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:36 np0005466031 nova_compute[235803]: 2025-10-02 12:56:36.303 2 DEBUG nova.network.neutron [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:56:36 np0005466031 nova_compute[235803]: 2025-10-02 12:56:36.442 2 DEBUG nova.compute.manager [req-3ee481cf-ef55-4fb1-800b-3a21ca9346fe req-1e82e40a-a487-4dcc-94b8-c231d684f060 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-changed-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:36 np0005466031 nova_compute[235803]: 2025-10-02 12:56:36.442 2 DEBUG nova.compute.manager [req-3ee481cf-ef55-4fb1-800b-3a21ca9346fe req-1e82e40a-a487-4dcc-94b8-c231d684f060 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Refreshing instance network info cache due to event network-changed-e507d143-8e0d-443a-ba92-defb6e0097f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:56:36 np0005466031 nova_compute[235803]: 2025-10-02 12:56:36.443 2 DEBUG oslo_concurrency.lockutils [req-3ee481cf-ef55-4fb1-800b-3a21ca9346fe req-1e82e40a-a487-4dcc-94b8-c231d684f060 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1a43d1f5-b1b2-488a-8660-f964ee219489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:36 np0005466031 nova_compute[235803]: 2025-10-02 12:56:36.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:36 np0005466031 nova_compute[235803]: 2025-10-02 12:56:36.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:56:36 np0005466031 nova_compute[235803]: 2025-10-02 12:56:36.660 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:56:37 np0005466031 nova_compute[235803]: 2025-10-02 12:56:37.371 2 DEBUG nova.network.neutron [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Updating instance_info_cache with network_info: [{"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:37 np0005466031 nova_compute[235803]: 2025-10-02 12:56:37.400 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Releasing lock "refresh_cache-1a43d1f5-b1b2-488a-8660-f964ee219489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:37 np0005466031 nova_compute[235803]: 2025-10-02 12:56:37.401 2 DEBUG nova.compute.manager [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Instance network_info: |[{"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:56:37 np0005466031 nova_compute[235803]: 2025-10-02 12:56:37.401 2 DEBUG oslo_concurrency.lockutils [req-3ee481cf-ef55-4fb1-800b-3a21ca9346fe req-1e82e40a-a487-4dcc-94b8-c231d684f060 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1a43d1f5-b1b2-488a-8660-f964ee219489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:37 np0005466031 nova_compute[235803]: 2025-10-02 12:56:37.401 2 DEBUG nova.network.neutron [req-3ee481cf-ef55-4fb1-800b-3a21ca9346fe req-1e82e40a-a487-4dcc-94b8-c231d684f060 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Refreshing network info cache for port e507d143-8e0d-443a-ba92-defb6e0097f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:56:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:56:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:37.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:56:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:37.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:37 np0005466031 nova_compute[235803]: 2025-10-02 12:56:37.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:37 np0005466031 nova_compute[235803]: 2025-10-02 12:56:37.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:56:38 np0005466031 nova_compute[235803]: 2025-10-02 12:56:38.006 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409783.0055683, f7f84b92-e128-4f0a-9040-aebd8234e953 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:38 np0005466031 nova_compute[235803]: 2025-10-02 12:56:38.007 2 INFO nova.compute.manager [-] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:56:38 np0005466031 nova_compute[235803]: 2025-10-02 12:56:38.029 2 DEBUG nova.compute.manager [None req-09c9cc26-700a-4b4a-81bf-a6eb3a3f9876 - - - - - -] [instance: f7f84b92-e128-4f0a-9040-aebd8234e953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:38 np0005466031 nova_compute[235803]: 2025-10-02 12:56:38.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.062025) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798062058, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 2195, "num_deletes": 261, "total_data_size": 4881162, "memory_usage": 4946032, "flush_reason": "Manual Compaction"}
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798094532, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 3206826, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57986, "largest_seqno": 60176, "table_properties": {"data_size": 3197927, "index_size": 5457, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19297, "raw_average_key_size": 20, "raw_value_size": 3179726, "raw_average_value_size": 3382, "num_data_blocks": 236, "num_entries": 940, "num_filter_entries": 940, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409626, "oldest_key_time": 1759409626, "file_creation_time": 1759409798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 32604 microseconds, and 7009 cpu microseconds.
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.094624) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 3206826 bytes OK
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.094642) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.099421) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.099453) EVENT_LOG_v1 {"time_micros": 1759409798099446, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.099474) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 4871308, prev total WAL file size 4871308, number of live WAL files 2.
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.100506) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303133' seq:72057594037927935, type:22 .. '6C6F676D0032323636' seq:0, type:0; will stop at (end)
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(3131KB)], [114(10MB)]
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798100533, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 13950012, "oldest_snapshot_seqno": -1}
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 8562 keys, 13798526 bytes, temperature: kUnknown
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798226848, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 13798526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13739659, "index_size": 36355, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21445, "raw_key_size": 221387, "raw_average_key_size": 25, "raw_value_size": 13585856, "raw_average_value_size": 1586, "num_data_blocks": 1431, "num_entries": 8562, "num_filter_entries": 8562, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759409798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.227212) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 13798526 bytes
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.228606) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.3 rd, 109.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.2 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(8.7) write-amplify(4.3) OK, records in: 9105, records dropped: 543 output_compression: NoCompression
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.228631) EVENT_LOG_v1 {"time_micros": 1759409798228619, "job": 72, "event": "compaction_finished", "compaction_time_micros": 126455, "compaction_time_cpu_micros": 31519, "output_level": 6, "num_output_files": 1, "total_output_size": 13798526, "num_input_records": 9105, "num_output_records": 8562, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798229496, "job": 72, "event": "table_file_deletion", "file_number": 116}
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798231373, "job": 72, "event": "table_file_deletion", "file_number": 114}
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.100473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.231409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.231414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.231416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.231417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:38 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:38.231419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:39 np0005466031 nova_compute[235803]: 2025-10-02 12:56:39.462 2 DEBUG nova.network.neutron [req-3ee481cf-ef55-4fb1-800b-3a21ca9346fe req-1e82e40a-a487-4dcc-94b8-c231d684f060 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Updated VIF entry in instance network info cache for port e507d143-8e0d-443a-ba92-defb6e0097f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:56:39 np0005466031 nova_compute[235803]: 2025-10-02 12:56:39.463 2 DEBUG nova.network.neutron [req-3ee481cf-ef55-4fb1-800b-3a21ca9346fe req-1e82e40a-a487-4dcc-94b8-c231d684f060 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Updating instance_info_cache with network_info: [{"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:39 np0005466031 nova_compute[235803]: 2025-10-02 12:56:39.482 2 DEBUG oslo_concurrency.lockutils [req-3ee481cf-ef55-4fb1-800b-3a21ca9346fe req-1e82e40a-a487-4dcc-94b8-c231d684f060 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1a43d1f5-b1b2-488a-8660-f964ee219489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:39.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:56:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:39.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:56:40 np0005466031 nova_compute[235803]: 2025-10-02 12:56:40.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:41.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:41.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:56:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 53K writes, 219K keys, 53K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.05 MB/s#012Cumulative WAL: 53K writes, 19K syncs, 2.75 writes per sync, written: 0.22 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9304 writes, 41K keys, 9304 commit groups, 1.0 writes per commit group, ingest: 41.61 MB, 0.07 MB/s#012Interval WAL: 9304 writes, 3398 syncs, 2.74 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:56:43 np0005466031 nova_compute[235803]: 2025-10-02 12:56:43.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:56:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:43.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:56:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:43.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.093 2 DEBUG os_brick.utils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.094 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.106 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.107 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[e617d6dc-6053-4f65-a625-1430136509e2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.108 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.116 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.116 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[d649e7d8-a717-46f8-85d2-40551d642a0c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.118 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.126 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.126 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e6626f-b2b9-4e0c-957f-72950da65ca3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.128 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[89466c63-5b5d-4d2e-9ffb-5dadbc523232]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.129 2 DEBUG oslo_concurrency.processutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.161 2 DEBUG oslo_concurrency.processutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.164 2 DEBUG os_brick.initiator.connectors.lightos [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.164 2 DEBUG os_brick.initiator.connectors.lightos [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.164 2 DEBUG os_brick.initiator.connectors.lightos [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.165 2 DEBUG os_brick.utils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:56:44 np0005466031 nova_compute[235803]: 2025-10-02 12:56:44.165 2 DEBUG nova.virt.block_device [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Updating existing volume attachment record: 5581d109-be2f-4a04-92e4-999ae5749856 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:56:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.301 2 DEBUG nova.compute.manager [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.303 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.303 2 INFO nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Creating image(s)#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.303 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.304 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Ensure instance console log exists: /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.304 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.304 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.305 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.307 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Start _get_guest_xml network_info=[{"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-93f6251c-1119-43b6-b608-e405dc9beae8', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '93f6251c-1119-43b6-b608-e405dc9beae8', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '1a43d1f5-b1b2-488a-8660-f964ee219489', 'attached_at': '', 'detached_at': '', 'volume_id': '93f6251c-1119-43b6-b608-e405dc9beae8', 'serial': '93f6251c-1119-43b6-b608-e405dc9beae8'}, 'attachment_id': '5581d109-be2f-4a04-92e4-999ae5749856', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.311 2 WARNING nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.315 2 DEBUG nova.virt.libvirt.host [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.315 2 DEBUG nova.virt.libvirt.host [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.319 2 DEBUG nova.virt.libvirt.host [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.319 2 DEBUG nova.virt.libvirt.host [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.320 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.321 2 DEBUG nova.virt.hardware [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.321 2 DEBUG nova.virt.hardware [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.321 2 DEBUG nova.virt.hardware [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.322 2 DEBUG nova.virt.hardware [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.322 2 DEBUG nova.virt.hardware [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.322 2 DEBUG nova.virt.hardware [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.322 2 DEBUG nova.virt.hardware [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.323 2 DEBUG nova.virt.hardware [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.323 2 DEBUG nova.virt.hardware [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.323 2 DEBUG nova.virt.hardware [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.323 2 DEBUG nova.virt.hardware [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.349 2 DEBUG nova.storage.rbd_utils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 1a43d1f5-b1b2-488a-8660-f964ee219489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.352 2 DEBUG oslo_concurrency.processutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:56:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:56:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:56:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:45.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:45.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:56:45 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2894943659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.781 2 DEBUG oslo_concurrency.processutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.808 2 DEBUG nova.virt.libvirt.vif [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-503062150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-503062150',id=155,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-6uai0whm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVolumeStable
RescueTest-1641553658-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:56:31Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=1a43d1f5-b1b2-488a-8660-f964ee219489,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.809 2 DEBUG nova.network.os_vif_util [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.809 2 DEBUG nova.network.os_vif_util [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9d:cb,bridge_name='br-int',has_traffic_filtering=True,id=e507d143-8e0d-443a-ba92-defb6e0097f8,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape507d143-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.810 2 DEBUG nova.objects.instance [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1a43d1f5-b1b2-488a-8660-f964ee219489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.828 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  <uuid>1a43d1f5-b1b2-488a-8660-f964ee219489</uuid>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  <name>instance-0000009b</name>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-503062150</nova:name>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:56:45</nova:creationTime>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <nova:user uuid="00be63ea13c84e3d9419078865524099">tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member</nova:user>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <nova:project uuid="cb2da64acac041cb8d38c3b43fe4dbe9">tempest-ServerBootFromVolumeStableRescueTest-1641553658</nova:project>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <nova:port uuid="e507d143-8e0d-443a-ba92-defb6e0097f8">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <entry name="serial">1a43d1f5-b1b2-488a-8660-f964ee219489</entry>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <entry name="uuid">1a43d1f5-b1b2-488a-8660-f964ee219489</entry>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/1a43d1f5-b1b2-488a-8660-f964ee219489_disk.config">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-93f6251c-1119-43b6-b608-e405dc9beae8">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <serial>93f6251c-1119-43b6-b608-e405dc9beae8</serial>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:40:9d:cb"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <target dev="tape507d143-8e"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/console.log" append="off"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:56:45 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:56:45 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:56:45 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:56:45 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.829 2 DEBUG nova.compute.manager [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Preparing to wait for external event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.829 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.829 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.829 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.830 2 DEBUG nova.virt.libvirt.vif [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-503062150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-503062150',id=155,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-6uai0whm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVo
lumeStableRescueTest-1641553658-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:56:31Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=1a43d1f5-b1b2-488a-8660-f964ee219489,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.830 2 DEBUG nova.network.os_vif_util [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.831 2 DEBUG nova.network.os_vif_util [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:9d:cb,bridge_name='br-int',has_traffic_filtering=True,id=e507d143-8e0d-443a-ba92-defb6e0097f8,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape507d143-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.832 2 DEBUG os_vif [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9d:cb,bridge_name='br-int',has_traffic_filtering=True,id=e507d143-8e0d-443a-ba92-defb6e0097f8,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape507d143-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.836 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape507d143-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.836 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape507d143-8e, col_values=(('external_ids', {'iface-id': 'e507d143-8e0d-443a-ba92-defb6e0097f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:9d:cb', 'vm-uuid': '1a43d1f5-b1b2-488a-8660-f964ee219489'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:45 np0005466031 NetworkManager[44907]: <info>  [1759409805.8390] manager: (tape507d143-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.845 2 INFO os_vif [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:9d:cb,bridge_name='br-int',has_traffic_filtering=True,id=e507d143-8e0d-443a-ba92-defb6e0097f8,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape507d143-8e')#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.892 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.893 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.893 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No VIF found with MAC fa:16:3e:40:9d:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.893 2 INFO nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Using config drive#033[00m
Oct  2 08:56:45 np0005466031 nova_compute[235803]: 2025-10-02 12:56:45.915 2 DEBUG nova.storage.rbd_utils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 1a43d1f5-b1b2-488a-8660-f964ee219489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:46 np0005466031 nova_compute[235803]: 2025-10-02 12:56:46.322 2 INFO nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Creating config drive at /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/disk.config#033[00m
Oct  2 08:56:46 np0005466031 nova_compute[235803]: 2025-10-02 12:56:46.327 2 DEBUG oslo_concurrency.processutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq471ik9v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:46 np0005466031 nova_compute[235803]: 2025-10-02 12:56:46.466 2 DEBUG oslo_concurrency.processutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq471ik9v" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:46 np0005466031 nova_compute[235803]: 2025-10-02 12:56:46.491 2 DEBUG nova.storage.rbd_utils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 1a43d1f5-b1b2-488a-8660-f964ee219489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:46 np0005466031 nova_compute[235803]: 2025-10-02 12:56:46.494 2 DEBUG oslo_concurrency.processutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/disk.config 1a43d1f5-b1b2-488a-8660-f964ee219489_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:46 np0005466031 nova_compute[235803]: 2025-10-02 12:56:46.784 2 DEBUG oslo_concurrency.processutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/disk.config 1a43d1f5-b1b2-488a-8660-f964ee219489_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:46 np0005466031 nova_compute[235803]: 2025-10-02 12:56:46.785 2 INFO nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Deleting local config drive /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/disk.config because it was imported into RBD.#033[00m
Oct  2 08:56:46 np0005466031 kernel: tape507d143-8e: entered promiscuous mode
Oct  2 08:56:46 np0005466031 NetworkManager[44907]: <info>  [1759409806.8412] manager: (tape507d143-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/263)
Oct  2 08:56:46 np0005466031 ovn_controller[132413]: 2025-10-02T12:56:46Z|00571|binding|INFO|Claiming lport e507d143-8e0d-443a-ba92-defb6e0097f8 for this chassis.
Oct  2 08:56:46 np0005466031 ovn_controller[132413]: 2025-10-02T12:56:46Z|00572|binding|INFO|e507d143-8e0d-443a-ba92-defb6e0097f8: Claiming fa:16:3e:40:9d:cb 10.100.0.5
Oct  2 08:56:46 np0005466031 nova_compute[235803]: 2025-10-02 12:56:46.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:46 np0005466031 ovn_controller[132413]: 2025-10-02T12:56:46Z|00573|binding|INFO|Setting lport e507d143-8e0d-443a-ba92-defb6e0097f8 ovn-installed in OVS
Oct  2 08:56:46 np0005466031 systemd-udevd[303856]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:56:46 np0005466031 nova_compute[235803]: 2025-10-02 12:56:46.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:46 np0005466031 nova_compute[235803]: 2025-10-02 12:56:46.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:46 np0005466031 NetworkManager[44907]: <info>  [1759409806.8757] device (tape507d143-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:56:46 np0005466031 NetworkManager[44907]: <info>  [1759409806.8768] device (tape507d143-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:56:47 np0005466031 ovn_controller[132413]: 2025-10-02T12:56:47Z|00574|binding|INFO|Setting lport e507d143-8e0d-443a-ba92-defb6e0097f8 up in Southbound
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.022 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9d:cb 10.100.0.5'], port_security=['fa:16:3e:40:9d:cb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1a43d1f5-b1b2-488a-8660-f964ee219489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=e507d143-8e0d-443a-ba92-defb6e0097f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.024 141898 INFO neutron.agent.ovn.metadata.agent [-] Port e507d143-8e0d-443a-ba92-defb6e0097f8 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 bound to our chassis#033[00m
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.025 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2#033[00m
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.039 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e03d8f-b1a6-4232-adb6-a92dc7f6c696]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:47 np0005466031 systemd-machined[192227]: New machine qemu-67-instance-0000009b.
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.068 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[79a18023-eb2c-49b4-b0d3-4f56c3f89770]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.071 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b78365b4-4369-4edd-bf4e-5c2ea19c7862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:47 np0005466031 systemd[1]: Started Virtual Machine qemu-67-instance-0000009b.
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.099 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c8836ffd-dd4e-4b8f-a148-b7980dd5c40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.114 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[703490ff-444a-4607-93f3-5b20473c984f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758793, 'reachable_time': 37682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303872, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.129 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[739664dc-8ef4-4881-abc5-b428d88094e6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758802, 'tstamp': 758802}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303873, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758805, 'tstamp': 758805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303873, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.131 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:47 np0005466031 nova_compute[235803]: 2025-10-02 12:56:47.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:47 np0005466031 nova_compute[235803]: 2025-10-02 12:56:47.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.135 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0ec11e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.135 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.136 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0ec11e-00, col_values=(('external_ids', {'iface-id': '4551f74b-5a9c-4479-827a-bb210e8a0b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:47.136 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:56:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:56:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:47.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:56:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:48 np0005466031 nova_compute[235803]: 2025-10-02 12:56:48.012 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409808.0118775, 1a43d1f5-b1b2-488a-8660-f964ee219489 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:48 np0005466031 nova_compute[235803]: 2025-10-02 12:56:48.013 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] VM Started (Lifecycle Event)#033[00m
Oct  2 08:56:48 np0005466031 nova_compute[235803]: 2025-10-02 12:56:48.044 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:48 np0005466031 nova_compute[235803]: 2025-10-02 12:56:48.049 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409808.0120807, 1a43d1f5-b1b2-488a-8660-f964ee219489 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:48 np0005466031 nova_compute[235803]: 2025-10-02 12:56:48.049 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:56:48 np0005466031 nova_compute[235803]: 2025-10-02 12:56:48.076 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:48 np0005466031 nova_compute[235803]: 2025-10-02 12:56:48.079 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:56:48 np0005466031 nova_compute[235803]: 2025-10-02 12:56:48.107 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:56:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.284 2 DEBUG nova.compute.manager [req-0a26cebb-69d7-4b30-adcf-5ea427389565 req-e7a0e2a6-ad7f-40bf-a733-46d2c3ea4d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.285 2 DEBUG oslo_concurrency.lockutils [req-0a26cebb-69d7-4b30-adcf-5ea427389565 req-e7a0e2a6-ad7f-40bf-a733-46d2c3ea4d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.285 2 DEBUG oslo_concurrency.lockutils [req-0a26cebb-69d7-4b30-adcf-5ea427389565 req-e7a0e2a6-ad7f-40bf-a733-46d2c3ea4d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.286 2 DEBUG oslo_concurrency.lockutils [req-0a26cebb-69d7-4b30-adcf-5ea427389565 req-e7a0e2a6-ad7f-40bf-a733-46d2c3ea4d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.286 2 DEBUG nova.compute.manager [req-0a26cebb-69d7-4b30-adcf-5ea427389565 req-e7a0e2a6-ad7f-40bf-a733-46d2c3ea4d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Processing event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.286 2 DEBUG nova.compute.manager [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.289 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409809.2891104, 1a43d1f5-b1b2-488a-8660-f964ee219489 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.289 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.291 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.294 2 INFO nova.virt.libvirt.driver [-] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Instance spawned successfully.#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.295 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.318 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.325 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.329 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.329 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.329 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.330 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.330 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.330 2 DEBUG nova.virt.libvirt.driver [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.356 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.388 2 INFO nova.compute.manager [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Took 4.09 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.388 2 DEBUG nova.compute.manager [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.449 2 INFO nova.compute.manager [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Took 18.33 seconds to build instance.#033[00m
Oct  2 08:56:49 np0005466031 nova_compute[235803]: 2025-10-02 12:56:49.471 2 DEBUG oslo_concurrency.lockutils [None req-a6d47253-0edb-405d-9473-8d9df9b583e7 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:49.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:49.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:50 np0005466031 nova_compute[235803]: 2025-10-02 12:56:50.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:50.348 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:56:50 np0005466031 nova_compute[235803]: 2025-10-02 12:56:50.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:50.349 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:56:50 np0005466031 nova_compute[235803]: 2025-10-02 12:56:50.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:51 np0005466031 nova_compute[235803]: 2025-10-02 12:56:51.379 2 DEBUG nova.compute.manager [req-4715d537-9817-4535-a744-b517646b169e req-a4accd4a-131e-45ca-abb3-c314d99a9ecd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:51 np0005466031 nova_compute[235803]: 2025-10-02 12:56:51.379 2 DEBUG oslo_concurrency.lockutils [req-4715d537-9817-4535-a744-b517646b169e req-a4accd4a-131e-45ca-abb3-c314d99a9ecd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:51 np0005466031 nova_compute[235803]: 2025-10-02 12:56:51.380 2 DEBUG oslo_concurrency.lockutils [req-4715d537-9817-4535-a744-b517646b169e req-a4accd4a-131e-45ca-abb3-c314d99a9ecd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:51 np0005466031 nova_compute[235803]: 2025-10-02 12:56:51.380 2 DEBUG oslo_concurrency.lockutils [req-4715d537-9817-4535-a744-b517646b169e req-a4accd4a-131e-45ca-abb3-c314d99a9ecd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:51 np0005466031 nova_compute[235803]: 2025-10-02 12:56:51.380 2 DEBUG nova.compute.manager [req-4715d537-9817-4535-a744-b517646b169e req-a4accd4a-131e-45ca-abb3-c314d99a9ecd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] No waiting events found dispatching network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:56:51 np0005466031 nova_compute[235803]: 2025-10-02 12:56:51.380 2 WARNING nova.compute.manager [req-4715d537-9817-4535-a744-b517646b169e req-a4accd4a-131e-45ca-abb3-c314d99a9ecd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received unexpected event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:56:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:51.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:56:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:51.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:56:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:56:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:56:52 np0005466031 nova_compute[235803]: 2025-10-02 12:56:52.708 2 INFO nova.compute.manager [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Rescuing#033[00m
Oct  2 08:56:52 np0005466031 nova_compute[235803]: 2025-10-02 12:56:52.709 2 DEBUG oslo_concurrency.lockutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "refresh_cache-1a43d1f5-b1b2-488a-8660-f964ee219489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:52 np0005466031 nova_compute[235803]: 2025-10-02 12:56:52.709 2 DEBUG oslo_concurrency.lockutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquired lock "refresh_cache-1a43d1f5-b1b2-488a-8660-f964ee219489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:52 np0005466031 nova_compute[235803]: 2025-10-02 12:56:52.709 2 DEBUG nova.network.neutron [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:56:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:53.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:53 np0005466031 podman[304020]: 2025-10-02 12:56:53.634723765 +0000 UTC m=+0.059044272 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:56:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:53.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:53 np0005466031 podman[304021]: 2025-10-02 12:56:53.699342617 +0000 UTC m=+0.124023225 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:56:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:54 np0005466031 nova_compute[235803]: 2025-10-02 12:56:54.944 2 DEBUG nova.network.neutron [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Updating instance_info_cache with network_info: [{"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:54 np0005466031 nova_compute[235803]: 2025-10-02 12:56:54.974 2 DEBUG oslo_concurrency.lockutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Releasing lock "refresh_cache-1a43d1f5-b1b2-488a-8660-f964ee219489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:55 np0005466031 nova_compute[235803]: 2025-10-02 12:56:55.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:55 np0005466031 nova_compute[235803]: 2025-10-02 12:56:55.358 2 DEBUG nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:56:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:56:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:55.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:56:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:55.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:55 np0005466031 nova_compute[235803]: 2025-10-02 12:56:55.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:57.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:57.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:56:58.351 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:59.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:59 np0005466031 podman[304068]: 2025-10-02 12:56:59.622525065 +0000 UTC m=+0.055457859 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:56:59 np0005466031 podman[304069]: 2025-10-02 12:56:59.62269445 +0000 UTC m=+0.052501024 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.635193) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819635227, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 503, "num_deletes": 251, "total_data_size": 648538, "memory_usage": 659352, "flush_reason": "Manual Compaction"}
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819640450, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 427620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60181, "largest_seqno": 60679, "table_properties": {"data_size": 424910, "index_size": 746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6701, "raw_average_key_size": 19, "raw_value_size": 419428, "raw_average_value_size": 1205, "num_data_blocks": 32, "num_entries": 348, "num_filter_entries": 348, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409798, "oldest_key_time": 1759409798, "file_creation_time": 1759409819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 5286 microseconds, and 1889 cpu microseconds.
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.640479) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 427620 bytes OK
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.640493) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.642506) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.642519) EVENT_LOG_v1 {"time_micros": 1759409819642515, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.642552) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 645522, prev total WAL file size 645522, number of live WAL files 2.
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.643080) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(417KB)], [117(13MB)]
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819643154, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 14226146, "oldest_snapshot_seqno": -1}
Oct  2 08:56:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:56:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:59.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 8396 keys, 12292399 bytes, temperature: kUnknown
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819734854, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 12292399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12235998, "index_size": 34337, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20997, "raw_key_size": 218650, "raw_average_key_size": 26, "raw_value_size": 12086311, "raw_average_value_size": 1439, "num_data_blocks": 1340, "num_entries": 8396, "num_filter_entries": 8396, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759409819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.735105) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12292399 bytes
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.736481) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.0 rd, 134.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.2 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(62.0) write-amplify(28.7) OK, records in: 8910, records dropped: 514 output_compression: NoCompression
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.736501) EVENT_LOG_v1 {"time_micros": 1759409819736491, "job": 74, "event": "compaction_finished", "compaction_time_micros": 91761, "compaction_time_cpu_micros": 32753, "output_level": 6, "num_output_files": 1, "total_output_size": 12292399, "num_input_records": 8910, "num_output_records": 8396, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819736699, "job": 74, "event": "table_file_deletion", "file_number": 119}
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819738529, "job": 74, "event": "table_file_deletion", "file_number": 117}
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.642948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.738575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.738579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.738580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.738582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-12:56:59.738583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:57:00 np0005466031 nova_compute[235803]: 2025-10-02 12:57:00.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:00 np0005466031 nova_compute[235803]: 2025-10-02 12:57:00.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:01.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:57:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:01.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:57:01 np0005466031 nova_compute[235803]: 2025-10-02 12:57:01.650 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:03 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:03Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:9d:cb 10.100.0.5
Oct  2 08:57:03 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:03Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:9d:cb 10.100.0.5
Oct  2 08:57:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:03.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:03.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:04 np0005466031 nova_compute[235803]: 2025-10-02 12:57:04.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:05 np0005466031 nova_compute[235803]: 2025-10-02 12:57:05.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:57:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548337609' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:57:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:57:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548337609' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:57:05 np0005466031 nova_compute[235803]: 2025-10-02 12:57:05.401 2 DEBUG nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:57:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:05.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:05.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:05 np0005466031 nova_compute[235803]: 2025-10-02 12:57:05.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:07.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:07 np0005466031 nova_compute[235803]: 2025-10-02 12:57:07.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:07.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:08 np0005466031 nova_compute[235803]: 2025-10-02 12:57:08.414 2 INFO nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 08:57:08 np0005466031 kernel: tape507d143-8e (unregistering): left promiscuous mode
Oct  2 08:57:08 np0005466031 NetworkManager[44907]: <info>  [1759409828.5962] device (tape507d143-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:57:08 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:08Z|00575|binding|INFO|Releasing lport e507d143-8e0d-443a-ba92-defb6e0097f8 from this chassis (sb_readonly=0)
Oct  2 08:57:08 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:08Z|00576|binding|INFO|Setting lport e507d143-8e0d-443a-ba92-defb6e0097f8 down in Southbound
Oct  2 08:57:08 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:08Z|00577|binding|INFO|Removing iface tape507d143-8e ovn-installed in OVS
Oct  2 08:57:08 np0005466031 nova_compute[235803]: 2025-10-02 12:57:08.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:08 np0005466031 nova_compute[235803]: 2025-10-02 12:57:08.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:08 np0005466031 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000009b.scope: Deactivated successfully.
Oct  2 08:57:08 np0005466031 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000009b.scope: Consumed 14.059s CPU time.
Oct  2 08:57:08 np0005466031 systemd-machined[192227]: Machine qemu-67-instance-0000009b terminated.
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.830 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9d:cb 10.100.0.5'], port_security=['fa:16:3e:40:9d:cb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1a43d1f5-b1b2-488a-8660-f964ee219489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=e507d143-8e0d-443a-ba92-defb6e0097f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.831 141898 INFO neutron.agent.ovn.metadata.agent [-] Port e507d143-8e0d-443a-ba92-defb6e0097f8 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 unbound from our chassis#033[00m
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.832 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2#033[00m
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.848 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[81bd8a50-b209-4f4b-b169-95d650d5bc66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:08 np0005466031 nova_compute[235803]: 2025-10-02 12:57:08.857 2 INFO nova.virt.libvirt.driver [-] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Instance destroyed successfully.#033[00m
Oct  2 08:57:08 np0005466031 nova_compute[235803]: 2025-10-02 12:57:08.858 2 DEBUG nova.objects.instance [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1a43d1f5-b1b2-488a-8660-f964ee219489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.880 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c684176b-d52d-4d2f-9483-924e33b799b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.883 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[5f14b64d-126e-4846-b9fc-e08821d00bad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.908 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[60d73640-5ff3-4f11-980d-d6278f0bf406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.926 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[974f8893-b38c-4dfb-9aa1-2c0a6ab63177]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758793, 'reachable_time': 37682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304185, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.942 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6098ef-024e-4d0b-a301-6b1321062edf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758802, 'tstamp': 758802}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304186, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758805, 'tstamp': 758805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304186, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.944 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:08 np0005466031 nova_compute[235803]: 2025-10-02 12:57:08.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:08 np0005466031 nova_compute[235803]: 2025-10-02 12:57:08.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.950 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0ec11e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.950 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.950 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0ec11e-00, col_values=(('external_ids', {'iface-id': '4551f74b-5a9c-4479-827a-bb210e8a0b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:08.950 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:57:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:57:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:09.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:57:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:09.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:09 np0005466031 nova_compute[235803]: 2025-10-02 12:57:09.728 2 INFO nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Attempting a stable device rescue#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.144 2 DEBUG nova.compute.manager [req-b01542c8-d24b-40fe-807d-91eec79950a5 req-91509541-3644-4ace-8e24-ebe5f12b8dc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-unplugged-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.144 2 DEBUG oslo_concurrency.lockutils [req-b01542c8-d24b-40fe-807d-91eec79950a5 req-91509541-3644-4ace-8e24-ebe5f12b8dc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.144 2 DEBUG oslo_concurrency.lockutils [req-b01542c8-d24b-40fe-807d-91eec79950a5 req-91509541-3644-4ace-8e24-ebe5f12b8dc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.144 2 DEBUG oslo_concurrency.lockutils [req-b01542c8-d24b-40fe-807d-91eec79950a5 req-91509541-3644-4ace-8e24-ebe5f12b8dc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.145 2 DEBUG nova.compute.manager [req-b01542c8-d24b-40fe-807d-91eec79950a5 req-91509541-3644-4ace-8e24-ebe5f12b8dc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] No waiting events found dispatching network-vif-unplugged-e507d143-8e0d-443a-ba92-defb6e0097f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.145 2 WARNING nova.compute.manager [req-b01542c8-d24b-40fe-807d-91eec79950a5 req-91509541-3644-4ace-8e24-ebe5f12b8dc6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received unexpected event network-vif-unplugged-e507d143-8e0d-443a-ba92-defb6e0097f8 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.258 2 DEBUG nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.263 2 DEBUG nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.263 2 INFO nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Creating image(s)#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.289 2 DEBUG nova.storage.rbd_utils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 1a43d1f5-b1b2-488a-8660-f964ee219489_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.293 2 DEBUG nova.objects.instance [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1a43d1f5-b1b2-488a-8660-f964ee219489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.340 2 DEBUG nova.storage.rbd_utils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 1a43d1f5-b1b2-488a-8660-f964ee219489_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.362 2 DEBUG nova.storage.rbd_utils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 1a43d1f5-b1b2-488a-8660-f964ee219489_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.368 2 DEBUG oslo_concurrency.lockutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "ecf63f52c5ec92a4953b7a5e737984af3bf4d729" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.369 2 DEBUG oslo_concurrency.lockutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "ecf63f52c5ec92a4953b7a5e737984af3bf4d729" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.629 2 DEBUG nova.virt.libvirt.imagebackend [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/dbd87d84-17a6-4523-8be2-18de440d9003/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/dbd87d84-17a6-4523-8be2-18de440d9003/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.938 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.944 2 DEBUG nova.virt.libvirt.imagebackend [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/dbd87d84-17a6-4523-8be2-18de440d9003/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 08:57:10 np0005466031 nova_compute[235803]: 2025-10-02 12:57:10.944 2 DEBUG nova.storage.rbd_utils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] cloning images/dbd87d84-17a6-4523-8be2-18de440d9003@snap to None/1a43d1f5-b1b2-488a-8660-f964ee219489_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:57:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:57:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:11.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:57:11 np0005466031 nova_compute[235803]: 2025-10-02 12:57:11.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:57:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:11.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:57:11 np0005466031 nova_compute[235803]: 2025-10-02 12:57:11.692 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:11 np0005466031 nova_compute[235803]: 2025-10-02 12:57:11.692 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:11 np0005466031 nova_compute[235803]: 2025-10-02 12:57:11.693 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:11 np0005466031 nova_compute[235803]: 2025-10-02 12:57:11.693 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:57:11 np0005466031 nova_compute[235803]: 2025-10-02 12:57:11.693 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:11 np0005466031 nova_compute[235803]: 2025-10-02 12:57:11.832 2 DEBUG oslo_concurrency.lockutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "ecf63f52c5ec92a4953b7a5e737984af3bf4d729" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.462s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:11 np0005466031 nova_compute[235803]: 2025-10-02 12:57:11.964 2 DEBUG nova.objects.instance [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'migration_context' on Instance uuid 1a43d1f5-b1b2-488a-8660-f964ee219489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:57:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1790729380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.132 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.681 2 DEBUG nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.687 2 DEBUG nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Start _get_guest_xml network_info=[{"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "vif_mac": "fa:16:3e:40:9d:cb"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'dbd87d84-17a6-4523-8be2-18de440d9003', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-93f6251c-1119-43b6-b608-e405dc9beae8', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '93f6251c-1119-43b6-b608-e405dc9beae8', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '1a43d1f5-b1b2-488a-8660-f964ee219489', 'attached_at': '', 'detached_at': '', 'volume_id': '93f6251c-1119-43b6-b608-e405dc9beae8', 'serial': '93f6251c-1119-43b6-b608-e405dc9beae8'}, 'attachment_id': '5581d109-be2f-4a04-92e4-999ae5749856', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.688 2 DEBUG nova.objects.instance [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'resources' on Instance uuid 1a43d1f5-b1b2-488a-8660-f964ee219489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.747 2 WARNING nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.752 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.752 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.754 2 DEBUG nova.virt.libvirt.host [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.754 2 DEBUG nova.virt.libvirt.host [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.756 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.756 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.758 2 DEBUG nova.virt.libvirt.host [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.759 2 DEBUG nova.virt.libvirt.host [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.760 2 DEBUG nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.760 2 DEBUG nova.virt.hardware [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.761 2 DEBUG nova.virt.hardware [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.761 2 DEBUG nova.virt.hardware [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.761 2 DEBUG nova.virt.hardware [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.761 2 DEBUG nova.virt.hardware [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.761 2 DEBUG nova.virt.hardware [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.762 2 DEBUG nova.virt.hardware [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.762 2 DEBUG nova.virt.hardware [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.762 2 DEBUG nova.virt.hardware [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.763 2 DEBUG nova.virt.hardware [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.763 2 DEBUG nova.virt.hardware [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.763 2 DEBUG nova.objects.instance [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1a43d1f5-b1b2-488a-8660-f964ee219489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.865 2 DEBUG oslo_concurrency.processutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.916 2 DEBUG nova.compute.manager [req-34b983a1-a8b6-45c4-ae53-8ad8cfbb7084 req-0322d85d-820b-4072-8019-d7dbc0fb8732 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.916 2 DEBUG oslo_concurrency.lockutils [req-34b983a1-a8b6-45c4-ae53-8ad8cfbb7084 req-0322d85d-820b-4072-8019-d7dbc0fb8732 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.917 2 DEBUG oslo_concurrency.lockutils [req-34b983a1-a8b6-45c4-ae53-8ad8cfbb7084 req-0322d85d-820b-4072-8019-d7dbc0fb8732 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.918 2 DEBUG oslo_concurrency.lockutils [req-34b983a1-a8b6-45c4-ae53-8ad8cfbb7084 req-0322d85d-820b-4072-8019-d7dbc0fb8732 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.918 2 DEBUG nova.compute.manager [req-34b983a1-a8b6-45c4-ae53-8ad8cfbb7084 req-0322d85d-820b-4072-8019-d7dbc0fb8732 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] No waiting events found dispatching network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:12 np0005466031 nova_compute[235803]: 2025-10-02 12:57:12.919 2 WARNING nova.compute.manager [req-34b983a1-a8b6-45c4-ae53-8ad8cfbb7084 req-0322d85d-820b-4072-8019-d7dbc0fb8732 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received unexpected event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.085 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.087 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4174MB free_disk=20.849960327148438GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.087 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.088 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.272 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 58e2a72f-a2b9-41a0-9c67-607e978d8b88 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.273 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 1a43d1f5-b1b2-488a-8660-f964ee219489 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.273 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.273 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.338 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:57:13 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1667620999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.473 2 DEBUG oslo_concurrency.processutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.509 2 DEBUG oslo_concurrency.processutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:13.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:13.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:57:13 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4205440800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.820 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.827 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.862 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.903 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.904 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:57:13 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2894204180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.956 2 DEBUG oslo_concurrency.processutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.958 2 DEBUG nova.virt.libvirt.vif [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-503062150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-503062150',id=155,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:56:49Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-6uai0whm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_m
in_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:56:49Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=1a43d1f5-b1b2-488a-8660-f964ee219489,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "vif_mac": "fa:16:3e:40:9d:cb"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.960 2 DEBUG nova.network.os_vif_util [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "vif_mac": "fa:16:3e:40:9d:cb"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.961 2 DEBUG nova.network.os_vif_util [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:9d:cb,bridge_name='br-int',has_traffic_filtering=True,id=e507d143-8e0d-443a-ba92-defb6e0097f8,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape507d143-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:57:13 np0005466031 nova_compute[235803]: 2025-10-02 12:57:13.964 2 DEBUG nova.objects.instance [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1a43d1f5-b1b2-488a-8660-f964ee219489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.011 2 DEBUG nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  <uuid>1a43d1f5-b1b2-488a-8660-f964ee219489</uuid>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  <name>instance-0000009b</name>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-503062150</nova:name>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:57:12</nova:creationTime>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <nova:user uuid="00be63ea13c84e3d9419078865524099">tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member</nova:user>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <nova:project uuid="cb2da64acac041cb8d38c3b43fe4dbe9">tempest-ServerBootFromVolumeStableRescueTest-1641553658</nova:project>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <nova:port uuid="e507d143-8e0d-443a-ba92-defb6e0097f8">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <entry name="serial">1a43d1f5-b1b2-488a-8660-f964ee219489</entry>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <entry name="uuid">1a43d1f5-b1b2-488a-8660-f964ee219489</entry>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/1a43d1f5-b1b2-488a-8660-f964ee219489_disk.config">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-93f6251c-1119-43b6-b608-e405dc9beae8">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <serial>93f6251c-1119-43b6-b608-e405dc9beae8</serial>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/1a43d1f5-b1b2-488a-8660-f964ee219489_disk.rescue">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <boot order="1"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:40:9d:cb"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <target dev="tape507d143-8e"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/console.log" append="off"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:57:14 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:57:14 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:57:14 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:57:14 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.021 2 INFO nova.virt.libvirt.driver [-] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Instance destroyed successfully.#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.101 2 DEBUG nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.101 2 DEBUG nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.101 2 DEBUG nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.101 2 DEBUG nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] No VIF found with MAC fa:16:3e:40:9d:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.102 2 INFO nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Using config drive#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.123 2 DEBUG nova.storage.rbd_utils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 1a43d1f5-b1b2-488a-8660-f964ee219489_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.151 2 DEBUG nova.objects.instance [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1a43d1f5-b1b2-488a-8660-f964ee219489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.197 2 DEBUG nova.objects.instance [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'keypairs' on Instance uuid 1a43d1f5-b1b2-488a-8660-f964ee219489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.617 2 INFO nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Creating config drive at /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/disk.config.rescue#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.622 2 DEBUG oslo_concurrency.processutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplrbqu4v2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.779 2 DEBUG oslo_concurrency.processutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplrbqu4v2" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.905 2 DEBUG nova.storage.rbd_utils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] rbd image 1a43d1f5-b1b2-488a-8660-f964ee219489_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:14 np0005466031 nova_compute[235803]: 2025-10-02 12:57:14.909 2 DEBUG oslo_concurrency.processutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/disk.config.rescue 1a43d1f5-b1b2-488a-8660-f964ee219489_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:15 np0005466031 nova_compute[235803]: 2025-10-02 12:57:15.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:15 np0005466031 nova_compute[235803]: 2025-10-02 12:57:15.363 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:15 np0005466031 nova_compute[235803]: 2025-10-02 12:57:15.363 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:57:15 np0005466031 nova_compute[235803]: 2025-10-02 12:57:15.363 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:57:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:15.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:15.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:15 np0005466031 nova_compute[235803]: 2025-10-02 12:57:15.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.105 2 DEBUG oslo_concurrency.processutils [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/disk.config.rescue 1a43d1f5-b1b2-488a-8660-f964ee219489_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.105 2 INFO nova.virt.libvirt.driver [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Deleting local config drive /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489/disk.config.rescue because it was imported into RBD.#033[00m
Oct  2 08:57:16 np0005466031 kernel: tape507d143-8e: entered promiscuous mode
Oct  2 08:57:16 np0005466031 NetworkManager[44907]: <info>  [1759409836.1821] manager: (tape507d143-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Oct  2 08:57:16 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:16Z|00578|binding|INFO|Claiming lport e507d143-8e0d-443a-ba92-defb6e0097f8 for this chassis.
Oct  2 08:57:16 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:16Z|00579|binding|INFO|e507d143-8e0d-443a-ba92-defb6e0097f8: Claiming fa:16:3e:40:9d:cb 10.100.0.5
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:16 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:16Z|00580|binding|INFO|Setting lport e507d143-8e0d-443a-ba92-defb6e0097f8 ovn-installed in OVS
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:16 np0005466031 systemd-udevd[304509]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:57:16 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:16Z|00581|binding|INFO|Setting lport e507d143-8e0d-443a-ba92-defb6e0097f8 up in Southbound
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.216 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9d:cb 10.100.0.5'], port_security=['fa:16:3e:40:9d:cb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1a43d1f5-b1b2-488a-8660-f964ee219489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=e507d143-8e0d-443a-ba92-defb6e0097f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.217 141898 INFO neutron.agent.ovn.metadata.agent [-] Port e507d143-8e0d-443a-ba92-defb6e0097f8 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 bound to our chassis#033[00m
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.218 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2#033[00m
Oct  2 08:57:16 np0005466031 NetworkManager[44907]: <info>  [1759409836.2302] device (tape507d143-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:57:16 np0005466031 NetworkManager[44907]: <info>  [1759409836.2316] device (tape507d143-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.233 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a95c0f99-30fc-4717-a3d2-39dd300ada7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:16 np0005466031 systemd-machined[192227]: New machine qemu-68-instance-0000009b.
Oct  2 08:57:16 np0005466031 systemd[1]: Started Virtual Machine qemu-68-instance-0000009b.
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.268 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa49259-14be-4048-826f-6e422a42aa94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.271 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[37ae7617-314e-4460-adfb-edbd34add441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.302 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[5407a118-7deb-4ca6-bdd0-8f015003a9b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.321 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[097cbea4-e347-470c-8566-8a3f43b616a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758793, 'reachable_time': 37682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304524, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.337 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f1157d3a-5b7c-4f67-96c5-3695b174de36]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758802, 'tstamp': 758802}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304526, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758805, 'tstamp': 758805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304526, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.339 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.342 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0ec11e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.342 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.342 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0ec11e-00, col_values=(('external_ids', {'iface-id': '4551f74b-5a9c-4479-827a-bb210e8a0b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:16 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:16.343 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.405 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.405 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.405 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.405 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.648 2 DEBUG nova.compute.manager [req-ef647954-21b3-4e5b-b230-4033e854751f req-a4cbb5b4-d0e3-45ef-addb-b651ae338182 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.648 2 DEBUG oslo_concurrency.lockutils [req-ef647954-21b3-4e5b-b230-4033e854751f req-a4cbb5b4-d0e3-45ef-addb-b651ae338182 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.648 2 DEBUG oslo_concurrency.lockutils [req-ef647954-21b3-4e5b-b230-4033e854751f req-a4cbb5b4-d0e3-45ef-addb-b651ae338182 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.649 2 DEBUG oslo_concurrency.lockutils [req-ef647954-21b3-4e5b-b230-4033e854751f req-a4cbb5b4-d0e3-45ef-addb-b651ae338182 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.649 2 DEBUG nova.compute.manager [req-ef647954-21b3-4e5b-b230-4033e854751f req-a4cbb5b4-d0e3-45ef-addb-b651ae338182 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] No waiting events found dispatching network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:16 np0005466031 nova_compute[235803]: 2025-10-02 12:57:16.649 2 WARNING nova.compute.manager [req-ef647954-21b3-4e5b-b230-4033e854751f req-a4cbb5b4-d0e3-45ef-addb-b651ae338182 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received unexpected event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:57:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:57:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:17.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:57:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:57:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:17.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.012 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for 1a43d1f5-b1b2-488a-8660-f964ee219489 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.012 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409838.0117438, 1a43d1f5-b1b2-488a-8660-f964ee219489 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.012 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.016 2 DEBUG nova.compute.manager [None req-efe05c31-d4f3-4a84-9362-5883bebbe508 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.081 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.084 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.262 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409838.0140595, 1a43d1f5-b1b2-488a-8660-f964ee219489 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.262 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] VM Started (Lifecycle Event)#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.308 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.311 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.836 2 DEBUG nova.compute.manager [req-41bc9c14-21bd-4bfe-ad36-ea1ec1b23aa6 req-e6894fda-7c5f-4fc7-8149-5e07edbb6803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.836 2 DEBUG oslo_concurrency.lockutils [req-41bc9c14-21bd-4bfe-ad36-ea1ec1b23aa6 req-e6894fda-7c5f-4fc7-8149-5e07edbb6803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.837 2 DEBUG oslo_concurrency.lockutils [req-41bc9c14-21bd-4bfe-ad36-ea1ec1b23aa6 req-e6894fda-7c5f-4fc7-8149-5e07edbb6803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.837 2 DEBUG oslo_concurrency.lockutils [req-41bc9c14-21bd-4bfe-ad36-ea1ec1b23aa6 req-e6894fda-7c5f-4fc7-8149-5e07edbb6803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.837 2 DEBUG nova.compute.manager [req-41bc9c14-21bd-4bfe-ad36-ea1ec1b23aa6 req-e6894fda-7c5f-4fc7-8149-5e07edbb6803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] No waiting events found dispatching network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:18 np0005466031 nova_compute[235803]: 2025-10-02 12:57:18.837 2 WARNING nova.compute.manager [req-41bc9c14-21bd-4bfe-ad36-ea1ec1b23aa6 req-e6894fda-7c5f-4fc7-8149-5e07edbb6803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received unexpected event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 for instance with vm_state rescued and task_state None.#033[00m
Oct  2 08:57:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.302 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Updating instance_info_cache with network_info: [{"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.350 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-58e2a72f-a2b9-41a0-9c67-607e978d8b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.350 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.351 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.351 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.351 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.385 2 INFO nova.compute.manager [None req-821d59d7-7c80-4240-bfed-707dbe2aa318 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Unrescuing#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.385 2 DEBUG oslo_concurrency.lockutils [None req-821d59d7-7c80-4240-bfed-707dbe2aa318 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "refresh_cache-1a43d1f5-b1b2-488a-8660-f964ee219489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.386 2 DEBUG oslo_concurrency.lockutils [None req-821d59d7-7c80-4240-bfed-707dbe2aa318 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquired lock "refresh_cache-1a43d1f5-b1b2-488a-8660-f964ee219489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.386 2 DEBUG nova.network.neutron [None req-821d59d7-7c80-4240-bfed-707dbe2aa318 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.432 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Triggering sync for uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.432 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Triggering sync for uuid 1a43d1f5-b1b2-488a-8660-f964ee219489 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.433 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.433 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.433 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.434 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.434 2 INFO nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.435 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.473 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:19 np0005466031 nova_compute[235803]: 2025-10-02 12:57:19.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:57:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:19.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:20 np0005466031 nova_compute[235803]: 2025-10-02 12:57:20.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:20 np0005466031 nova_compute[235803]: 2025-10-02 12:57:20.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:21.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:21.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:22 np0005466031 nova_compute[235803]: 2025-10-02 12:57:22.539 2 DEBUG nova.network.neutron [None req-821d59d7-7c80-4240-bfed-707dbe2aa318 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Updating instance_info_cache with network_info: [{"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:57:22 np0005466031 nova_compute[235803]: 2025-10-02 12:57:22.574 2 DEBUG oslo_concurrency.lockutils [None req-821d59d7-7c80-4240-bfed-707dbe2aa318 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Releasing lock "refresh_cache-1a43d1f5-b1b2-488a-8660-f964ee219489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:57:22 np0005466031 nova_compute[235803]: 2025-10-02 12:57:22.575 2 DEBUG nova.objects.instance [None req-821d59d7-7c80-4240-bfed-707dbe2aa318 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'flavor' on Instance uuid 1a43d1f5-b1b2-488a-8660-f964ee219489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:22 np0005466031 kernel: tape507d143-8e (unregistering): left promiscuous mode
Oct  2 08:57:22 np0005466031 NetworkManager[44907]: <info>  [1759409842.8172] device (tape507d143-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:57:22 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:22Z|00582|binding|INFO|Releasing lport e507d143-8e0d-443a-ba92-defb6e0097f8 from this chassis (sb_readonly=0)
Oct  2 08:57:22 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:22Z|00583|binding|INFO|Setting lport e507d143-8e0d-443a-ba92-defb6e0097f8 down in Southbound
Oct  2 08:57:22 np0005466031 nova_compute[235803]: 2025-10-02 12:57:22.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:22 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:22Z|00584|binding|INFO|Removing iface tape507d143-8e ovn-installed in OVS
Oct  2 08:57:22 np0005466031 nova_compute[235803]: 2025-10-02 12:57:22.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.832 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9d:cb 10.100.0.5'], port_security=['fa:16:3e:40:9d:cb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1a43d1f5-b1b2-488a-8660-f964ee219489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=e507d143-8e0d-443a-ba92-defb6e0097f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.833 141898 INFO neutron.agent.ovn.metadata.agent [-] Port e507d143-8e0d-443a-ba92-defb6e0097f8 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 unbound from our chassis#033[00m
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.834 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2#033[00m
Oct  2 08:57:22 np0005466031 nova_compute[235803]: 2025-10-02 12:57:22.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.849 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b1dee17d-5a2f-46a7-87ff-a15473f08fa0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.883 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b18e4546-bdcd-454d-8444-5f5e10a5003d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.885 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[3367a4f0-b35c-430b-9c38-08f7c82525d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:22 np0005466031 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000009b.scope: Deactivated successfully.
Oct  2 08:57:22 np0005466031 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000009b.scope: Consumed 5.755s CPU time.
Oct  2 08:57:22 np0005466031 systemd-machined[192227]: Machine qemu-68-instance-0000009b terminated.
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.918 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f9648d30-3630-4293-97cb-bde98bdcf51a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.935 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d211c9d7-92f6-4b7b-9529-f1f77cdd1040]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758793, 'reachable_time': 37682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304602, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.951 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[82958ca8-0d89-4706-8031-94ef5207df7d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758802, 'tstamp': 758802}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304603, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758805, 'tstamp': 758805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304603, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.952 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:22 np0005466031 nova_compute[235803]: 2025-10-02 12:57:22.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:22 np0005466031 nova_compute[235803]: 2025-10-02 12:57:22.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.958 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0ec11e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.958 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.958 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0ec11e-00, col_values=(('external_ids', {'iface-id': '4551f74b-5a9c-4479-827a-bb210e8a0b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:22.959 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:57:23 np0005466031 nova_compute[235803]: 2025-10-02 12:57:23.048 2 INFO nova.virt.libvirt.driver [-] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Instance destroyed successfully.#033[00m
Oct  2 08:57:23 np0005466031 nova_compute[235803]: 2025-10-02 12:57:23.048 2 DEBUG nova.objects.instance [None req-821d59d7-7c80-4240-bfed-707dbe2aa318 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1a43d1f5-b1b2-488a-8660-f964ee219489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:23 np0005466031 kernel: tape507d143-8e: entered promiscuous mode
Oct  2 08:57:23 np0005466031 systemd-udevd[304595]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:57:23 np0005466031 NetworkManager[44907]: <info>  [1759409843.2053] manager: (tape507d143-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/265)
Oct  2 08:57:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:23Z|00585|binding|INFO|Claiming lport e507d143-8e0d-443a-ba92-defb6e0097f8 for this chassis.
Oct  2 08:57:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:23Z|00586|binding|INFO|e507d143-8e0d-443a-ba92-defb6e0097f8: Claiming fa:16:3e:40:9d:cb 10.100.0.5
Oct  2 08:57:23 np0005466031 nova_compute[235803]: 2025-10-02 12:57:23.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:23 np0005466031 NetworkManager[44907]: <info>  [1759409843.2171] device (tape507d143-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:57:23 np0005466031 NetworkManager[44907]: <info>  [1759409843.2187] device (tape507d143-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:57:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:23Z|00587|binding|INFO|Setting lport e507d143-8e0d-443a-ba92-defb6e0097f8 ovn-installed in OVS
Oct  2 08:57:23 np0005466031 nova_compute[235803]: 2025-10-02 12:57:23.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:23 np0005466031 systemd-machined[192227]: New machine qemu-69-instance-0000009b.
Oct  2 08:57:23 np0005466031 nova_compute[235803]: 2025-10-02 12:57:23.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:23 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:23Z|00588|binding|INFO|Setting lport e507d143-8e0d-443a-ba92-defb6e0097f8 up in Southbound
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.230 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9d:cb 10.100.0.5'], port_security=['fa:16:3e:40:9d:cb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1a43d1f5-b1b2-488a-8660-f964ee219489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=e507d143-8e0d-443a-ba92-defb6e0097f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.231 141898 INFO neutron.agent.ovn.metadata.agent [-] Port e507d143-8e0d-443a-ba92-defb6e0097f8 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 bound to our chassis#033[00m
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.233 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2#033[00m
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.246 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c031b2a8-60d4-40fb-9c23-1537bd48a5be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:23 np0005466031 systemd[1]: Started Virtual Machine qemu-69-instance-0000009b.
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.279 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[5c65b2b4-63b4-41f9-b3b5-40cda0d94f81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.281 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[230dbdf5-4990-456c-b7b3-aafa46708d47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.309 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3d2e81-e112-434d-9523-dc13bdce466d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.326 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3264db19-8251-413c-ac23-36c032380b41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 832, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 832, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758793, 'reachable_time': 37682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304641, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.342 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9929cf-31e2-4836-9cc2-d743b3c1be7d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758802, 'tstamp': 758802}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304642, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758805, 'tstamp': 758805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304642, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.343 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:23 np0005466031 nova_compute[235803]: 2025-10-02 12:57:23.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.345 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0ec11e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.346 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.346 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0ec11e-00, col_values=(('external_ids', {'iface-id': '4551f74b-5a9c-4479-827a-bb210e8a0b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:23.346 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:57:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:23.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:23 np0005466031 nova_compute[235803]: 2025-10-02 12:57:23.651 2 DEBUG nova.compute.manager [req-4226ad34-45fd-44b6-83b9-a36c22a20eac req-6f798323-7dfe-4dbf-87a8-c7f0ba0ead6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-unplugged-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:23 np0005466031 nova_compute[235803]: 2025-10-02 12:57:23.652 2 DEBUG oslo_concurrency.lockutils [req-4226ad34-45fd-44b6-83b9-a36c22a20eac req-6f798323-7dfe-4dbf-87a8-c7f0ba0ead6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:23 np0005466031 nova_compute[235803]: 2025-10-02 12:57:23.652 2 DEBUG oslo_concurrency.lockutils [req-4226ad34-45fd-44b6-83b9-a36c22a20eac req-6f798323-7dfe-4dbf-87a8-c7f0ba0ead6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:23 np0005466031 nova_compute[235803]: 2025-10-02 12:57:23.652 2 DEBUG oslo_concurrency.lockutils [req-4226ad34-45fd-44b6-83b9-a36c22a20eac req-6f798323-7dfe-4dbf-87a8-c7f0ba0ead6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:23 np0005466031 nova_compute[235803]: 2025-10-02 12:57:23.652 2 DEBUG nova.compute.manager [req-4226ad34-45fd-44b6-83b9-a36c22a20eac req-6f798323-7dfe-4dbf-87a8-c7f0ba0ead6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] No waiting events found dispatching network-vif-unplugged-e507d143-8e0d-443a-ba92-defb6e0097f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:23 np0005466031 nova_compute[235803]: 2025-10-02 12:57:23.652 2 WARNING nova.compute.manager [req-4226ad34-45fd-44b6-83b9-a36c22a20eac req-6f798323-7dfe-4dbf-87a8-c7f0ba0ead6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received unexpected event network-vif-unplugged-e507d143-8e0d-443a-ba92-defb6e0097f8 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:57:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:23.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:24 np0005466031 nova_compute[235803]: 2025-10-02 12:57:24.188 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for 1a43d1f5-b1b2-488a-8660-f964ee219489 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:57:24 np0005466031 nova_compute[235803]: 2025-10-02 12:57:24.189 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409844.1880598, 1a43d1f5-b1b2-488a-8660-f964ee219489 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:57:24 np0005466031 nova_compute[235803]: 2025-10-02 12:57:24.189 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:57:24 np0005466031 nova_compute[235803]: 2025-10-02 12:57:24.215 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:24 np0005466031 nova_compute[235803]: 2025-10-02 12:57:24.219 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:57:24 np0005466031 nova_compute[235803]: 2025-10-02 12:57:24.240 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:57:24 np0005466031 nova_compute[235803]: 2025-10-02 12:57:24.241 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409844.1908023, 1a43d1f5-b1b2-488a-8660-f964ee219489 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:57:24 np0005466031 nova_compute[235803]: 2025-10-02 12:57:24.241 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] VM Started (Lifecycle Event)#033[00m
Oct  2 08:57:24 np0005466031 nova_compute[235803]: 2025-10-02 12:57:24.268 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:24 np0005466031 nova_compute[235803]: 2025-10-02 12:57:24.272 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:57:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:24 np0005466031 nova_compute[235803]: 2025-10-02 12:57:24.323 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:57:24 np0005466031 podman[304704]: 2025-10-02 12:57:24.655152095 +0000 UTC m=+0.067702941 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:57:24 np0005466031 podman[304705]: 2025-10-02 12:57:24.68344256 +0000 UTC m=+0.097989124 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:25.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:25.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.850 2 DEBUG nova.compute.manager [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.851 2 DEBUG oslo_concurrency.lockutils [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.851 2 DEBUG oslo_concurrency.lockutils [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.852 2 DEBUG oslo_concurrency.lockutils [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.852 2 DEBUG nova.compute.manager [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] No waiting events found dispatching network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.852 2 WARNING nova.compute.manager [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received unexpected event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.853 2 DEBUG nova.compute.manager [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.853 2 DEBUG oslo_concurrency.lockutils [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.853 2 DEBUG oslo_concurrency.lockutils [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.854 2 DEBUG oslo_concurrency.lockutils [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.854 2 DEBUG nova.compute.manager [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] No waiting events found dispatching network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.854 2 WARNING nova.compute.manager [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received unexpected event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.855 2 DEBUG nova.compute.manager [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.855 2 DEBUG oslo_concurrency.lockutils [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.855 2 DEBUG oslo_concurrency.lockutils [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.856 2 DEBUG oslo_concurrency.lockutils [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.856 2 DEBUG nova.compute.manager [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] No waiting events found dispatching network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.856 2 WARNING nova.compute.manager [req-e5b03891-9a61-4112-a2fe-37901bb8690e req-5236f7ac-5067-4495-881a-542e12654221 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received unexpected event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:57:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:25.866 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:25.867 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:25.867 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:25 np0005466031 nova_compute[235803]: 2025-10-02 12:57:25.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:27 np0005466031 nova_compute[235803]: 2025-10-02 12:57:27.119 2 DEBUG nova.compute.manager [None req-821d59d7-7c80-4240-bfed-707dbe2aa318 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:57:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:27.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:57:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:27.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.275 2 DEBUG oslo_concurrency.lockutils [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.275 2 DEBUG oslo_concurrency.lockutils [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.276 2 DEBUG oslo_concurrency.lockutils [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.276 2 DEBUG oslo_concurrency.lockutils [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.276 2 DEBUG oslo_concurrency.lockutils [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.277 2 INFO nova.compute.manager [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Terminating instance#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.278 2 DEBUG nova.compute.manager [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:57:29 np0005466031 kernel: tape507d143-8e (unregistering): left promiscuous mode
Oct  2 08:57:29 np0005466031 NetworkManager[44907]: <info>  [1759409849.3344] device (tape507d143-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:57:29 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:29Z|00589|binding|INFO|Releasing lport e507d143-8e0d-443a-ba92-defb6e0097f8 from this chassis (sb_readonly=0)
Oct  2 08:57:29 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:29Z|00590|binding|INFO|Setting lport e507d143-8e0d-443a-ba92-defb6e0097f8 down in Southbound
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:29Z|00591|binding|INFO|Removing iface tape507d143-8e ovn-installed in OVS
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.349 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:9d:cb 10.100.0.5'], port_security=['fa:16:3e:40:9d:cb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1a43d1f5-b1b2-488a-8660-f964ee219489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=e507d143-8e0d-443a-ba92-defb6e0097f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.350 141898 INFO neutron.agent.ovn.metadata.agent [-] Port e507d143-8e0d-443a-ba92-defb6e0097f8 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 unbound from our chassis#033[00m
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.351 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.367 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9a36e049-368b-44b6-9aa2-2a332dd37e59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.397 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[523d2f61-6b8b-4a69-8efa-883f3b2746d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.400 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[45bae200-f543-48bc-ab90-bb667f207bb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005466031 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000009b.scope: Deactivated successfully.
Oct  2 08:57:29 np0005466031 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000009b.scope: Consumed 5.955s CPU time.
Oct  2 08:57:29 np0005466031 systemd-machined[192227]: Machine qemu-69-instance-0000009b terminated.
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.427 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f2739e-0604-4110-99df-d4ecd94e8cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.445 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[25529e12-54f5-45cf-98d6-09ee6839abaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b0ec11e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:0f:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 832, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 832, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758793, 'reachable_time': 37682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304812, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.464 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[472b9e9a-ff81-40e8-8ff7-63a894527880]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758802, 'tstamp': 758802}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304813, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7b0ec11e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758805, 'tstamp': 758805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304813, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.465 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.517 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b0ec11e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.517 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.518 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b0ec11e-00, col_values=(('external_ids', {'iface-id': '4551f74b-5a9c-4479-827a-bb210e8a0b52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:29.518 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.528 2 INFO nova.virt.libvirt.driver [-] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Instance destroyed successfully.#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.528 2 DEBUG nova.objects.instance [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'resources' on Instance uuid 1a43d1f5-b1b2-488a-8660-f964ee219489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.549 2 DEBUG nova.virt.libvirt.vif [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-503062150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-503062150',id=155,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:18Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-6uai0whm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:57:27Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=1a43d1f5-b1b2-488a-8660-f964ee219489,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.549 2 DEBUG nova.network.os_vif_util [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "e507d143-8e0d-443a-ba92-defb6e0097f8", "address": "fa:16:3e:40:9d:cb", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape507d143-8e", "ovs_interfaceid": "e507d143-8e0d-443a-ba92-defb6e0097f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.550 2 DEBUG nova.network.os_vif_util [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:9d:cb,bridge_name='br-int',has_traffic_filtering=True,id=e507d143-8e0d-443a-ba92-defb6e0097f8,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape507d143-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.550 2 DEBUG os_vif [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:9d:cb,bridge_name='br-int',has_traffic_filtering=True,id=e507d143-8e0d-443a-ba92-defb6e0097f8,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape507d143-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.552 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape507d143-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.557 2 INFO os_vif [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:9d:cb,bridge_name='br-int',has_traffic_filtering=True,id=e507d143-8e0d-443a-ba92-defb6e0097f8,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape507d143-8e')#033[00m
Oct  2 08:57:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:29.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:57:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:29.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.679 2 DEBUG nova.compute.manager [req-100d66db-957a-439d-a35d-2644b4736a94 req-dd372f21-197d-47e6-a0bc-dfb0c3d9ce18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-unplugged-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.680 2 DEBUG oslo_concurrency.lockutils [req-100d66db-957a-439d-a35d-2644b4736a94 req-dd372f21-197d-47e6-a0bc-dfb0c3d9ce18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.680 2 DEBUG oslo_concurrency.lockutils [req-100d66db-957a-439d-a35d-2644b4736a94 req-dd372f21-197d-47e6-a0bc-dfb0c3d9ce18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.680 2 DEBUG oslo_concurrency.lockutils [req-100d66db-957a-439d-a35d-2644b4736a94 req-dd372f21-197d-47e6-a0bc-dfb0c3d9ce18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.681 2 DEBUG nova.compute.manager [req-100d66db-957a-439d-a35d-2644b4736a94 req-dd372f21-197d-47e6-a0bc-dfb0c3d9ce18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] No waiting events found dispatching network-vif-unplugged-e507d143-8e0d-443a-ba92-defb6e0097f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.681 2 DEBUG nova.compute.manager [req-100d66db-957a-439d-a35d-2644b4736a94 req-dd372f21-197d-47e6-a0bc-dfb0c3d9ce18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-unplugged-e507d143-8e0d-443a-ba92-defb6e0097f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.816 2 INFO nova.virt.libvirt.driver [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Deleting instance files /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489_del#033[00m
Oct  2 08:57:29 np0005466031 nova_compute[235803]: 2025-10-02 12:57:29.817 2 INFO nova.virt.libvirt.driver [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Deletion of /var/lib/nova/instances/1a43d1f5-b1b2-488a-8660-f964ee219489_del complete#033[00m
Oct  2 08:57:30 np0005466031 nova_compute[235803]: 2025-10-02 12:57:30.007 2 INFO nova.compute.manager [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:57:30 np0005466031 nova_compute[235803]: 2025-10-02 12:57:30.007 2 DEBUG oslo.service.loopingcall [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:57:30 np0005466031 nova_compute[235803]: 2025-10-02 12:57:30.008 2 DEBUG nova.compute.manager [-] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:57:30 np0005466031 nova_compute[235803]: 2025-10-02 12:57:30.008 2 DEBUG nova.network.neutron [-] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:57:30 np0005466031 nova_compute[235803]: 2025-10-02 12:57:30.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:30 np0005466031 podman[304843]: 2025-10-02 12:57:30.633307576 +0000 UTC m=+0.058675762 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:57:30 np0005466031 podman[304844]: 2025-10-02 12:57:30.633345337 +0000 UTC m=+0.058668581 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:57:30 np0005466031 nova_compute[235803]: 2025-10-02 12:57:30.692 2 DEBUG nova.network.neutron [-] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:57:30 np0005466031 nova_compute[235803]: 2025-10-02 12:57:30.727 2 INFO nova.compute.manager [-] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Took 0.72 seconds to deallocate network for instance.#033[00m
Oct  2 08:57:30 np0005466031 nova_compute[235803]: 2025-10-02 12:57:30.825 2 DEBUG nova.compute.manager [req-dc1d0a04-233e-4f65-9bda-7f533ec4be6d req-a452da67-96da-408e-a61b-159b6fd102ed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-deleted-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:30 np0005466031 nova_compute[235803]: 2025-10-02 12:57:30.957 2 INFO nova.compute.manager [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Took 0.23 seconds to detach 1 volumes for instance.#033[00m
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.011 2 DEBUG oslo_concurrency.lockutils [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.011 2 DEBUG oslo_concurrency.lockutils [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.089 2 DEBUG oslo_concurrency.processutils [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:57:31 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/827417965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.528 2 DEBUG oslo_concurrency.processutils [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.534 2 DEBUG nova.compute.provider_tree [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.552 2 DEBUG nova.scheduler.client.report [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.574 2 DEBUG oslo_concurrency.lockutils [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.608 2 INFO nova.scheduler.client.report [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Deleted allocations for instance 1a43d1f5-b1b2-488a-8660-f964ee219489#033[00m
Oct  2 08:57:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:31.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:31.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.682 2 DEBUG oslo_concurrency.lockutils [None req-2c8d0677-a8be-4e29-afb0-420d49a2a41d 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.766 2 DEBUG nova.compute.manager [req-8125826a-d77c-4fac-bc02-1e2c1d83b590 req-2e045094-b8da-4c5e-8e38-ae489a973ea3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.767 2 DEBUG oslo_concurrency.lockutils [req-8125826a-d77c-4fac-bc02-1e2c1d83b590 req-2e045094-b8da-4c5e-8e38-ae489a973ea3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.767 2 DEBUG oslo_concurrency.lockutils [req-8125826a-d77c-4fac-bc02-1e2c1d83b590 req-2e045094-b8da-4c5e-8e38-ae489a973ea3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.768 2 DEBUG oslo_concurrency.lockutils [req-8125826a-d77c-4fac-bc02-1e2c1d83b590 req-2e045094-b8da-4c5e-8e38-ae489a973ea3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1a43d1f5-b1b2-488a-8660-f964ee219489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.768 2 DEBUG nova.compute.manager [req-8125826a-d77c-4fac-bc02-1e2c1d83b590 req-2e045094-b8da-4c5e-8e38-ae489a973ea3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] No waiting events found dispatching network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:31 np0005466031 nova_compute[235803]: 2025-10-02 12:57:31.768 2 WARNING nova.compute.manager [req-8125826a-d77c-4fac-bc02-1e2c1d83b590 req-2e045094-b8da-4c5e-8e38-ae489a973ea3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Received unexpected event network-vif-plugged-e507d143-8e0d-443a-ba92-defb6e0097f8 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:57:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:33.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:33.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e347 e347: 3 total, 3 up, 3 in
Oct  2 08:57:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:34 np0005466031 nova_compute[235803]: 2025-10-02 12:57:34.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:35 np0005466031 nova_compute[235803]: 2025-10-02 12:57:35.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:35.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:57:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:35.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:57:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:37.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:37.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.046 2 DEBUG oslo_concurrency.lockutils [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.047 2 DEBUG oslo_concurrency.lockutils [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.047 2 DEBUG oslo_concurrency.lockutils [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.047 2 DEBUG oslo_concurrency.lockutils [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.047 2 DEBUG oslo_concurrency.lockutils [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.048 2 INFO nova.compute.manager [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Terminating instance#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.050 2 DEBUG nova.compute.manager [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:57:39 np0005466031 kernel: tap5b3c9d9a-c3 (unregistering): left promiscuous mode
Oct  2 08:57:39 np0005466031 NetworkManager[44907]: <info>  [1759409859.2313] device (tap5b3c9d9a-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:57:39 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:39Z|00592|binding|INFO|Releasing lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 from this chassis (sb_readonly=0)
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:39 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:39Z|00593|binding|INFO|Setting lport 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 down in Southbound
Oct  2 08:57:39 np0005466031 ovn_controller[132413]: 2025-10-02T12:57:39Z|00594|binding|INFO|Removing iface tap5b3c9d9a-c3 ovn-installed in OVS
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.245 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:eb:77 10.100.0.11'], port_security=['fa:16:3e:ea:eb:77 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '58e2a72f-a2b9-41a0-9c67-607e978d8b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb2da64acac041cb8d38c3b43fe4dbe9', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2737f8f0-7e89-4464-a3d3-e646093fcb3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2270862e-9c24-4dad-92e2-cc0c5d5c9a3e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=5b3c9d9a-c3cd-49a5-b917-49aefaefd249) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.247 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 5b3c9d9a-c3cd-49a5-b917-49aefaefd249 in datapath 7b0ec11e-03f1-4b98-ac7a-50b364660bd2 unbound from our chassis#033[00m
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.248 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b0ec11e-03f1-4b98-ac7a-50b364660bd2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.249 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d61838ee-8ee8-4e05-b1c7-32b1d8fcfc16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.250 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 namespace which is not needed anymore#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:39 np0005466031 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000093.scope: Deactivated successfully.
Oct  2 08:57:39 np0005466031 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000093.scope: Consumed 1.926s CPU time.
Oct  2 08:57:39 np0005466031 systemd-machined[192227]: Machine qemu-65-instance-00000093 terminated.
Oct  2 08:57:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:39 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301971]: [NOTICE]   (301975) : haproxy version is 2.8.14-c23fe91
Oct  2 08:57:39 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301971]: [NOTICE]   (301975) : path to executable is /usr/sbin/haproxy
Oct  2 08:57:39 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301971]: [WARNING]  (301975) : Exiting Master process...
Oct  2 08:57:39 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301971]: [ALERT]    (301975) : Current worker (301977) exited with code 143 (Terminated)
Oct  2 08:57:39 np0005466031 neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2[301971]: [WARNING]  (301975) : All workers exited. Exiting... (0)
Oct  2 08:57:39 np0005466031 systemd[1]: libpod-cd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448.scope: Deactivated successfully.
Oct  2 08:57:39 np0005466031 podman[304933]: 2025-10-02 12:57:39.390191467 +0000 UTC m=+0.062586894 container died cd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:57:39 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448-userdata-shm.mount: Deactivated successfully.
Oct  2 08:57:39 np0005466031 systemd[1]: var-lib-containers-storage-overlay-f2dd532bcc295f20f645557125912523aebac5679e2b79f8f23823467ba461a8-merged.mount: Deactivated successfully.
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.479 2 INFO nova.virt.libvirt.driver [-] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Instance destroyed successfully.#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.479 2 DEBUG nova.objects.instance [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lazy-loading 'resources' on Instance uuid 58e2a72f-a2b9-41a0-9c67-607e978d8b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.495 2 DEBUG nova.virt.libvirt.vif [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:54:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-996317369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-996317369',id=147,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cb2da64acac041cb8d38c3b43fe4dbe9',ramdisk_id='',reservation_id='r-dhn09nvd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1641553658-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:41Z,user_data=None,user_id='00be63ea13c84e3d9419078865524099',uuid=58e2a72f-a2b9-41a0-9c67-607e978d8b88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.495 2 DEBUG nova.network.os_vif_util [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converting VIF {"id": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "address": "fa:16:3e:ea:eb:77", "network": {"id": "7b0ec11e-03f1-4b98-ac7a-50b364660bd2", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1331936544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb2da64acac041cb8d38c3b43fe4dbe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b3c9d9a-c3", "ovs_interfaceid": "5b3c9d9a-c3cd-49a5-b917-49aefaefd249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.496 2 DEBUG nova.network.os_vif_util [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ea:eb:77,bridge_name='br-int',has_traffic_filtering=True,id=5b3c9d9a-c3cd-49a5-b917-49aefaefd249,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b3c9d9a-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.496 2 DEBUG os_vif [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:eb:77,bridge_name='br-int',has_traffic_filtering=True,id=5b3c9d9a-c3cd-49a5-b917-49aefaefd249,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b3c9d9a-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.498 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b3c9d9a-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.505 2 INFO os_vif [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:eb:77,bridge_name='br-int',has_traffic_filtering=True,id=5b3c9d9a-c3cd-49a5-b917-49aefaefd249,network=Network(7b0ec11e-03f1-4b98-ac7a-50b364660bd2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b3c9d9a-c3')#033[00m
Oct  2 08:57:39 np0005466031 podman[304933]: 2025-10-02 12:57:39.607623062 +0000 UTC m=+0.280018489 container cleanup cd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:57:39 np0005466031 systemd[1]: libpod-conmon-cd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448.scope: Deactivated successfully.
Oct  2 08:57:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:39.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:39.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:39 np0005466031 podman[304992]: 2025-10-02 12:57:39.789176863 +0000 UTC m=+0.160962099 container remove cd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.795 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf4b8ca-ebe9-4a6d-9798-6920ef368894]: (4, ('Thu Oct  2 12:57:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 (cd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448)\ncd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448\nThu Oct  2 12:57:39 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 (cd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448)\ncd1f6940d70c589a41f066fbec583a2eae4fe73dbe59dcf30a41125785fd5448\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.796 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[082612b1-a1e2-4c87-81bc-d885e68bfa06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.797 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b0ec11e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:39 np0005466031 kernel: tap7b0ec11e-00: left promiscuous mode
Oct  2 08:57:39 np0005466031 nova_compute[235803]: 2025-10-02 12:57:39.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.818 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4df7c10b-f940-4284-899c-97119ca986db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.846 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1095203b-5a3a-45e1-b4af-47d99f90df38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.847 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[abc54a27-3379-4eda-be8e-6e22444674c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.861 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b853bbe2-6eb2-4a8c-992c-1b60f1ac3628]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758785, 'reachable_time': 36567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305007, 'error': None, 'target': 'ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.863 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7b0ec11e-03f1-4b98-ac7a-50b364660bd2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:57:39 np0005466031 systemd[1]: run-netns-ovnmeta\x2d7b0ec11e\x2d03f1\x2d4b98\x2dac7a\x2d50b364660bd2.mount: Deactivated successfully.
Oct  2 08:57:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:39.864 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[2437b553-1d6a-4543-903b-526e3c52c8dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:40 np0005466031 nova_compute[235803]: 2025-10-02 12:57:40.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:40 np0005466031 nova_compute[235803]: 2025-10-02 12:57:40.642 2 DEBUG nova.compute.manager [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-unplugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:40 np0005466031 nova_compute[235803]: 2025-10-02 12:57:40.642 2 DEBUG oslo_concurrency.lockutils [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:40 np0005466031 nova_compute[235803]: 2025-10-02 12:57:40.643 2 DEBUG oslo_concurrency.lockutils [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:40 np0005466031 nova_compute[235803]: 2025-10-02 12:57:40.643 2 DEBUG oslo_concurrency.lockutils [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:40 np0005466031 nova_compute[235803]: 2025-10-02 12:57:40.643 2 DEBUG nova.compute.manager [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] No waiting events found dispatching network-vif-unplugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:40 np0005466031 nova_compute[235803]: 2025-10-02 12:57:40.643 2 DEBUG nova.compute.manager [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-unplugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:57:40 np0005466031 nova_compute[235803]: 2025-10-02 12:57:40.643 2 DEBUG nova.compute.manager [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:40 np0005466031 nova_compute[235803]: 2025-10-02 12:57:40.644 2 DEBUG oslo_concurrency.lockutils [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:40 np0005466031 nova_compute[235803]: 2025-10-02 12:57:40.644 2 DEBUG oslo_concurrency.lockutils [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:40 np0005466031 nova_compute[235803]: 2025-10-02 12:57:40.644 2 DEBUG oslo_concurrency.lockutils [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:40 np0005466031 nova_compute[235803]: 2025-10-02 12:57:40.644 2 DEBUG nova.compute.manager [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] No waiting events found dispatching network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:40 np0005466031 nova_compute[235803]: 2025-10-02 12:57:40.644 2 WARNING nova.compute.manager [req-9af8e69b-5a58-496c-9b21-4926933cd8d6 req-f84c5e86-1090-4fd7-aa5d-e1fa1ca19b55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received unexpected event network-vif-plugged-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:57:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:41.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:41.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:42 np0005466031 nova_compute[235803]: 2025-10-02 12:57:42.597 2 INFO nova.virt.libvirt.driver [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Deleting instance files /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88_del#033[00m
Oct  2 08:57:42 np0005466031 nova_compute[235803]: 2025-10-02 12:57:42.598 2 INFO nova.virt.libvirt.driver [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Deletion of /var/lib/nova/instances/58e2a72f-a2b9-41a0-9c67-607e978d8b88_del complete#033[00m
Oct  2 08:57:42 np0005466031 nova_compute[235803]: 2025-10-02 12:57:42.656 2 INFO nova.compute.manager [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Took 3.61 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:57:42 np0005466031 nova_compute[235803]: 2025-10-02 12:57:42.656 2 DEBUG oslo.service.loopingcall [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:57:42 np0005466031 nova_compute[235803]: 2025-10-02 12:57:42.656 2 DEBUG nova.compute.manager [-] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:57:42 np0005466031 nova_compute[235803]: 2025-10-02 12:57:42.656 2 DEBUG nova.network.neutron [-] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:57:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e348 e348: 3 total, 3 up, 3 in
Oct  2 08:57:43 np0005466031 nova_compute[235803]: 2025-10-02 12:57:43.312 2 DEBUG nova.network.neutron [-] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:57:43 np0005466031 nova_compute[235803]: 2025-10-02 12:57:43.331 2 INFO nova.compute.manager [-] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Took 0.67 seconds to deallocate network for instance.#033[00m
Oct  2 08:57:43 np0005466031 nova_compute[235803]: 2025-10-02 12:57:43.357 2 DEBUG nova.compute.manager [req-113bd5ff-8f10-4be7-9632-4bf2b2e1fc8f req-a5a4dcd6-998c-4eaf-bda6-7ab9b029a247 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Received event network-vif-deleted-5b3c9d9a-c3cd-49a5-b917-49aefaefd249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:43 np0005466031 nova_compute[235803]: 2025-10-02 12:57:43.520 2 INFO nova.compute.manager [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Took 0.19 seconds to detach 1 volumes for instance.#033[00m
Oct  2 08:57:43 np0005466031 nova_compute[235803]: 2025-10-02 12:57:43.579 2 DEBUG oslo_concurrency.lockutils [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:43 np0005466031 nova_compute[235803]: 2025-10-02 12:57:43.580 2 DEBUG oslo_concurrency.lockutils [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:43.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:43.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:43 np0005466031 nova_compute[235803]: 2025-10-02 12:57:43.814 2 DEBUG oslo_concurrency.processutils [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:57:44 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3758325169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:57:44 np0005466031 nova_compute[235803]: 2025-10-02 12:57:44.259 2 DEBUG oslo_concurrency.processutils [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:44 np0005466031 nova_compute[235803]: 2025-10-02 12:57:44.268 2 DEBUG nova.compute.provider_tree [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:57:44 np0005466031 nova_compute[235803]: 2025-10-02 12:57:44.287 2 DEBUG nova.scheduler.client.report [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:57:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:44 np0005466031 nova_compute[235803]: 2025-10-02 12:57:44.308 2 DEBUG oslo_concurrency.lockutils [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:44 np0005466031 nova_compute[235803]: 2025-10-02 12:57:44.336 2 INFO nova.scheduler.client.report [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Deleted allocations for instance 58e2a72f-a2b9-41a0-9c67-607e978d8b88#033[00m
Oct  2 08:57:44 np0005466031 nova_compute[235803]: 2025-10-02 12:57:44.399 2 DEBUG oslo_concurrency.lockutils [None req-4203dd24-7e82-4ba8-8535-d66846deff8e 00be63ea13c84e3d9419078865524099 cb2da64acac041cb8d38c3b43fe4dbe9 - - default default] Lock "58e2a72f-a2b9-41a0-9c67-607e978d8b88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:44 np0005466031 nova_compute[235803]: 2025-10-02 12:57:44.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:44 np0005466031 nova_compute[235803]: 2025-10-02 12:57:44.527 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409849.5263379, 1a43d1f5-b1b2-488a-8660-f964ee219489 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:57:44 np0005466031 nova_compute[235803]: 2025-10-02 12:57:44.527 2 INFO nova.compute.manager [-] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:57:44 np0005466031 nova_compute[235803]: 2025-10-02 12:57:44.550 2 DEBUG nova.compute.manager [None req-04ff22d9-a83e-4620-9a4e-b36110be82ec - - - - - -] [instance: 1a43d1f5-b1b2-488a-8660-f964ee219489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:45 np0005466031 nova_compute[235803]: 2025-10-02 12:57:45.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:45.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:45.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e349 e349: 3 total, 3 up, 3 in
Oct  2 08:57:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:57:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:47.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:57:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:47.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:49 np0005466031 nova_compute[235803]: 2025-10-02 12:57:49.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:49.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:49.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:50 np0005466031 nova_compute[235803]: 2025-10-02 12:57:50.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:51.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:51.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:52.736 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:57:52 np0005466031 nova_compute[235803]: 2025-10-02 12:57:52.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:52.737 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.174 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "201e9977-0b48-484d-8463-4ff8484498bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.175 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.190 2 DEBUG nova.compute.manager [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.255 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.256 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.264 2 DEBUG nova.virt.hardware [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.265 2 INFO nova.compute.claims [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.364 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:53.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:53.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 e350: 3 total, 3 up, 3 in
Oct  2 08:57:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:57:53 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3185490209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.866 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.873 2 DEBUG nova.compute.provider_tree [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.890 2 DEBUG nova.scheduler.client.report [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.914 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.915 2 DEBUG nova.compute.manager [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.961 2 DEBUG nova.compute.manager [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:57:53 np0005466031 nova_compute[235803]: 2025-10-02 12:57:53.962 2 DEBUG nova.network.neutron [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.002 2 INFO nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.026 2 DEBUG nova.compute.manager [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.131 2 DEBUG nova.compute.manager [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.133 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.133 2 INFO nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Creating image(s)#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.160 2 DEBUG nova.storage.rbd_utils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 201e9977-0b48-484d-8463-4ff8484498bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.186 2 DEBUG nova.storage.rbd_utils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 201e9977-0b48-484d-8463-4ff8484498bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.211 2 DEBUG nova.storage.rbd_utils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 201e9977-0b48-484d-8463-4ff8484498bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.215 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.282 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.283 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.284 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.284 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.372 2 DEBUG nova.storage.rbd_utils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 201e9977-0b48-484d-8463-4ff8484498bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.376 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 201e9977-0b48-484d-8463-4ff8484498bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:57:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:57:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.407 2 DEBUG nova.policy [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.478 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409859.4774592, 58e2a72f-a2b9-41a0-9c67-607e978d8b88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.479 2 INFO nova.compute.manager [-] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.500 2 DEBUG nova.compute.manager [None req-396f57ce-909a-4cb7-9bdb-8cee9ce4fcad - - - - - -] [instance: 58e2a72f-a2b9-41a0-9c67-607e978d8b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:54 np0005466031 nova_compute[235803]: 2025-10-02 12:57:54.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:55 np0005466031 nova_compute[235803]: 2025-10-02 12:57:55.150 2 DEBUG nova.network.neutron [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Successfully created port: fc519e38-826c-47a6-a534-67b32458974d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:57:55 np0005466031 nova_compute[235803]: 2025-10-02 12:57:55.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:55 np0005466031 podman[305337]: 2025-10-02 12:57:55.625982978 +0000 UTC m=+0.053504722 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:57:55 np0005466031 podman[305338]: 2025-10-02 12:57:55.663770827 +0000 UTC m=+0.091028464 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:57:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:55.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:55.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:55 np0005466031 nova_compute[235803]: 2025-10-02 12:57:55.752 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 201e9977-0b48-484d-8463-4ff8484498bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:55 np0005466031 nova_compute[235803]: 2025-10-02 12:57:55.900 2 DEBUG nova.storage.rbd_utils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image 201e9977-0b48-484d-8463-4ff8484498bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.036 2 DEBUG nova.objects.instance [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid 201e9977-0b48-484d-8463-4ff8484498bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.112 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.112 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Ensure instance console log exists: /var/lib/nova/instances/201e9977-0b48-484d-8463-4ff8484498bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.113 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.113 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.113 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.678 2 DEBUG nova.network.neutron [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Successfully updated port: fc519e38-826c-47a6-a534-67b32458974d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.696 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-201e9977-0b48-484d-8463-4ff8484498bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.696 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-201e9977-0b48-484d-8463-4ff8484498bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.696 2 DEBUG nova.network.neutron [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.829 2 DEBUG nova.compute.manager [req-7fd3ad77-91fc-4e0f-a321-8fd1e859e6b9 req-75703617-8ada-4a4a-b77c-2b1d0f9dc62a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Received event network-changed-fc519e38-826c-47a6-a534-67b32458974d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.829 2 DEBUG nova.compute.manager [req-7fd3ad77-91fc-4e0f-a321-8fd1e859e6b9 req-75703617-8ada-4a4a-b77c-2b1d0f9dc62a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Refreshing instance network info cache due to event network-changed-fc519e38-826c-47a6-a534-67b32458974d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.829 2 DEBUG oslo_concurrency.lockutils [req-7fd3ad77-91fc-4e0f-a321-8fd1e859e6b9 req-75703617-8ada-4a4a-b77c-2b1d0f9dc62a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-201e9977-0b48-484d-8463-4ff8484498bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:57:56 np0005466031 nova_compute[235803]: 2025-10-02 12:57:56.941 2 DEBUG nova.network.neutron [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:57:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:57.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:57.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.041 2 DEBUG nova.network.neutron [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Updating instance_info_cache with network_info: [{"id": "fc519e38-826c-47a6-a534-67b32458974d", "address": "fa:16:3e:d0:e1:59", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc519e38-82", "ovs_interfaceid": "fc519e38-826c-47a6-a534-67b32458974d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.059 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-201e9977-0b48-484d-8463-4ff8484498bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.060 2 DEBUG nova.compute.manager [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Instance network_info: |[{"id": "fc519e38-826c-47a6-a534-67b32458974d", "address": "fa:16:3e:d0:e1:59", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc519e38-82", "ovs_interfaceid": "fc519e38-826c-47a6-a534-67b32458974d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.060 2 DEBUG oslo_concurrency.lockutils [req-7fd3ad77-91fc-4e0f-a321-8fd1e859e6b9 req-75703617-8ada-4a4a-b77c-2b1d0f9dc62a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-201e9977-0b48-484d-8463-4ff8484498bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.061 2 DEBUG nova.network.neutron [req-7fd3ad77-91fc-4e0f-a321-8fd1e859e6b9 req-75703617-8ada-4a4a-b77c-2b1d0f9dc62a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Refreshing network info cache for port fc519e38-826c-47a6-a534-67b32458974d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.063 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Start _get_guest_xml network_info=[{"id": "fc519e38-826c-47a6-a534-67b32458974d", "address": "fa:16:3e:d0:e1:59", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc519e38-82", "ovs_interfaceid": "fc519e38-826c-47a6-a534-67b32458974d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.067 2 WARNING nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.071 2 DEBUG nova.virt.libvirt.host [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.072 2 DEBUG nova.virt.libvirt.host [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.077 2 DEBUG nova.virt.libvirt.host [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.078 2 DEBUG nova.virt.libvirt.host [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.079 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.079 2 DEBUG nova.virt.hardware [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.080 2 DEBUG nova.virt.hardware [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.080 2 DEBUG nova.virt.hardware [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.080 2 DEBUG nova.virt.hardware [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.080 2 DEBUG nova.virt.hardware [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.080 2 DEBUG nova.virt.hardware [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.081 2 DEBUG nova.virt.hardware [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.081 2 DEBUG nova.virt.hardware [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.081 2 DEBUG nova.virt.hardware [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.082 2 DEBUG nova.virt.hardware [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.082 2 DEBUG nova.virt.hardware [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.084 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:57:59 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4123427180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.542 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.579 2 DEBUG nova.storage.rbd_utils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 201e9977-0b48-484d-8463-4ff8484498bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:59 np0005466031 nova_compute[235803]: 2025-10-02 12:57:59.583 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:59.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:57:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:57:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:59.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:57:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:57:59.739 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:58:00 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2241388146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.030 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.034 2 DEBUG nova.virt.libvirt.vif [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:57:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-645252589',display_name='tempest-TestNetworkBasicOps-server-645252589',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-645252589',id=160,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE0f6RSAlvvTz//rGZzdM0INxJgvsGuO9EKDvMAzOgJFDC6XQDuxQyoFolakbBR2ntmHMkocOrDkEDQjt8yPJzmgMsdO7QBV8P0/QPVX8scJgw8dmu2DWuSZU/ASABQPAg==',key_name='tempest-TestNetworkBasicOps-367999511',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-uh0yev6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:54Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=201e9977-0b48-484d-8463-4ff8484498bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc519e38-826c-47a6-a534-67b32458974d", "address": "fa:16:3e:d0:e1:59", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc519e38-82", "ovs_interfaceid": "fc519e38-826c-47a6-a534-67b32458974d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.035 2 DEBUG nova.network.os_vif_util [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "fc519e38-826c-47a6-a534-67b32458974d", "address": "fa:16:3e:d0:e1:59", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc519e38-82", "ovs_interfaceid": "fc519e38-826c-47a6-a534-67b32458974d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.036 2 DEBUG nova.network.os_vif_util [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:e1:59,bridge_name='br-int',has_traffic_filtering=True,id=fc519e38-826c-47a6-a534-67b32458974d,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc519e38-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.037 2 DEBUG nova.objects.instance [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid 201e9977-0b48-484d-8463-4ff8484498bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.058 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  <uuid>201e9977-0b48-484d-8463-4ff8484498bf</uuid>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  <name>instance-000000a0</name>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestNetworkBasicOps-server-645252589</nova:name>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:57:59</nova:creationTime>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <nova:port uuid="fc519e38-826c-47a6-a534-67b32458974d">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <entry name="serial">201e9977-0b48-484d-8463-4ff8484498bf</entry>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <entry name="uuid">201e9977-0b48-484d-8463-4ff8484498bf</entry>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/201e9977-0b48-484d-8463-4ff8484498bf_disk">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/201e9977-0b48-484d-8463-4ff8484498bf_disk.config">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:d0:e1:59"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <target dev="tapfc519e38-82"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/201e9977-0b48-484d-8463-4ff8484498bf/console.log" append="off"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:58:00 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:58:00 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:58:00 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:58:00 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.059 2 DEBUG nova.compute.manager [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Preparing to wait for external event network-vif-plugged-fc519e38-826c-47a6-a534-67b32458974d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.060 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "201e9977-0b48-484d-8463-4ff8484498bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.060 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.060 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.061 2 DEBUG nova.virt.libvirt.vif [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:57:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-645252589',display_name='tempest-TestNetworkBasicOps-server-645252589',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-645252589',id=160,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE0f6RSAlvvTz//rGZzdM0INxJgvsGuO9EKDvMAzOgJFDC6XQDuxQyoFolakbBR2ntmHMkocOrDkEDQjt8yPJzmgMsdO7QBV8P0/QPVX8scJgw8dmu2DWuSZU/ASABQPAg==',key_name='tempest-TestNetworkBasicOps-367999511',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-uh0yev6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:54Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=201e9977-0b48-484d-8463-4ff8484498bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc519e38-826c-47a6-a534-67b32458974d", "address": "fa:16:3e:d0:e1:59", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc519e38-82", "ovs_interfaceid": "fc519e38-826c-47a6-a534-67b32458974d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.061 2 DEBUG nova.network.os_vif_util [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "fc519e38-826c-47a6-a534-67b32458974d", "address": "fa:16:3e:d0:e1:59", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc519e38-82", "ovs_interfaceid": "fc519e38-826c-47a6-a534-67b32458974d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.062 2 DEBUG nova.network.os_vif_util [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:e1:59,bridge_name='br-int',has_traffic_filtering=True,id=fc519e38-826c-47a6-a534-67b32458974d,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc519e38-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.062 2 DEBUG os_vif [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:e1:59,bridge_name='br-int',has_traffic_filtering=True,id=fc519e38-826c-47a6-a534-67b32458974d,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc519e38-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.063 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.064 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.066 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc519e38-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.067 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc519e38-82, col_values=(('external_ids', {'iface-id': 'fc519e38-826c-47a6-a534-67b32458974d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d0:e1:59', 'vm-uuid': '201e9977-0b48-484d-8463-4ff8484498bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:58:00 np0005466031 NetworkManager[44907]: <info>  [1759409880.0692] manager: (tapfc519e38-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.075 2 INFO os_vif [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:e1:59,bridge_name='br-int',has_traffic_filtering=True,id=fc519e38-826c-47a6-a534-67b32458974d,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc519e38-82')
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.136 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.137 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.137 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:d0:e1:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.137 2 INFO nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Using config drive
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.159 2 DEBUG nova.storage.rbd_utils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 201e9977-0b48-484d-8463-4ff8484498bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.667 2 INFO nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Creating config drive at /var/lib/nova/instances/201e9977-0b48-484d-8463-4ff8484498bf/disk.config
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.672 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/201e9977-0b48-484d-8463-4ff8484498bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpar8skkna execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.749 2 DEBUG nova.network.neutron [req-7fd3ad77-91fc-4e0f-a321-8fd1e859e6b9 req-75703617-8ada-4a4a-b77c-2b1d0f9dc62a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Updated VIF entry in instance network info cache for port fc519e38-826c-47a6-a534-67b32458974d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.750 2 DEBUG nova.network.neutron [req-7fd3ad77-91fc-4e0f-a321-8fd1e859e6b9 req-75703617-8ada-4a4a-b77c-2b1d0f9dc62a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Updating instance_info_cache with network_info: [{"id": "fc519e38-826c-47a6-a534-67b32458974d", "address": "fa:16:3e:d0:e1:59", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc519e38-82", "ovs_interfaceid": "fc519e38-826c-47a6-a534-67b32458974d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.779 2 DEBUG oslo_concurrency.lockutils [req-7fd3ad77-91fc-4e0f-a321-8fd1e859e6b9 req-75703617-8ada-4a4a-b77c-2b1d0f9dc62a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-201e9977-0b48-484d-8463-4ff8484498bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.808 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/201e9977-0b48-484d-8463-4ff8484498bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpar8skkna" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.839 2 DEBUG nova.storage.rbd_utils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 201e9977-0b48-484d-8463-4ff8484498bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:58:00 np0005466031 nova_compute[235803]: 2025-10-02 12:58:00.843 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/201e9977-0b48-484d-8463-4ff8484498bf/disk.config 201e9977-0b48-484d-8463-4ff8484498bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.198 2 DEBUG oslo_concurrency.processutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/201e9977-0b48-484d-8463-4ff8484498bf/disk.config 201e9977-0b48-484d-8463-4ff8484498bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.199 2 INFO nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Deleting local config drive /var/lib/nova/instances/201e9977-0b48-484d-8463-4ff8484498bf/disk.config because it was imported into RBD.
Oct  2 08:58:01 np0005466031 kernel: tapfc519e38-82: entered promiscuous mode
Oct  2 08:58:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:01Z|00595|binding|INFO|Claiming lport fc519e38-826c-47a6-a534-67b32458974d for this chassis.
Oct  2 08:58:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:01Z|00596|binding|INFO|fc519e38-826c-47a6-a534-67b32458974d: Claiming fa:16:3e:d0:e1:59 10.100.0.28
Oct  2 08:58:01 np0005466031 NetworkManager[44907]: <info>  [1759409881.2523] manager: (tapfc519e38-82): new Tun device (/org/freedesktop/NetworkManager/Devices/267)
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.262 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:e1:59 10.100.0.28'], port_security=['fa:16:3e:d0:e1:59 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': '201e9977-0b48-484d-8463-4ff8484498bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': '98956923-c4b5-4f9c-898f-15ab7973f1df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62f8ddde-a575-4c3a-bc2f-e3ff31baaf2d, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=fc519e38-826c-47a6-a534-67b32458974d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.263 141898 INFO neutron.agent.ovn.metadata.agent [-] Port fc519e38-826c-47a6-a534-67b32458974d in datapath bc72facc-29fc-4f60-8da4-b2b18aba70d2 bound to our chassis#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.264 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc72facc-29fc-4f60-8da4-b2b18aba70d2#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.277 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[83053d56-1c64-4f25-870a-c1f56498d4e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.278 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbc72facc-21 in ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.280 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbc72facc-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.280 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[75e3c87e-b586-4363-a74e-19dabc4dceb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.281 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[59680217-2280-4fb6-aae0-c8129e637842]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.295 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[6e805746-8166-4e9c-831c-ca828bd7016e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:01 np0005466031 systemd-machined[192227]: New machine qemu-70-instance-000000a0.
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:01Z|00597|binding|INFO|Setting lport fc519e38-826c-47a6-a534-67b32458974d ovn-installed in OVS
Oct  2 08:58:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:01Z|00598|binding|INFO|Setting lport fc519e38-826c-47a6-a534-67b32458974d up in Southbound
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.308 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[24c63152-5fcd-41a8-ba3f-29ff82251d1e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 systemd[1]: Started Virtual Machine qemu-70-instance-000000a0.
Oct  2 08:58:01 np0005466031 systemd-udevd[305630]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.340 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[772cf65c-0dc8-415a-b492-2c211ff715b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 NetworkManager[44907]: <info>  [1759409881.3466] manager: (tapbc72facc-20): new Veth device (/org/freedesktop/NetworkManager/Devices/268)
Oct  2 08:58:01 np0005466031 systemd-udevd[305636]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.346 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0996c02a-ceb6-46b0-99ef-68442fa2dce7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 NetworkManager[44907]: <info>  [1759409881.3487] device (tapfc519e38-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:58:01 np0005466031 NetworkManager[44907]: <info>  [1759409881.3500] device (tapfc519e38-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:58:01 np0005466031 podman[305590]: 2025-10-02 12:58:01.362193418 +0000 UTC m=+0.082275672 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.380 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0961b45a-ce26-4a0e-a382-41dc4c63ccb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.382 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f85cf2-13c4-4f2e-bdb9-748884b180de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 podman[305591]: 2025-10-02 12:58:01.386447456 +0000 UTC m=+0.101318850 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:58:01 np0005466031 NetworkManager[44907]: <info>  [1759409881.4032] device (tapbc72facc-20): carrier: link connected
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.409 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9101a4-7e24-4011-93cb-1b3666cf8f4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.425 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[746cf25b-ca2b-4a9f-8da4-e9a6464967b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc72facc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:b8:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773699, 'reachable_time': 25540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305665, 'error': None, 'target': 'ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.440 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ae87e8c4-43f9-46f0-ad17-8be35ff407f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:b83f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 773699, 'tstamp': 773699}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305666, 'error': None, 'target': 'ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.456 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fe87a767-e45f-441e-8366-d356e97da1d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc72facc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:b8:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773699, 'reachable_time': 25540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305667, 'error': None, 'target': 'ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.495 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a95600b2-ee1c-4bd7-b5cc-fa89fa25dbd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.552 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[05c172d9-7209-4eb0-8e69-6113a53ee616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.553 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc72facc-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.553 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.554 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc72facc-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:01 np0005466031 NetworkManager[44907]: <info>  [1759409881.5562] manager: (tapbc72facc-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Oct  2 08:58:01 np0005466031 kernel: tapbc72facc-20: entered promiscuous mode
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.561 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc72facc-20, col_values=(('external_ids', {'iface-id': '00b1af69-0dd2-4e03-9090-dc7ccfcae6b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:01 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:01Z|00599|binding|INFO|Releasing lport 00b1af69-0dd2-4e03-9090-dc7ccfcae6b6 from this chassis (sb_readonly=0)
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.566 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bc72facc-29fc-4f60-8da4-b2b18aba70d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bc72facc-29fc-4f60-8da4-b2b18aba70d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.567 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e42a4c28-4a3e-47bc-a6ba-93d1c2f6a167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.568 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-bc72facc-29fc-4f60-8da4-b2b18aba70d2
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/bc72facc-29fc-4f60-8da4-b2b18aba70d2.pid.haproxy
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID bc72facc-29fc-4f60-8da4-b2b18aba70d2
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:58:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:01.568 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'env', 'PROCESS_TAG=haproxy-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bc72facc-29fc-4f60-8da4-b2b18aba70d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.603 2 DEBUG nova.compute.manager [req-715748c3-2f78-4c2d-b4ae-b0ce9fe9ae5b req-2682cc52-81f6-4c49-9c98-fa64148ca3bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Received event network-vif-plugged-fc519e38-826c-47a6-a534-67b32458974d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.604 2 DEBUG oslo_concurrency.lockutils [req-715748c3-2f78-4c2d-b4ae-b0ce9fe9ae5b req-2682cc52-81f6-4c49-9c98-fa64148ca3bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "201e9977-0b48-484d-8463-4ff8484498bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.605 2 DEBUG oslo_concurrency.lockutils [req-715748c3-2f78-4c2d-b4ae-b0ce9fe9ae5b req-2682cc52-81f6-4c49-9c98-fa64148ca3bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.605 2 DEBUG oslo_concurrency.lockutils [req-715748c3-2f78-4c2d-b4ae-b0ce9fe9ae5b req-2682cc52-81f6-4c49-9c98-fa64148ca3bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.605 2 DEBUG nova.compute.manager [req-715748c3-2f78-4c2d-b4ae-b0ce9fe9ae5b req-2682cc52-81f6-4c49-9c98-fa64148ca3bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Processing event network-vif-plugged-fc519e38-826c-47a6-a534-67b32458974d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:58:01 np0005466031 nova_compute[235803]: 2025-10-02 12:58:01.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:58:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:01.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:58:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:58:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:01.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:58:01 np0005466031 podman[305697]: 2025-10-02 12:58:01.92418739 +0000 UTC m=+0.043726301 container create 005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 08:58:01 np0005466031 systemd[1]: Started libpod-conmon-005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257.scope.
Oct  2 08:58:01 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:58:01 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c153031815e29bc83e043abfb06b3914817b68cdb6a98f30ca922e1871fc0348/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:58:01 np0005466031 podman[305697]: 2025-10-02 12:58:01.900807036 +0000 UTC m=+0.020345977 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:58:02 np0005466031 podman[305697]: 2025-10-02 12:58:02.000719435 +0000 UTC m=+0.120258356 container init 005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:58:02 np0005466031 podman[305697]: 2025-10-02 12:58:02.005958086 +0000 UTC m=+0.125496997 container start 005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:58:02 np0005466031 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[305713]: [NOTICE]   (305731) : New worker (305737) forked
Oct  2 08:58:02 np0005466031 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[305713]: [NOTICE]   (305731) : Loading success.
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.548 2 DEBUG nova.compute.manager [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.550 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409882.548525, 201e9977-0b48-484d-8463-4ff8484498bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.550 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] VM Started (Lifecycle Event)#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.554 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.557 2 INFO nova.virt.libvirt.driver [-] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Instance spawned successfully.#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.558 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.576 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.579 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.598 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.598 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.599 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.599 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.599 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.600 2 DEBUG nova.virt.libvirt.driver [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.613 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.613 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409882.5486429, 201e9977-0b48-484d-8463-4ff8484498bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.613 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.653 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.660 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409882.554015, 201e9977-0b48-484d-8463-4ff8484498bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.660 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.721 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.724 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.754 2 INFO nova.compute.manager [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Took 8.62 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.755 2 DEBUG nova.compute.manager [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.763 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.862 2 INFO nova.compute.manager [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Took 9.63 seconds to build instance.#033[00m
Oct  2 08:58:02 np0005466031 nova_compute[235803]: 2025-10-02 12:58:02.898 2 DEBUG oslo_concurrency.lockutils [None req-40d56306-6a52-4a70-9928-92843ae2fcc8 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:03 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:58:03 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:58:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:58:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:03.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:58:03 np0005466031 nova_compute[235803]: 2025-10-02 12:58:03.696 2 DEBUG nova.compute.manager [req-ce58722b-7bf6-49d9-995f-b4fb67438563 req-73f522b5-d184-451f-a3cd-67afd60164f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Received event network-vif-plugged-fc519e38-826c-47a6-a534-67b32458974d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:03 np0005466031 nova_compute[235803]: 2025-10-02 12:58:03.697 2 DEBUG oslo_concurrency.lockutils [req-ce58722b-7bf6-49d9-995f-b4fb67438563 req-73f522b5-d184-451f-a3cd-67afd60164f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "201e9977-0b48-484d-8463-4ff8484498bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:03 np0005466031 nova_compute[235803]: 2025-10-02 12:58:03.697 2 DEBUG oslo_concurrency.lockutils [req-ce58722b-7bf6-49d9-995f-b4fb67438563 req-73f522b5-d184-451f-a3cd-67afd60164f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:03 np0005466031 nova_compute[235803]: 2025-10-02 12:58:03.697 2 DEBUG oslo_concurrency.lockutils [req-ce58722b-7bf6-49d9-995f-b4fb67438563 req-73f522b5-d184-451f-a3cd-67afd60164f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:03 np0005466031 nova_compute[235803]: 2025-10-02 12:58:03.697 2 DEBUG nova.compute.manager [req-ce58722b-7bf6-49d9-995f-b4fb67438563 req-73f522b5-d184-451f-a3cd-67afd60164f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] No waiting events found dispatching network-vif-plugged-fc519e38-826c-47a6-a534-67b32458974d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:58:03 np0005466031 nova_compute[235803]: 2025-10-02 12:58:03.698 2 WARNING nova.compute.manager [req-ce58722b-7bf6-49d9-995f-b4fb67438563 req-73f522b5-d184-451f-a3cd-67afd60164f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Received unexpected event network-vif-plugged-fc519e38-826c-47a6-a534-67b32458974d for instance with vm_state active and task_state None.#033[00m
Oct  2 08:58:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:03.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:05 np0005466031 nova_compute[235803]: 2025-10-02 12:58:05.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:05 np0005466031 nova_compute[235803]: 2025-10-02 12:58:05.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:05 np0005466031 nova_compute[235803]: 2025-10-02 12:58:05.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:05.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:05.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:07 np0005466031 nova_compute[235803]: 2025-10-02 12:58:07.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:07.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:07.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:09.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:09.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:10 np0005466031 nova_compute[235803]: 2025-10-02 12:58:10.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:10 np0005466031 nova_compute[235803]: 2025-10-02 12:58:10.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:11.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:11.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:12 np0005466031 nova_compute[235803]: 2025-10-02 12:58:12.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:12 np0005466031 nova_compute[235803]: 2025-10-02 12:58:12.638 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:12 np0005466031 nova_compute[235803]: 2025-10-02 12:58:12.664 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:12 np0005466031 nova_compute[235803]: 2025-10-02 12:58:12.664 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:12 np0005466031 nova_compute[235803]: 2025-10-02 12:58:12.664 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:12 np0005466031 nova_compute[235803]: 2025-10-02 12:58:12.664 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:58:12 np0005466031 nova_compute[235803]: 2025-10-02 12:58:12.665 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:58:13 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/815310072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.129 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.214 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.215 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.353 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.354 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4012MB free_disk=20.743385314941406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.354 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.354 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.424 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 201e9977-0b48-484d-8463-4ff8484498bf actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.424 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.425 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.492 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:13.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:13.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:58:13 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3666288749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.928 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.933 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.953 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.993 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:58:13 np0005466031 nova_compute[235803]: 2025-10-02 12:58:13.993 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:58:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:14Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d0:e1:59 10.100.0.28
Oct  2 08:58:14 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:14Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d0:e1:59 10.100.0.28
Oct  2 08:58:15 np0005466031 nova_compute[235803]: 2025-10-02 12:58:15.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:58:15 np0005466031 nova_compute[235803]: 2025-10-02 12:58:15.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:58:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:58:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:15.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:58:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:15.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:15 np0005466031 nova_compute[235803]: 2025-10-02 12:58:15.992 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:58:15 np0005466031 nova_compute[235803]: 2025-10-02 12:58:15.992 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:58:16 np0005466031 nova_compute[235803]: 2025-10-02 12:58:16.010 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  2 08:58:16 np0005466031 nova_compute[235803]: 2025-10-02 12:58:16.010 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:58:17 np0005466031 nova_compute[235803]: 2025-10-02 12:58:17.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:58:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:17.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:58:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:17.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:58:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:19 np0005466031 nova_compute[235803]: 2025-10-02 12:58:19.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:58:19 np0005466031 nova_compute[235803]: 2025-10-02 12:58:19.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:58:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:19.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:58:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:19.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:58:20 np0005466031 nova_compute[235803]: 2025-10-02 12:58:20.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:58:20 np0005466031 nova_compute[235803]: 2025-10-02 12:58:20.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:58:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:21.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:21.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:22 np0005466031 nova_compute[235803]: 2025-10-02 12:58:22.919 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:58:22 np0005466031 nova_compute[235803]: 2025-10-02 12:58:22.919 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:58:22 np0005466031 nova_compute[235803]: 2025-10-02 12:58:22.933 2 DEBUG nova.compute.manager [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.001 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.002 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.007 2 DEBUG nova.virt.hardware [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.007 2 INFO nova.compute.claims [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Claim successful on node compute-2.ctlplane.example.com
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.109 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:58:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:58:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1139249947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.631 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.636 2 DEBUG nova.compute.provider_tree [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.650 2 DEBUG nova.scheduler.client.report [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.692 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.692 2 DEBUG nova.compute.manager [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:58:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:23.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:58:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:23.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.754 2 DEBUG nova.compute.manager [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.754 2 DEBUG nova.network.neutron [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.809 2 INFO nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.828 2 DEBUG nova.compute.manager [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.925 2 DEBUG nova.compute.manager [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.927 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.927 2 INFO nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Creating image(s)
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.954 2 DEBUG nova.storage.rbd_utils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image 3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:58:23 np0005466031 nova_compute[235803]: 2025-10-02 12:58:23.982 2 DEBUG nova.storage.rbd_utils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image 3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:58:24 np0005466031 nova_compute[235803]: 2025-10-02 12:58:24.010 2 DEBUG nova.storage.rbd_utils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image 3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:58:24 np0005466031 nova_compute[235803]: 2025-10-02 12:58:24.014 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:58:24 np0005466031 nova_compute[235803]: 2025-10-02 12:58:24.057 2 DEBUG nova.policy [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '57a1608ca1fc4bef8b6bc6ad68be3999', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0b15f29eb32d4c5cba98baa238cc12e1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:58:24 np0005466031 nova_compute[235803]: 2025-10-02 12:58:24.109 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:58:24 np0005466031 nova_compute[235803]: 2025-10-02 12:58:24.110 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:58:24 np0005466031 nova_compute[235803]: 2025-10-02 12:58:24.111 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:58:24 np0005466031 nova_compute[235803]: 2025-10-02 12:58:24.111 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:58:24 np0005466031 nova_compute[235803]: 2025-10-02 12:58:24.255 2 DEBUG nova.storage.rbd_utils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image 3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:58:24 np0005466031 nova_compute[235803]: 2025-10-02 12:58:24.259 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:58:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:25 np0005466031 nova_compute[235803]: 2025-10-02 12:58:25.015 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.757s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:58:25 np0005466031 nova_compute[235803]: 2025-10-02 12:58:25.082 2 DEBUG nova.storage.rbd_utils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] resizing rbd image 3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:58:25 np0005466031 nova_compute[235803]: 2025-10-02 12:58:25.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:58:25 np0005466031 nova_compute[235803]: 2025-10-02 12:58:25.206 2 DEBUG nova.objects.instance [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'migration_context' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:58:25 np0005466031 nova_compute[235803]: 2025-10-02 12:58:25.220 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:58:25 np0005466031 nova_compute[235803]: 2025-10-02 12:58:25.220 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Ensure instance console log exists: /var/lib/nova/instances/3ae8ab55-a114-4284-9d2d-e70ba073cb66/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:58:25 np0005466031 nova_compute[235803]: 2025-10-02 12:58:25.221 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:58:25 np0005466031 nova_compute[235803]: 2025-10-02 12:58:25.221 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:58:25 np0005466031 nova_compute[235803]: 2025-10-02 12:58:25.221 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:58:25 np0005466031 nova_compute[235803]: 2025-10-02 12:58:25.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:58:25 np0005466031 nova_compute[235803]: 2025-10-02 12:58:25.670 2 DEBUG nova.network.neutron [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Successfully created port: 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:58:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:58:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:25.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:58:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:58:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:25.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:58:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:25.867 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:58:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:25.868 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:58:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:25.868 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:58:26 np0005466031 podman[306116]: 2025-10-02 12:58:26.633141348 +0000 UTC m=+0.061306457 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:58:26 np0005466031 podman[306117]: 2025-10-02 12:58:26.66341613 +0000 UTC m=+0.091330802 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:58:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:27.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:27.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:27 np0005466031 nova_compute[235803]: 2025-10-02 12:58:27.765 2 DEBUG nova.network.neutron [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Successfully updated port: 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:58:27 np0005466031 nova_compute[235803]: 2025-10-02 12:58:27.807 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:58:27 np0005466031 nova_compute[235803]: 2025-10-02 12:58:27.807 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquired lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:58:27 np0005466031 nova_compute[235803]: 2025-10-02 12:58:27.808 2 DEBUG nova.network.neutron [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:58:27 np0005466031 nova_compute[235803]: 2025-10-02 12:58:27.923 2 DEBUG nova.compute.manager [req-000e8123-e818-4830-a4df-f46f661d572d req-1546b948-d79a-4473-bb25-54a3f9b96a1d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-changed-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:27 np0005466031 nova_compute[235803]: 2025-10-02 12:58:27.923 2 DEBUG nova.compute.manager [req-000e8123-e818-4830-a4df-f46f661d572d req-1546b948-d79a-4473-bb25-54a3f9b96a1d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Refreshing instance network info cache due to event network-changed-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:58:27 np0005466031 nova_compute[235803]: 2025-10-02 12:58:27.924 2 DEBUG oslo_concurrency.lockutils [req-000e8123-e818-4830-a4df-f46f661d572d req-1546b948-d79a-4473-bb25-54a3f9b96a1d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:58:28 np0005466031 nova_compute[235803]: 2025-10-02 12:58:28.129 2 DEBUG nova.network.neutron [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.377 2 DEBUG nova.network.neutron [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Updating instance_info_cache with network_info: [{"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:58:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.399 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Releasing lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.399 2 DEBUG nova.compute.manager [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance network_info: |[{"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.399 2 DEBUG oslo_concurrency.lockutils [req-000e8123-e818-4830-a4df-f46f661d572d req-1546b948-d79a-4473-bb25-54a3f9b96a1d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.399 2 DEBUG nova.network.neutron [req-000e8123-e818-4830-a4df-f46f661d572d req-1546b948-d79a-4473-bb25-54a3f9b96a1d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Refreshing network info cache for port 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.402 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Start _get_guest_xml network_info=[{"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.405 2 WARNING nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.410 2 DEBUG nova.virt.libvirt.host [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.410 2 DEBUG nova.virt.libvirt.host [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.413 2 DEBUG nova.virt.libvirt.host [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.413 2 DEBUG nova.virt.libvirt.host [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.414 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.415 2 DEBUG nova.virt.hardware [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.415 2 DEBUG nova.virt.hardware [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.415 2 DEBUG nova.virt.hardware [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.415 2 DEBUG nova.virt.hardware [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.416 2 DEBUG nova.virt.hardware [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.416 2 DEBUG nova.virt.hardware [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.416 2 DEBUG nova.virt.hardware [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.416 2 DEBUG nova.virt.hardware [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.417 2 DEBUG nova.virt.hardware [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.417 2 DEBUG nova.virt.hardware [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.417 2 DEBUG nova.virt.hardware [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.420 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:29.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:29.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:58:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/697967858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.858 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.881 2 DEBUG nova.storage.rbd_utils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image 3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:58:29 np0005466031 nova_compute[235803]: 2025-10-02 12:58:29.884 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:58:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1978523503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.338 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.340 2 DEBUG nova.virt.libvirt.vif [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:58:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-197733369',display_name='tempest-AttachVolumeTestJSON-server-197733369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-197733369',id=161,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0vPcwtZqWNoz35xlLL8rA9I5zzmnpaTwEg2lG1uREaylQJtWDQ7C8ts/WLwGO12hcHJZb6T5z5A5i0feX0Swe4SvSwLaV8nuj55Atu7NPM6sp4Qjn/LDy82DX01KQOsw==',key_name='tempest-keypair-1518482220',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0b15f29eb32d4c5cba98baa238cc12e1',ramdisk_id='',reservation_id='r-7y34l70i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-68983480',owner_user_name='tempest-AttachVolumeTestJSON-68983480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:58:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57a1608ca1fc4bef8b6bc6ad68be3999',uuid=3ae8ab55-a114-4284-9d2d-e70ba073cb66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.340 2 DEBUG nova.network.os_vif_util [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converting VIF {"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.341 2 DEBUG nova.network.os_vif_util [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.342 2 DEBUG nova.objects.instance [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.366 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  <uuid>3ae8ab55-a114-4284-9d2d-e70ba073cb66</uuid>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  <name>instance-000000a1</name>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <nova:name>tempest-AttachVolumeTestJSON-server-197733369</nova:name>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:58:29</nova:creationTime>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <nova:user uuid="57a1608ca1fc4bef8b6bc6ad68be3999">tempest-AttachVolumeTestJSON-68983480-project-member</nova:user>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <nova:project uuid="0b15f29eb32d4c5cba98baa238cc12e1">tempest-AttachVolumeTestJSON-68983480</nova:project>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <nova:port uuid="7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <entry name="serial">3ae8ab55-a114-4284-9d2d-e70ba073cb66</entry>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <entry name="uuid">3ae8ab55-a114-4284-9d2d-e70ba073cb66</entry>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk.config">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:4f:3f:6f"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <target dev="tap7fa0e3d4-af"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/3ae8ab55-a114-4284-9d2d-e70ba073cb66/console.log" append="off"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:58:30 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:58:30 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:58:30 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:58:30 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.367 2 DEBUG nova.compute.manager [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Preparing to wait for external event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.367 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.367 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.367 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.368 2 DEBUG nova.virt.libvirt.vif [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:58:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-197733369',display_name='tempest-AttachVolumeTestJSON-server-197733369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-197733369',id=161,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0vPcwtZqWNoz35xlLL8rA9I5zzmnpaTwEg2lG1uREaylQJtWDQ7C8ts/WLwGO12hcHJZb6T5z5A5i0feX0Swe4SvSwLaV8nuj55Atu7NPM6sp4Qjn/LDy82DX01KQOsw==',key_name='tempest-keypair-1518482220',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0b15f29eb32d4c5cba98baa238cc12e1',ramdisk_id='',reservation_id='r-7y34l70i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-68983480',owner_user_name='tempest-AttachVolumeTestJSON-68983480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:58:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57a1608ca1fc4bef8b6bc6ad68be3999',uuid=3ae8ab55-a114-4284-9d2d-e70ba073cb66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.368 2 DEBUG nova.network.os_vif_util [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converting VIF {"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.369 2 DEBUG nova.network.os_vif_util [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.369 2 DEBUG os_vif [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.369 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.370 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fa0e3d4-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7fa0e3d4-af, col_values=(('external_ids', {'iface-id': '7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:3f:6f', 'vm-uuid': '3ae8ab55-a114-4284-9d2d-e70ba073cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:30 np0005466031 NetworkManager[44907]: <info>  [1759409910.3756] manager: (tap7fa0e3d4-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.381 2 INFO os_vif [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af')#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.451 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.451 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.451 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No VIF found with MAC fa:16:3e:4f:3f:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.451 2 INFO nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Using config drive#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.473 2 DEBUG nova.storage.rbd_utils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image 3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.852 2 INFO nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Creating config drive at /var/lib/nova/instances/3ae8ab55-a114-4284-9d2d-e70ba073cb66/disk.config#033[00m
Oct  2 08:58:30 np0005466031 nova_compute[235803]: 2025-10-02 12:58:30.858 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ae8ab55-a114-4284-9d2d-e70ba073cb66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7b6d_krp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.015 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ae8ab55-a114-4284-9d2d-e70ba073cb66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7b6d_krp" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.047 2 DEBUG nova.storage.rbd_utils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] rbd image 3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.051 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ae8ab55-a114-4284-9d2d-e70ba073cb66/disk.config 3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.279 2 DEBUG nova.network.neutron [req-000e8123-e818-4830-a4df-f46f661d572d req-1546b948-d79a-4473-bb25-54a3f9b96a1d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Updated VIF entry in instance network info cache for port 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.280 2 DEBUG nova.network.neutron [req-000e8123-e818-4830-a4df-f46f661d572d req-1546b948-d79a-4473-bb25-54a3f9b96a1d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Updating instance_info_cache with network_info: [{"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.304 2 DEBUG oslo_concurrency.lockutils [req-000e8123-e818-4830-a4df-f46f661d572d req-1546b948-d79a-4473-bb25-54a3f9b96a1d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.403 2 DEBUG oslo_concurrency.processutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ae8ab55-a114-4284-9d2d-e70ba073cb66/disk.config 3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.404 2 INFO nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Deleting local config drive /var/lib/nova/instances/3ae8ab55-a114-4284-9d2d-e70ba073cb66/disk.config because it was imported into RBD.#033[00m
Oct  2 08:58:31 np0005466031 kernel: tap7fa0e3d4-af: entered promiscuous mode
Oct  2 08:58:31 np0005466031 NetworkManager[44907]: <info>  [1759409911.4520] manager: (tap7fa0e3d4-af): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Oct  2 08:58:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:31Z|00600|binding|INFO|Claiming lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for this chassis.
Oct  2 08:58:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:31Z|00601|binding|INFO|7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101: Claiming fa:16:3e:4f:3f:6f 10.100.0.3
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.463 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:3f:6f 10.100.0.3'], port_security=['fa:16:3e:4f:3f:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3ae8ab55-a114-4284-9d2d-e70ba073cb66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf287730-8b39-470a-9870-d19a70f15c4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b15f29eb32d4c5cba98baa238cc12e1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cfe3e2cd-013f-433d-b677-80f8e7ef6a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205bb72b-7c7b-4eea-8f2e-e72a1fd482ed, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.465 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 in datapath cf287730-8b39-470a-9870-d19a70f15c4d bound to our chassis#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.466 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf287730-8b39-470a-9870-d19a70f15c4d#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.478 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c1922b13-d598-4f6e-9b2d-f7c9871d342c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.479 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf287730-81 in ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.480 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf287730-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.480 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[20b655d0-91a1-45e7-8a57-e86c3d152e47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.481 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b9494d25-76d4-4b56-9975-e03268f7b39a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.504 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ce7c46-eb26-4818-80d6-192d865e4f95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 systemd-machined[192227]: New machine qemu-71-instance-000000a1.
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.531 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0fdf87e5-6578-402d-95e4-9aa3dc80f99e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 systemd[1]: Started Virtual Machine qemu-71-instance-000000a1.
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:31Z|00602|binding|INFO|Setting lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 ovn-installed in OVS
Oct  2 08:58:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:31Z|00603|binding|INFO|Setting lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 up in Southbound
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:31 np0005466031 systemd-udevd[306391]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:58:31 np0005466031 podman[306349]: 2025-10-02 12:58:31.557757155 +0000 UTC m=+0.070515063 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:58:31 np0005466031 NetworkManager[44907]: <info>  [1759409911.5607] device (tap7fa0e3d4-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:58:31 np0005466031 NetworkManager[44907]: <info>  [1759409911.5618] device (tap7fa0e3d4-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.563 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[407081c5-7f2d-408b-9a16-4620969cdb21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 NetworkManager[44907]: <info>  [1759409911.5696] manager: (tapcf287730-80): new Veth device (/org/freedesktop/NetworkManager/Devices/272)
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.568 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b610ae14-c3d0-48a1-a761-4249380b6de9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 podman[306347]: 2025-10-02 12:58:31.586979616 +0000 UTC m=+0.099728984 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.597 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b03c3a33-3fbf-457d-a7af-97dfb5bd3ffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.600 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[43f1f9fd-e094-4b0c-a901-a2af4760bfa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 NetworkManager[44907]: <info>  [1759409911.6202] device (tapcf287730-80): carrier: link connected
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.625 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[70f1f7d9-13c5-422a-9af1-ac10ee1a19e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.644 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[716c49a8-d83b-4682-8f02-c2636f2af61d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf287730-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:e9:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776721, 'reachable_time': 37764, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306423, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.659 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[85a3d068-7333-4a1f-9236-598d13b4d953]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:e9a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 776721, 'tstamp': 776721}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306424, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.675 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5129567b-0263-4fc5-a0f2-178f758884f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf287730-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:e9:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776721, 'reachable_time': 37764, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306425, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.709 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8612cafc-dd85-43c6-8557-700e90e86b63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:58:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:31.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.743 2 DEBUG nova.compute.manager [req-d8b29d4b-4a4f-4e25-a8d5-4527f389223e req-8c91344f-2c2d-4cff-8f18-3b752eddb801 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.743 2 DEBUG oslo_concurrency.lockutils [req-d8b29d4b-4a4f-4e25-a8d5-4527f389223e req-8c91344f-2c2d-4cff-8f18-3b752eddb801 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.744 2 DEBUG oslo_concurrency.lockutils [req-d8b29d4b-4a4f-4e25-a8d5-4527f389223e req-8c91344f-2c2d-4cff-8f18-3b752eddb801 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.744 2 DEBUG oslo_concurrency.lockutils [req-d8b29d4b-4a4f-4e25-a8d5-4527f389223e req-8c91344f-2c2d-4cff-8f18-3b752eddb801 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.744 2 DEBUG nova.compute.manager [req-d8b29d4b-4a4f-4e25-a8d5-4527f389223e req-8c91344f-2c2d-4cff-8f18-3b752eddb801 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Processing event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:58:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:31.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.772 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b88a98f6-74b5-4339-b6fa-59116cafd0e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.773 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf287730-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.774 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.774 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf287730-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:31 np0005466031 NetworkManager[44907]: <info>  [1759409911.7766] manager: (tapcf287730-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Oct  2 08:58:31 np0005466031 kernel: tapcf287730-80: entered promiscuous mode
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.781 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf287730-80, col_values=(('external_ids', {'iface-id': '9ae1fd94-b5f3-4333-9533-d979eb84ea8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:31 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:31Z|00604|binding|INFO|Releasing lport 9ae1fd94-b5f3-4333-9533-d979eb84ea8f from this chassis (sb_readonly=0)
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.785 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf287730-8b39-470a-9870-d19a70f15c4d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf287730-8b39-470a-9870-d19a70f15c4d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.786 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[63c2cd0a-5966-43b3-9187-e6131ae1108d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.787 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-cf287730-8b39-470a-9870-d19a70f15c4d
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/cf287730-8b39-470a-9870-d19a70f15c4d.pid.haproxy
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID cf287730-8b39-470a-9870-d19a70f15c4d
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:58:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:31.787 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'env', 'PROCESS_TAG=haproxy-cf287730-8b39-470a-9870-d19a70f15c4d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf287730-8b39-470a-9870-d19a70f15c4d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:58:31 np0005466031 nova_compute[235803]: 2025-10-02 12:58:31.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:32 np0005466031 podman[306458]: 2025-10-02 12:58:32.201105711 +0000 UTC m=+0.066079835 container create 16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  2 08:58:32 np0005466031 systemd[1]: Started libpod-conmon-16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150.scope.
Oct  2 08:58:32 np0005466031 podman[306458]: 2025-10-02 12:58:32.164776284 +0000 UTC m=+0.029750408 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:58:32 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:58:32 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a625647d938d90b0482d585c62509c78d7e422974a3c2bcb16b6b86de558598/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:58:32 np0005466031 podman[306458]: 2025-10-02 12:58:32.294725738 +0000 UTC m=+0.159699862 container init 16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:58:32 np0005466031 podman[306458]: 2025-10-02 12:58:32.300419332 +0000 UTC m=+0.165393456 container start 16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:58:32 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[306473]: [NOTICE]   (306477) : New worker (306479) forked
Oct  2 08:58:32 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[306473]: [NOTICE]   (306477) : Loading success.
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.456 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409913.455786, 3ae8ab55-a114-4284-9d2d-e70ba073cb66 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.456 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] VM Started (Lifecycle Event)#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.458 2 DEBUG nova.compute.manager [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.462 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.466 2 INFO nova.virt.libvirt.driver [-] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance spawned successfully.#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.467 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.489 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.495 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.498 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.499 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.500 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.500 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.500 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.501 2 DEBUG nova.virt.libvirt.driver [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.539 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.539 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409913.4559407, 3ae8ab55-a114-4284-9d2d-e70ba073cb66 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.540 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.562 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.566 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409913.4610312, 3ae8ab55-a114-4284-9d2d-e70ba073cb66 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.566 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.573 2 INFO nova.compute.manager [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Took 9.65 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.574 2 DEBUG nova.compute.manager [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.583 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.587 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.613 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.641 2 INFO nova.compute.manager [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Took 10.67 seconds to build instance.#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.664 2 DEBUG oslo_concurrency.lockutils [None req-fb1b1889-c418-4de2-a6a1-4d07da188e94 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:58:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:33.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:58:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:33.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.977 2 DEBUG nova.compute.manager [req-2254e701-0254-40cf-a309-6091c5a0a5c5 req-7f0409eb-f106-4c89-aa2f-7befc4984c9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.978 2 DEBUG oslo_concurrency.lockutils [req-2254e701-0254-40cf-a309-6091c5a0a5c5 req-7f0409eb-f106-4c89-aa2f-7befc4984c9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.979 2 DEBUG oslo_concurrency.lockutils [req-2254e701-0254-40cf-a309-6091c5a0a5c5 req-7f0409eb-f106-4c89-aa2f-7befc4984c9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.979 2 DEBUG oslo_concurrency.lockutils [req-2254e701-0254-40cf-a309-6091c5a0a5c5 req-7f0409eb-f106-4c89-aa2f-7befc4984c9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.980 2 DEBUG nova.compute.manager [req-2254e701-0254-40cf-a309-6091c5a0a5c5 req-7f0409eb-f106-4c89-aa2f-7befc4984c9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] No waiting events found dispatching network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:58:33 np0005466031 nova_compute[235803]: 2025-10-02 12:58:33.980 2 WARNING nova.compute.manager [req-2254e701-0254-40cf-a309-6091c5a0a5c5 req-7f0409eb-f106-4c89-aa2f-7befc4984c9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received unexpected event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:58:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:35 np0005466031 nova_compute[235803]: 2025-10-02 12:58:35.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:35 np0005466031 NetworkManager[44907]: <info>  [1759409915.2942] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Oct  2 08:58:35 np0005466031 NetworkManager[44907]: <info>  [1759409915.2955] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Oct  2 08:58:35 np0005466031 nova_compute[235803]: 2025-10-02 12:58:35.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:35 np0005466031 nova_compute[235803]: 2025-10-02 12:58:35.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:35 np0005466031 nova_compute[235803]: 2025-10-02 12:58:35.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:35 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:35Z|00605|binding|INFO|Releasing lport 00b1af69-0dd2-4e03-9090-dc7ccfcae6b6 from this chassis (sb_readonly=0)
Oct  2 08:58:35 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:35Z|00606|binding|INFO|Releasing lport 9ae1fd94-b5f3-4333-9533-d979eb84ea8f from this chassis (sb_readonly=0)
Oct  2 08:58:35 np0005466031 nova_compute[235803]: 2025-10-02 12:58:35.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:35 np0005466031 nova_compute[235803]: 2025-10-02 12:58:35.709 2 DEBUG nova.compute.manager [req-fe61c42a-99c0-4e41-b0de-c9c779795576 req-30d5aa73-cc26-467c-afa1-362df85b7810 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-changed-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:35 np0005466031 nova_compute[235803]: 2025-10-02 12:58:35.710 2 DEBUG nova.compute.manager [req-fe61c42a-99c0-4e41-b0de-c9c779795576 req-30d5aa73-cc26-467c-afa1-362df85b7810 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Refreshing instance network info cache due to event network-changed-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:58:35 np0005466031 nova_compute[235803]: 2025-10-02 12:58:35.710 2 DEBUG oslo_concurrency.lockutils [req-fe61c42a-99c0-4e41-b0de-c9c779795576 req-30d5aa73-cc26-467c-afa1-362df85b7810 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:58:35 np0005466031 nova_compute[235803]: 2025-10-02 12:58:35.710 2 DEBUG oslo_concurrency.lockutils [req-fe61c42a-99c0-4e41-b0de-c9c779795576 req-30d5aa73-cc26-467c-afa1-362df85b7810 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:58:35 np0005466031 nova_compute[235803]: 2025-10-02 12:58:35.711 2 DEBUG nova.network.neutron [req-fe61c42a-99c0-4e41-b0de-c9c779795576 req-30d5aa73-cc26-467c-afa1-362df85b7810 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Refreshing network info cache for port 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:58:35 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:35Z|00607|binding|INFO|Releasing lport 00b1af69-0dd2-4e03-9090-dc7ccfcae6b6 from this chassis (sb_readonly=0)
Oct  2 08:58:35 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:35Z|00608|binding|INFO|Releasing lport 9ae1fd94-b5f3-4333-9533-d979eb84ea8f from this chassis (sb_readonly=0)
Oct  2 08:58:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:35.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:35.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:35 np0005466031 nova_compute[235803]: 2025-10-02 12:58:35.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:37 np0005466031 nova_compute[235803]: 2025-10-02 12:58:37.269 2 DEBUG nova.network.neutron [req-fe61c42a-99c0-4e41-b0de-c9c779795576 req-30d5aa73-cc26-467c-afa1-362df85b7810 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Updated VIF entry in instance network info cache for port 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:58:37 np0005466031 nova_compute[235803]: 2025-10-02 12:58:37.270 2 DEBUG nova.network.neutron [req-fe61c42a-99c0-4e41-b0de-c9c779795576 req-30d5aa73-cc26-467c-afa1-362df85b7810 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Updating instance_info_cache with network_info: [{"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:58:37 np0005466031 nova_compute[235803]: 2025-10-02 12:58:37.290 2 DEBUG oslo_concurrency.lockutils [req-fe61c42a-99c0-4e41-b0de-c9c779795576 req-30d5aa73-cc26-467c-afa1-362df85b7810 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:58:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:37.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:58:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:37.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:58:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:39.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:39.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:40 np0005466031 nova_compute[235803]: 2025-10-02 12:58:40.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:40 np0005466031 nova_compute[235803]: 2025-10-02 12:58:40.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:41.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:58:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:41.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:58:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:43.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:58:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:43.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:58:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:45 np0005466031 nova_compute[235803]: 2025-10-02 12:58:45.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:45 np0005466031 nova_compute[235803]: 2025-10-02 12:58:45.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:45.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:45.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:46 np0005466031 nova_compute[235803]: 2025-10-02 12:58:46.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:47 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:47Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:3f:6f 10.100.0.3
Oct  2 08:58:47 np0005466031 ovn_controller[132413]: 2025-10-02T12:58:47Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:3f:6f 10.100.0.3
Oct  2 08:58:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:47.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:47.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:58:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:49.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:58:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:49.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:50 np0005466031 nova_compute[235803]: 2025-10-02 12:58:50.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:50 np0005466031 nova_compute[235803]: 2025-10-02 12:58:50.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:50 np0005466031 nova_compute[235803]: 2025-10-02 12:58:50.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:51.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27bdf6f0 =====
Oct  2 08:58:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27bdf6f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:51 np0005466031 radosgw[82465]: beast: 0x7f1c27bdf6f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:51.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:53.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:58:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:53.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:58:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:55 np0005466031 nova_compute[235803]: 2025-10-02 12:58:55.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:55 np0005466031 nova_compute[235803]: 2025-10-02 12:58:55.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:55.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:55.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:57 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Oct  2 08:58:57 np0005466031 podman[306594]: 2025-10-02 12:58:57.618303183 +0000 UTC m=+0.051691431 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 08:58:57 np0005466031 podman[306595]: 2025-10-02 12:58:57.644210809 +0000 UTC m=+0.076694701 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:58:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:57.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:57.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:59 np0005466031 nova_compute[235803]: 2025-10-02 12:58:59.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:59.311 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:58:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:58:59.312 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:58:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:59.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:58:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:59.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:00 np0005466031 nova_compute[235803]: 2025-10-02 12:59:00.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:00 np0005466031 nova_compute[235803]: 2025-10-02 12:59:00.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:01 np0005466031 nova_compute[235803]: 2025-10-02 12:59:01.650 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:59:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:01.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:59:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:01.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:02 np0005466031 podman[306668]: 2025-10-02 12:59:02.56973761 +0000 UTC m=+0.053985026 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:59:02 np0005466031 podman[306667]: 2025-10-02 12:59:02.597898732 +0000 UTC m=+0.083135137 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:59:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:03.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:03.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:59:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:59:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:05 np0005466031 nova_compute[235803]: 2025-10-02 12:59:05.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:05 np0005466031 nova_compute[235803]: 2025-10-02 12:59:05.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:05 np0005466031 nova_compute[235803]: 2025-10-02 12:59:05.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:59:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:05.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:59:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:59:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:05.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:59:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:07.313 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:59:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:07.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:59:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:07.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:09 np0005466031 nova_compute[235803]: 2025-10-02 12:59:09.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:09.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:09.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:09 np0005466031 nova_compute[235803]: 2025-10-02 12:59:09.979 2 DEBUG oslo_concurrency.lockutils [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:09 np0005466031 nova_compute[235803]: 2025-10-02 12:59:09.979 2 DEBUG oslo_concurrency.lockutils [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:09 np0005466031 nova_compute[235803]: 2025-10-02 12:59:09.999 2 DEBUG nova.objects.instance [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'flavor' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:10 np0005466031 nova_compute[235803]: 2025-10-02 12:59:10.042 2 DEBUG oslo_concurrency.lockutils [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:10 np0005466031 nova_compute[235803]: 2025-10-02 12:59:10.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:10 np0005466031 nova_compute[235803]: 2025-10-02 12:59:10.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:11.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:11.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.126 2 DEBUG oslo_concurrency.lockutils [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "201e9977-0b48-484d-8463-4ff8484498bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.127 2 DEBUG oslo_concurrency.lockutils [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.127 2 DEBUG oslo_concurrency.lockutils [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "201e9977-0b48-484d-8463-4ff8484498bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.127 2 DEBUG oslo_concurrency.lockutils [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.127 2 DEBUG oslo_concurrency.lockutils [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.129 2 INFO nova.compute.manager [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Terminating instance#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.129 2 DEBUG nova.compute.manager [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.169 2 DEBUG oslo_concurrency.lockutils [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.170 2 DEBUG oslo_concurrency.lockutils [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.170 2 INFO nova.compute.manager [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Attaching volume 73427f9e-6c2e-43a3-a2f1-b8df606e61c3 to /dev/vdb#033[00m
Oct  2 08:59:12 np0005466031 kernel: tapfc519e38-82 (unregistering): left promiscuous mode
Oct  2 08:59:12 np0005466031 NetworkManager[44907]: <info>  [1759409952.2410] device (tapfc519e38-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:59:12 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:12Z|00609|binding|INFO|Releasing lport fc519e38-826c-47a6-a534-67b32458974d from this chassis (sb_readonly=0)
Oct  2 08:59:12 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:12Z|00610|binding|INFO|Setting lport fc519e38-826c-47a6-a534-67b32458974d down in Southbound
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:12 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:12Z|00611|binding|INFO|Removing iface tapfc519e38-82 ovn-installed in OVS
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.280 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:e1:59 10.100.0.28'], port_security=['fa:16:3e:d0:e1:59 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': '201e9977-0b48-484d-8463-4ff8484498bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': '98956923-c4b5-4f9c-898f-15ab7973f1df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62f8ddde-a575-4c3a-bc2f-e3ff31baaf2d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=fc519e38-826c-47a6-a534-67b32458974d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.281 141898 INFO neutron.agent.ovn.metadata.agent [-] Port fc519e38-826c-47a6-a534-67b32458974d in datapath bc72facc-29fc-4f60-8da4-b2b18aba70d2 unbound from our chassis#033[00m
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.282 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bc72facc-29fc-4f60-8da4-b2b18aba70d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.284 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3449abd0-6660-4e2e-99db-c9a930ccfe79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.284 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2 namespace which is not needed anymore#033[00m
Oct  2 08:59:12 np0005466031 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d000000a0.scope: Deactivated successfully.
Oct  2 08:59:12 np0005466031 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d000000a0.scope: Consumed 14.838s CPU time.
Oct  2 08:59:12 np0005466031 systemd-machined[192227]: Machine qemu-70-instance-000000a0 terminated.
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.342 2 DEBUG os_brick.utils [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.344 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.355 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.355 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[3059a6ce-3086-456a-9e49-6a8511be2bc0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.363 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.366 2 INFO nova.virt.libvirt.driver [-] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Instance destroyed successfully.#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.367 2 DEBUG nova.objects.instance [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid 201e9977-0b48-484d-8463-4ff8484498bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.370 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.370 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[efb6bc2d-0011-4276-bdff-1584f01cbfba]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.373 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.384 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.384 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[f637e676-2394-4990-9843-bc7af02ce88b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.386 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6d2113-f228-4cb9-b7fa-4857424ac5e8]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.387 2 DEBUG oslo_concurrency.processutils [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:12 np0005466031 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[305713]: [NOTICE]   (305731) : haproxy version is 2.8.14-c23fe91
Oct  2 08:59:12 np0005466031 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[305713]: [NOTICE]   (305731) : path to executable is /usr/sbin/haproxy
Oct  2 08:59:12 np0005466031 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[305713]: [WARNING]  (305731) : Exiting Master process...
Oct  2 08:59:12 np0005466031 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[305713]: [WARNING]  (305731) : Exiting Master process...
Oct  2 08:59:12 np0005466031 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[305713]: [ALERT]    (305731) : Current worker (305737) exited with code 143 (Terminated)
Oct  2 08:59:12 np0005466031 neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2[305713]: [WARNING]  (305731) : All workers exited. Exiting... (0)
Oct  2 08:59:12 np0005466031 systemd[1]: libpod-005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257.scope: Deactivated successfully.
Oct  2 08:59:12 np0005466031 podman[306904]: 2025-10-02 12:59:12.433871974 +0000 UTC m=+0.048705644 container died 005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.440 2 DEBUG nova.virt.libvirt.vif [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:57:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-645252589',display_name='tempest-TestNetworkBasicOps-server-645252589',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-645252589',id=160,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE0f6RSAlvvTz//rGZzdM0INxJgvsGuO9EKDvMAzOgJFDC6XQDuxQyoFolakbBR2ntmHMkocOrDkEDQjt8yPJzmgMsdO7QBV8P0/QPVX8scJgw8dmu2DWuSZU/ASABQPAg==',key_name='tempest-TestNetworkBasicOps-367999511',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:58:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-uh0yev6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:58:02Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=201e9977-0b48-484d-8463-4ff8484498bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc519e38-826c-47a6-a534-67b32458974d", "address": "fa:16:3e:d0:e1:59", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc519e38-82", "ovs_interfaceid": "fc519e38-826c-47a6-a534-67b32458974d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.441 2 DEBUG nova.network.os_vif_util [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "fc519e38-826c-47a6-a534-67b32458974d", "address": "fa:16:3e:d0:e1:59", "network": {"id": "bc72facc-29fc-4f60-8da4-b2b18aba70d2", "bridge": "br-int", "label": "tempest-network-smoke--1350385872", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc519e38-82", "ovs_interfaceid": "fc519e38-826c-47a6-a534-67b32458974d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.445 2 DEBUG nova.network.os_vif_util [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:e1:59,bridge_name='br-int',has_traffic_filtering=True,id=fc519e38-826c-47a6-a534-67b32458974d,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc519e38-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.446 2 DEBUG os_vif [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:e1:59,bridge_name='br-int',has_traffic_filtering=True,id=fc519e38-826c-47a6-a534-67b32458974d,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc519e38-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.450 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc519e38-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.452 2 DEBUG oslo_concurrency.processutils [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "nvme version" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.456 2 DEBUG os_brick.initiator.connectors.lightos [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.457 2 DEBUG os_brick.initiator.connectors.lightos [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.457 2 DEBUG os_brick.initiator.connectors.lightos [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.458 2 DEBUG os_brick.utils [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] <== get_connector_properties: return (114ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.458 2 DEBUG nova.virt.block_device [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Updating existing volume attachment record: 31bd947d-5e0a-4d4c-8968-c44c2406b07f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.464 2 INFO os_vif [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:e1:59,bridge_name='br-int',has_traffic_filtering=True,id=fc519e38-826c-47a6-a534-67b32458974d,network=Network(bc72facc-29fc-4f60-8da4-b2b18aba70d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc519e38-82')#033[00m
Oct  2 08:59:12 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257-userdata-shm.mount: Deactivated successfully.
Oct  2 08:59:12 np0005466031 systemd[1]: var-lib-containers-storage-overlay-c153031815e29bc83e043abfb06b3914817b68cdb6a98f30ca922e1871fc0348-merged.mount: Deactivated successfully.
Oct  2 08:59:12 np0005466031 podman[306904]: 2025-10-02 12:59:12.483671429 +0000 UTC m=+0.098505099 container cleanup 005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:59:12 np0005466031 systemd[1]: libpod-conmon-005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257.scope: Deactivated successfully.
Oct  2 08:59:12 np0005466031 podman[306950]: 2025-10-02 12:59:12.544502331 +0000 UTC m=+0.038616033 container remove 005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.553 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3f04eb6d-daca-4c48-be71-713af32beedd]: (4, ('Thu Oct  2 12:59:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2 (005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257)\n005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257\nThu Oct  2 12:59:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2 (005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257)\n005e47d4af76b4fcc28d32dcfa3351bb1719a3d9fde3077981242640b1a66257\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.555 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1d3c56-ecc1-4d58-ab52-0beaf55f6cfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.556 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc72facc-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:12 np0005466031 kernel: tapbc72facc-20: left promiscuous mode
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.578 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb9b590-3350-4730-800e-4e7a1cbbe6ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.606 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[49cfaefe-ee66-4f86-9bce-0de1d9f84e16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.607 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc7365c-4ed8-4169-8df7-bb45fb836ffa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.624 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2e294848-a3b2-40f7-a2f0-c89a998efd66]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773692, 'reachable_time': 39177, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306968, 'error': None, 'target': 'ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.627 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bc72facc-29fc-4f60-8da4-b2b18aba70d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:59:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:12.627 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[7943fc02-74b7-4014-a53f-73e4791aba5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:12 np0005466031 systemd[1]: run-netns-ovnmeta\x2dbc72facc\x2d29fc\x2d4f60\x2d8da4\x2db2b18aba70d2.mount: Deactivated successfully.
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.929 2 INFO nova.virt.libvirt.driver [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Deleting instance files /var/lib/nova/instances/201e9977-0b48-484d-8463-4ff8484498bf_del#033[00m
Oct  2 08:59:12 np0005466031 nova_compute[235803]: 2025-10-02 12:59:12.930 2 INFO nova.virt.libvirt.driver [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Deletion of /var/lib/nova/instances/201e9977-0b48-484d-8463-4ff8484498bf_del complete#033[00m
Oct  2 08:59:13 np0005466031 nova_compute[235803]: 2025-10-02 12:59:13.103 2 INFO nova.compute.manager [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Took 0.97 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:59:13 np0005466031 nova_compute[235803]: 2025-10-02 12:59:13.104 2 DEBUG oslo.service.loopingcall [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:59:13 np0005466031 nova_compute[235803]: 2025-10-02 12:59:13.104 2 DEBUG nova.compute.manager [-] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:59:13 np0005466031 nova_compute[235803]: 2025-10-02 12:59:13.104 2 DEBUG nova.network.neutron [-] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:59:13 np0005466031 nova_compute[235803]: 2025-10-02 12:59:13.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:13 np0005466031 nova_compute[235803]: 2025-10-02 12:59:13.679 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:13 np0005466031 nova_compute[235803]: 2025-10-02 12:59:13.680 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:13 np0005466031 nova_compute[235803]: 2025-10-02 12:59:13.680 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:13 np0005466031 nova_compute[235803]: 2025-10-02 12:59:13.680 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:59:13 np0005466031 nova_compute[235803]: 2025-10-02 12:59:13.681 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:13.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:13.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:13 np0005466031 nova_compute[235803]: 2025-10-02 12:59:13.870 2 DEBUG nova.objects.instance [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'flavor' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:13 np0005466031 nova_compute[235803]: 2025-10-02 12:59:13.944 2 DEBUG nova.virt.libvirt.driver [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Attempting to attach volume 73427f9e-6c2e-43a3-a2f1-b8df606e61c3 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 08:59:13 np0005466031 nova_compute[235803]: 2025-10-02 12:59:13.949 2 DEBUG nova.virt.libvirt.guest [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:59:13 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:59:13 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-73427f9e-6c2e-43a3-a2f1-b8df606e61c3">
Oct  2 08:59:13 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:59:13 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:59:13 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:59:13 np0005466031 nova_compute[235803]:  </source>
Oct  2 08:59:13 np0005466031 nova_compute[235803]:  <auth username="openstack">
Oct  2 08:59:13 np0005466031 nova_compute[235803]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:59:13 np0005466031 nova_compute[235803]:  </auth>
Oct  2 08:59:13 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:59:13 np0005466031 nova_compute[235803]:  <serial>73427f9e-6c2e-43a3-a2f1-b8df606e61c3</serial>
Oct  2 08:59:13 np0005466031 nova_compute[235803]: </disk>
Oct  2 08:59:13 np0005466031 nova_compute[235803]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.110 2 DEBUG nova.virt.libvirt.driver [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.110 2 DEBUG nova.virt.libvirt.driver [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.111 2 DEBUG nova.virt.libvirt.driver [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.111 2 DEBUG nova.virt.libvirt.driver [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] No VIF found with MAC fa:16:3e:4f:3f:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:59:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:59:14 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3718952746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.153 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:14 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:14 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.321 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.321 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.321 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:59:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.463 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Received event network-vif-unplugged-fc519e38-826c-47a6-a534-67b32458974d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.464 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "201e9977-0b48-484d-8463-4ff8484498bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.465 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.466 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.466 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] No waiting events found dispatching network-vif-unplugged-fc519e38-826c-47a6-a534-67b32458974d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.466 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Received event network-vif-unplugged-fc519e38-826c-47a6-a534-67b32458974d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.466 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Received event network-vif-plugged-fc519e38-826c-47a6-a534-67b32458974d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.467 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "201e9977-0b48-484d-8463-4ff8484498bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.467 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.467 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.467 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] No waiting events found dispatching network-vif-plugged-fc519e38-826c-47a6-a534-67b32458974d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.467 2 WARNING nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Received unexpected event network-vif-plugged-fc519e38-826c-47a6-a534-67b32458974d for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.519 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.520 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4030MB free_disk=20.784870147705078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.521 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.521 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.650 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 201e9977-0b48-484d-8463-4ff8484498bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.651 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 3ae8ab55-a114-4284-9d2d-e70ba073cb66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.651 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.651 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.655 2 DEBUG oslo_concurrency.lockutils [None req-564591b7-ea17-4007-93b9-4cf305d87ca9 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.728 2 DEBUG nova.network.neutron [-] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:14 np0005466031 nova_compute[235803]: 2025-10-02 12:59:14.773 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:59:15 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1857352639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.246 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.253 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.458 2 DEBUG nova.compute.manager [req-82587523-42ed-4e74-94c1-b29c6f6cfa7a req-2a281526-453b-4a6e-b40c-4b5f8aeb844a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Received event network-vif-deleted-fc519e38-826c-47a6-a534-67b32458974d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.459 2 INFO nova.compute.manager [req-82587523-42ed-4e74-94c1-b29c6f6cfa7a req-2a281526-453b-4a6e-b40c-4b5f8aeb844a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Neutron deleted interface fc519e38-826c-47a6-a534-67b32458974d; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.460 2 DEBUG nova.network.neutron [req-82587523-42ed-4e74-94c1-b29c6f6cfa7a req-2a281526-453b-4a6e-b40c-4b5f8aeb844a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.511 2 INFO nova.compute.manager [-] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Took 2.41 seconds to deallocate network for instance.#033[00m
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.563 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.615 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.616 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.618 2 DEBUG nova.compute.manager [req-82587523-42ed-4e74-94c1-b29c6f6cfa7a req-2a281526-453b-4a6e-b40c-4b5f8aeb844a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Detach interface failed, port_id=fc519e38-826c-47a6-a534-67b32458974d, reason: Instance 201e9977-0b48-484d-8463-4ff8484498bf could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.621 2 DEBUG oslo_concurrency.lockutils [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.621 2 DEBUG oslo_concurrency.lockutils [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:15 np0005466031 nova_compute[235803]: 2025-10-02 12:59:15.692 2 DEBUG oslo_concurrency.processutils [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:15.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:15.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:59:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/505408071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.164 2 DEBUG oslo_concurrency.processutils [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.170 2 DEBUG nova.compute.provider_tree [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.287 2 DEBUG nova.scheduler.client.report [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.341 2 DEBUG oslo_concurrency.lockutils [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.355 2 DEBUG oslo_concurrency.lockutils [None req-d7c9ea79-6a0d-4d33-80e8-0f518768ee6b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.355 2 DEBUG oslo_concurrency.lockutils [None req-d7c9ea79-6a0d-4d33-80e8-0f518768ee6b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.356 2 DEBUG nova.compute.manager [None req-d7c9ea79-6a0d-4d33-80e8-0f518768ee6b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.358 2 DEBUG nova.compute.manager [None req-d7c9ea79-6a0d-4d33-80e8-0f518768ee6b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.359 2 DEBUG nova.objects.instance [None req-d7c9ea79-6a0d-4d33-80e8-0f518768ee6b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'flavor' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.388 2 INFO nova.scheduler.client.report [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance 201e9977-0b48-484d-8463-4ff8484498bf#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.419 2 DEBUG nova.virt.libvirt.driver [None req-d7c9ea79-6a0d-4d33-80e8-0f518768ee6b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.580 2 DEBUG oslo_concurrency.lockutils [None req-b1d4c22b-7653-4385-8d55-c15a5861be83 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "201e9977-0b48-484d-8463-4ff8484498bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.616 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.616 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.835 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.836 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.836 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:59:16 np0005466031 nova_compute[235803]: 2025-10-02 12:59:16.836 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:17 np0005466031 nova_compute[235803]: 2025-10-02 12:59:17.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:17.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:17.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:18 np0005466031 kernel: tap7fa0e3d4-af (unregistering): left promiscuous mode
Oct  2 08:59:18 np0005466031 NetworkManager[44907]: <info>  [1759409958.8531] device (tap7fa0e3d4-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:59:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:18Z|00612|binding|INFO|Releasing lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 from this chassis (sb_readonly=0)
Oct  2 08:59:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:18Z|00613|binding|INFO|Setting lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 down in Southbound
Oct  2 08:59:18 np0005466031 nova_compute[235803]: 2025-10-02 12:59:18.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:18 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:18Z|00614|binding|INFO|Removing iface tap7fa0e3d4-af ovn-installed in OVS
Oct  2 08:59:18 np0005466031 nova_compute[235803]: 2025-10-02 12:59:18.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:18.882 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:3f:6f 10.100.0.3'], port_security=['fa:16:3e:4f:3f:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3ae8ab55-a114-4284-9d2d-e70ba073cb66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf287730-8b39-470a-9870-d19a70f15c4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b15f29eb32d4c5cba98baa238cc12e1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cfe3e2cd-013f-433d-b677-80f8e7ef6a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205bb72b-7c7b-4eea-8f2e-e72a1fd482ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:59:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:18.883 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 in datapath cf287730-8b39-470a-9870-d19a70f15c4d unbound from our chassis#033[00m
Oct  2 08:59:18 np0005466031 nova_compute[235803]: 2025-10-02 12:59:18.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:18.887 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf287730-8b39-470a-9870-d19a70f15c4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:59:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:18.888 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[82ef45a3-2146-417b-b6a2-894d2f81b636]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:18.889 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d namespace which is not needed anymore#033[00m
Oct  2 08:59:18 np0005466031 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Oct  2 08:59:18 np0005466031 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d000000a1.scope: Consumed 15.550s CPU time.
Oct  2 08:59:18 np0005466031 systemd-machined[192227]: Machine qemu-71-instance-000000a1 terminated.
Oct  2 08:59:19 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[306473]: [NOTICE]   (306477) : haproxy version is 2.8.14-c23fe91
Oct  2 08:59:19 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[306473]: [NOTICE]   (306477) : path to executable is /usr/sbin/haproxy
Oct  2 08:59:19 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[306473]: [WARNING]  (306477) : Exiting Master process...
Oct  2 08:59:19 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[306473]: [ALERT]    (306477) : Current worker (306479) exited with code 143 (Terminated)
Oct  2 08:59:19 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[306473]: [WARNING]  (306477) : All workers exited. Exiting... (0)
Oct  2 08:59:19 np0005466031 systemd[1]: libpod-16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150.scope: Deactivated successfully.
Oct  2 08:59:19 np0005466031 podman[307134]: 2025-10-02 12:59:19.01233311 +0000 UTC m=+0.042779863 container died 16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:59:19 np0005466031 systemd[1]: var-lib-containers-storage-overlay-5a625647d938d90b0482d585c62509c78d7e422974a3c2bcb16b6b86de558598-merged.mount: Deactivated successfully.
Oct  2 08:59:19 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150-userdata-shm.mount: Deactivated successfully.
Oct  2 08:59:19 np0005466031 podman[307134]: 2025-10-02 12:59:19.047122853 +0000 UTC m=+0.077569606 container cleanup 16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:59:19 np0005466031 systemd[1]: libpod-conmon-16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150.scope: Deactivated successfully.
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:19 np0005466031 podman[307165]: 2025-10-02 12:59:19.150520812 +0000 UTC m=+0.080783119 container remove 16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:59:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:19.156 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c56de387-712e-4213-a9d6-9feb1a74c03d]: (4, ('Thu Oct  2 12:59:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d (16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150)\n16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150\nThu Oct  2 12:59:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d (16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150)\n16e709af2c9a7ba4b8f848b634efb4af071011a33c46592c1cd93ee40dd3f150\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:19.159 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd7a6a0-7b15-454e-a5b5-1c7563ff72b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:19.160 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf287730-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:19 np0005466031 kernel: tapcf287730-80: left promiscuous mode
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:19.182 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[18e5d91f-4ae7-469f-b9b3-a31a3b22661d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:19.218 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0160a11d-804a-4907-8417-414104ade3c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:19.219 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[68a3f456-c4ed-46c0-924c-07e9ddfbaf1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:19.238 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c7006937-e86f-4e03-8dcd-89dd9b63db71]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776714, 'reachable_time': 31704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307193, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:19.240 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:59:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:19.240 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[95a5a1c3-a3b9-48a3-a368-ad22f1ba7fc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:19 np0005466031 systemd[1]: run-netns-ovnmeta\x2dcf287730\x2d8b39\x2d470a\x2d9870\x2dd19a70f15c4d.mount: Deactivated successfully.
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.293 2 DEBUG nova.compute.manager [req-3bc6aa1b-8c5c-41a6-a1f3-a3861e5b4043 req-a520a5bd-a5f1-4238-8011-0336b068a6ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-unplugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.294 2 DEBUG oslo_concurrency.lockutils [req-3bc6aa1b-8c5c-41a6-a1f3-a3861e5b4043 req-a520a5bd-a5f1-4238-8011-0336b068a6ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.294 2 DEBUG oslo_concurrency.lockutils [req-3bc6aa1b-8c5c-41a6-a1f3-a3861e5b4043 req-a520a5bd-a5f1-4238-8011-0336b068a6ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.294 2 DEBUG oslo_concurrency.lockutils [req-3bc6aa1b-8c5c-41a6-a1f3-a3861e5b4043 req-a520a5bd-a5f1-4238-8011-0336b068a6ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.295 2 DEBUG nova.compute.manager [req-3bc6aa1b-8c5c-41a6-a1f3-a3861e5b4043 req-a520a5bd-a5f1-4238-8011-0336b068a6ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] No waiting events found dispatching network-vif-unplugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.295 2 WARNING nova.compute.manager [req-3bc6aa1b-8c5c-41a6-a1f3-a3861e5b4043 req-a520a5bd-a5f1-4238-8011-0336b068a6ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received unexpected event network-vif-unplugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for instance with vm_state active and task_state powering-off.#033[00m
Oct  2 08:59:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.436 2 INFO nova.virt.libvirt.driver [None req-d7c9ea79-6a0d-4d33-80e8-0f518768ee6b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.441 2 INFO nova.virt.libvirt.driver [-] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance destroyed successfully.#033[00m
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.442 2 DEBUG nova.objects.instance [None req-d7c9ea79-6a0d-4d33-80e8-0f518768ee6b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.469 2 DEBUG nova.compute.manager [None req-d7c9ea79-6a0d-4d33-80e8-0f518768ee6b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:19 np0005466031 nova_compute[235803]: 2025-10-02 12:59:19.536 2 DEBUG oslo_concurrency.lockutils [None req-d7c9ea79-6a0d-4d33-80e8-0f518768ee6b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:19.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:59:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:19.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:59:20 np0005466031 nova_compute[235803]: 2025-10-02 12:59:20.232 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Updating instance_info_cache with network_info: [{"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:20 np0005466031 nova_compute[235803]: 2025-10-02 12:59:20.270 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:59:20 np0005466031 nova_compute[235803]: 2025-10-02 12:59:20.271 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:59:20 np0005466031 nova_compute[235803]: 2025-10-02 12:59:20.271 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:20 np0005466031 nova_compute[235803]: 2025-10-02 12:59:20.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:20 np0005466031 nova_compute[235803]: 2025-10-02 12:59:20.430 2 DEBUG nova.objects.instance [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'flavor' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:20 np0005466031 nova_compute[235803]: 2025-10-02 12:59:20.455 2 DEBUG oslo_concurrency.lockutils [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:59:20 np0005466031 nova_compute[235803]: 2025-10-02 12:59:20.455 2 DEBUG oslo_concurrency.lockutils [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquired lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:59:20 np0005466031 nova_compute[235803]: 2025-10-02 12:59:20.455 2 DEBUG nova.network.neutron [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:59:20 np0005466031 nova_compute[235803]: 2025-10-02 12:59:20.456 2 DEBUG nova.objects.instance [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'info_cache' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:21 np0005466031 nova_compute[235803]: 2025-10-02 12:59:21.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:21 np0005466031 nova_compute[235803]: 2025-10-02 12:59:21.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:59:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:21.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:21.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:22 np0005466031 nova_compute[235803]: 2025-10-02 12:59:22.388 2 DEBUG nova.compute.manager [req-0dd097b0-cfe9-45ad-9c6d-b273af234794 req-c3691242-ded4-4415-85f2-e5b87eae46da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:22 np0005466031 nova_compute[235803]: 2025-10-02 12:59:22.389 2 DEBUG oslo_concurrency.lockutils [req-0dd097b0-cfe9-45ad-9c6d-b273af234794 req-c3691242-ded4-4415-85f2-e5b87eae46da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:22 np0005466031 nova_compute[235803]: 2025-10-02 12:59:22.389 2 DEBUG oslo_concurrency.lockutils [req-0dd097b0-cfe9-45ad-9c6d-b273af234794 req-c3691242-ded4-4415-85f2-e5b87eae46da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:22 np0005466031 nova_compute[235803]: 2025-10-02 12:59:22.390 2 DEBUG oslo_concurrency.lockutils [req-0dd097b0-cfe9-45ad-9c6d-b273af234794 req-c3691242-ded4-4415-85f2-e5b87eae46da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:22 np0005466031 nova_compute[235803]: 2025-10-02 12:59:22.390 2 DEBUG nova.compute.manager [req-0dd097b0-cfe9-45ad-9c6d-b273af234794 req-c3691242-ded4-4415-85f2-e5b87eae46da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] No waiting events found dispatching network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:22 np0005466031 nova_compute[235803]: 2025-10-02 12:59:22.390 2 WARNING nova.compute.manager [req-0dd097b0-cfe9-45ad-9c6d-b273af234794 req-c3691242-ded4-4415-85f2-e5b87eae46da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received unexpected event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct  2 08:59:22 np0005466031 nova_compute[235803]: 2025-10-02 12:59:22.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:23.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:23.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:23 np0005466031 nova_compute[235803]: 2025-10-02 12:59:23.921 2 DEBUG nova.network.neutron [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Updating instance_info_cache with network_info: [{"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:23 np0005466031 nova_compute[235803]: 2025-10-02 12:59:23.955 2 DEBUG oslo_concurrency.lockutils [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Releasing lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:59:23 np0005466031 nova_compute[235803]: 2025-10-02 12:59:23.984 2 INFO nova.virt.libvirt.driver [-] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance destroyed successfully.#033[00m
Oct  2 08:59:23 np0005466031 nova_compute[235803]: 2025-10-02 12:59:23.984 2 DEBUG nova.objects.instance [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.008 2 DEBUG nova.objects.instance [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'resources' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.029 2 DEBUG nova.virt.libvirt.vif [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:58:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-197733369',display_name='tempest-AttachVolumeTestJSON-server-197733369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-197733369',id=161,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0vPcwtZqWNoz35xlLL8rA9I5zzmnpaTwEg2lG1uREaylQJtWDQ7C8ts/WLwGO12hcHJZb6T5z5A5i0feX0Swe4SvSwLaV8nuj55Atu7NPM6sp4Qjn/LDy82DX01KQOsw==',key_name='tempest-keypair-1518482220',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:58:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0b15f29eb32d4c5cba98baa238cc12e1',ramdisk_id='',reservation_id='r-7y34l70i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-68983480',owner_user_name='tempest-AttachVolumeTestJSON-68983480-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:59:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57a1608ca1fc4bef8b6bc6ad68be3999',uuid=3ae8ab55-a114-4284-9d2d-e70ba073cb66,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.030 2 DEBUG nova.network.os_vif_util [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converting VIF {"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.031 2 DEBUG nova.network.os_vif_util [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.031 2 DEBUG os_vif [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.033 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fa0e3d4-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.038 2 INFO os_vif [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af')#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.046 2 DEBUG nova.virt.libvirt.driver [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Start _get_guest_xml network_info=[{"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-73427f9e-6c2e-43a3-a2f1-b8df606e61c3', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '73427f9e-6c2e-43a3-a2f1-b8df606e61c3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '3ae8ab55-a114-4284-9d2d-e70ba073cb66', 'attached_at': '', 'detached_at': '', 'volume_id': '73427f9e-6c2e-43a3-a2f1-b8df606e61c3', 'serial': '73427f9e-6c2e-43a3-a2f1-b8df606e61c3'}, 'attachment_id': '31bd947d-5e0a-4d4c-8968-c44c2406b07f', 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'guest_format': None, 'boot_index': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.048 2 WARNING nova.virt.libvirt.driver [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.058 2 DEBUG nova.virt.libvirt.host [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.060 2 DEBUG nova.virt.libvirt.host [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.068 2 DEBUG nova.virt.libvirt.host [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.069 2 DEBUG nova.virt.libvirt.host [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.070 2 DEBUG nova.virt.libvirt.driver [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.070 2 DEBUG nova.virt.hardware [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.071 2 DEBUG nova.virt.hardware [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.071 2 DEBUG nova.virt.hardware [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.071 2 DEBUG nova.virt.hardware [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.071 2 DEBUG nova.virt.hardware [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.071 2 DEBUG nova.virt.hardware [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.072 2 DEBUG nova.virt.hardware [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.072 2 DEBUG nova.virt.hardware [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.072 2 DEBUG nova.virt.hardware [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.072 2 DEBUG nova.virt.hardware [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.073 2 DEBUG nova.virt.hardware [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.073 2 DEBUG nova.objects.instance [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.098 2 DEBUG oslo_concurrency.processutils [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:59:24 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1884512294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.528 2 DEBUG oslo_concurrency.processutils [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:24 np0005466031 nova_compute[235803]: 2025-10-02 12:59:24.569 2 DEBUG oslo_concurrency.processutils [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:59:24 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2065133054' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.015 2 DEBUG oslo_concurrency.processutils [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.058 2 DEBUG nova.virt.libvirt.vif [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:58:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-197733369',display_name='tempest-AttachVolumeTestJSON-server-197733369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-197733369',id=161,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0vPcwtZqWNoz35xlLL8rA9I5zzmnpaTwEg2lG1uREaylQJtWDQ7C8ts/WLwGO12hcHJZb6T5z5A5i0feX0Swe4SvSwLaV8nuj55Atu7NPM6sp4Qjn/LDy82DX01KQOsw==',key_name='tempest-keypair-1518482220',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:58:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0b15f29eb32d4c5cba98baa238cc12e1',ramdisk_id='',reservation_id='r-7y34l70i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-68983480',owner_user_name='tempest-AttachVolumeTestJSON-68983480-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:59:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57a1608ca1fc4bef8b6bc6ad68be3999',uuid=3ae8ab55-a114-4284-9d2d-e70ba073cb66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.059 2 DEBUG nova.network.os_vif_util [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converting VIF {"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.059 2 DEBUG nova.network.os_vif_util [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.060 2 DEBUG nova.objects.instance [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.084 2 DEBUG nova.virt.libvirt.driver [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  <uuid>3ae8ab55-a114-4284-9d2d-e70ba073cb66</uuid>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  <name>instance-000000a1</name>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <nova:name>tempest-AttachVolumeTestJSON-server-197733369</nova:name>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 12:59:24</nova:creationTime>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <nova:user uuid="57a1608ca1fc4bef8b6bc6ad68be3999">tempest-AttachVolumeTestJSON-68983480-project-member</nova:user>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <nova:project uuid="0b15f29eb32d4c5cba98baa238cc12e1">tempest-AttachVolumeTestJSON-68983480</nova:project>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <nova:port uuid="7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <system>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <entry name="serial">3ae8ab55-a114-4284-9d2d-e70ba073cb66</entry>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <entry name="uuid">3ae8ab55-a114-4284-9d2d-e70ba073cb66</entry>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    </system>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  <os>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  </os>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  <features>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  </features>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  </clock>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  <devices>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk.config">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-73427f9e-6c2e-43a3-a2f1-b8df606e61c3">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      </source>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      </auth>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <serial>73427f9e-6c2e-43a3-a2f1-b8df606e61c3</serial>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    </disk>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:4f:3f:6f"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <target dev="tap7fa0e3d4-af"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    </interface>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/3ae8ab55-a114-4284-9d2d-e70ba073cb66/console.log" append="off"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    </serial>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <video>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    </video>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <input type="keyboard" bus="usb"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    </rng>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 08:59:25 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 08:59:25 np0005466031 nova_compute[235803]:  </devices>
Oct  2 08:59:25 np0005466031 nova_compute[235803]: </domain>
Oct  2 08:59:25 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.085 2 DEBUG nova.virt.libvirt.driver [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.086 2 DEBUG nova.virt.libvirt.driver [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.086 2 DEBUG nova.virt.libvirt.driver [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.087 2 DEBUG nova.virt.libvirt.vif [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:58:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-197733369',display_name='tempest-AttachVolumeTestJSON-server-197733369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-197733369',id=161,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0vPcwtZqWNoz35xlLL8rA9I5zzmnpaTwEg2lG1uREaylQJtWDQ7C8ts/WLwGO12hcHJZb6T5z5A5i0feX0Swe4SvSwLaV8nuj55Atu7NPM6sp4Qjn/LDy82DX01KQOsw==',key_name='tempest-keypair-1518482220',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:58:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='0b15f29eb32d4c5cba98baa238cc12e1',ramdisk_id='',reservation_id='r-7y34l70i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-68983480',owner_user_name='tempest-AttachVolumeTestJSON-68983480-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:59:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57a1608ca1fc4bef8b6bc6ad68be3999',uuid=3ae8ab55-a114-4284-9d2d-e70ba073cb66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.087 2 DEBUG nova.network.os_vif_util [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converting VIF {"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.087 2 DEBUG nova.network.os_vif_util [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.088 2 DEBUG os_vif [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.089 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.089 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fa0e3d4-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7fa0e3d4-af, col_values=(('external_ids', {'iface-id': '7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:3f:6f', 'vm-uuid': '3ae8ab55-a114-4284-9d2d-e70ba073cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 NetworkManager[44907]: <info>  [1759409965.0958] manager: (tap7fa0e3d4-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.101 2 INFO os_vif [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af')#033[00m
Oct  2 08:59:25 np0005466031 kernel: tap7fa0e3d4-af: entered promiscuous mode
Oct  2 08:59:25 np0005466031 NetworkManager[44907]: <info>  [1759409965.1730] manager: (tap7fa0e3d4-af): new Tun device (/org/freedesktop/NetworkManager/Devices/277)
Oct  2 08:59:25 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:25Z|00615|binding|INFO|Claiming lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for this chassis.
Oct  2 08:59:25 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:25Z|00616|binding|INFO|7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101: Claiming fa:16:3e:4f:3f:6f 10.100.0.3
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 NetworkManager[44907]: <info>  [1759409965.1879] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 NetworkManager[44907]: <info>  [1759409965.1887] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Oct  2 08:59:25 np0005466031 systemd-udevd[307271]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:59:25 np0005466031 NetworkManager[44907]: <info>  [1759409965.2084] device (tap7fa0e3d4-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.207 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:3f:6f 10.100.0.3'], port_security=['fa:16:3e:4f:3f:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3ae8ab55-a114-4284-9d2d-e70ba073cb66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf287730-8b39-470a-9870-d19a70f15c4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b15f29eb32d4c5cba98baa238cc12e1', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'cfe3e2cd-013f-433d-b677-80f8e7ef6a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205bb72b-7c7b-4eea-8f2e-e72a1fd482ed, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.208 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 in datapath cf287730-8b39-470a-9870-d19a70f15c4d bound to our chassis#033[00m
Oct  2 08:59:25 np0005466031 NetworkManager[44907]: <info>  [1759409965.2094] device (tap7fa0e3d4-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.211 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf287730-8b39-470a-9870-d19a70f15c4d#033[00m
Oct  2 08:59:25 np0005466031 systemd-machined[192227]: New machine qemu-72-instance-000000a1.
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.221 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[302f4fa8-45ae-4935-ae60-6bcbc5ff9644]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.222 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf287730-81 in ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.223 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf287730-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.223 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e35416-f766-4c35-a728-04094ce2c252]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.224 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2d00b6-6b24-44f8-91f1-06bb47ad5ad2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.237 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[99426aba-6cbc-4a98-8b4d-e9366e1b2cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.262 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[888fd12f-9733-4e6c-b20b-0c8c62f8f489]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.292 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[50b12a1d-9d36-46e2-98d2-f468d556fec4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 NetworkManager[44907]: <info>  [1759409965.3018] manager: (tapcf287730-80): new Veth device (/org/freedesktop/NetworkManager/Devices/280)
Oct  2 08:59:25 np0005466031 systemd[1]: Started Virtual Machine qemu-72-instance-000000a1.
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.301 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7632564c-e41c-4b18-b67a-a533f9a508b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 systemd-udevd[307276]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.334 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c9adbc0e-1f60-4a91-a2e3-26d0724db614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.337 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[01da0b03-7f6a-4350-a436-5c579bb594e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 NetworkManager[44907]: <info>  [1759409965.3648] device (tapcf287730-80): carrier: link connected
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.370 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[66bd0020-27b2-4346-9346-5ac2d8358be3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.389 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c94bb87b-db4e-4c8d-991c-dcb952cf47b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf287730-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:e9:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782095, 'reachable_time': 41306, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307307, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.425 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[26fe7789-3065-44f9-a15e-bc63c28f4fd2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:e9a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782095, 'tstamp': 782095}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307308, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.442 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[172c9992-08c3-41be-8c67-ecf8fcb749fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf287730-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:e9:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782095, 'reachable_time': 41306, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307309, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:25Z|00617|binding|INFO|Setting lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 ovn-installed in OVS
Oct  2 08:59:25 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:25Z|00618|binding|INFO|Setting lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 up in Southbound
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.473 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0a59fe82-9113-4a74-912b-4303c029ba5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.544 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[50d3bd1b-f929-45be-ab98-649d6ba9705e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.545 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf287730-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.546 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.546 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf287730-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 NetworkManager[44907]: <info>  [1759409965.5490] manager: (tapcf287730-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Oct  2 08:59:25 np0005466031 kernel: tapcf287730-80: entered promiscuous mode
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.553 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf287730-80, col_values=(('external_ids', {'iface-id': '9ae1fd94-b5f3-4333-9533-d979eb84ea8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:25Z|00619|binding|INFO|Releasing lport 9ae1fd94-b5f3-4333-9533-d979eb84ea8f from this chassis (sb_readonly=0)
Oct  2 08:59:25 np0005466031 nova_compute[235803]: 2025-10-02 12:59:25.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.569 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf287730-8b39-470a-9870-d19a70f15c4d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf287730-8b39-470a-9870-d19a70f15c4d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.570 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ef371656-d3b9-48d1-8ef2-348fb257bcd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.571 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-cf287730-8b39-470a-9870-d19a70f15c4d
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/cf287730-8b39-470a-9870-d19a70f15c4d.pid.haproxy
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID cf287730-8b39-470a-9870-d19a70f15c4d
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.573 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'env', 'PROCESS_TAG=haproxy-cf287730-8b39-470a-9870-d19a70f15c4d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf287730-8b39-470a-9870-d19a70f15c4d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:59:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:25.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:25.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.868 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.870 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 12:59:25.870 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:25 np0005466031 podman[307395]: 2025-10-02 12:59:25.96086184 +0000 UTC m=+0.052876505 container create 4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:59:26 np0005466031 systemd[1]: Started libpod-conmon-4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7.scope.
Oct  2 08:59:26 np0005466031 podman[307395]: 2025-10-02 12:59:25.933327296 +0000 UTC m=+0.025341981 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:59:26 np0005466031 systemd[1]: Started libcrun container.
Oct  2 08:59:26 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a853a68b8989174ecf77f53153c6c1ba34310b2d0ca97b59dc57af073c887b82/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:59:26 np0005466031 podman[307395]: 2025-10-02 12:59:26.063651801 +0000 UTC m=+0.155666456 container init 4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:59:26 np0005466031 podman[307395]: 2025-10-02 12:59:26.069032846 +0000 UTC m=+0.161047501 container start 4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:59:26 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[307417]: [NOTICE]   (307421) : New worker (307423) forked
Oct  2 08:59:26 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[307417]: [NOTICE]   (307421) : Loading success.
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.098 2 DEBUG nova.compute.manager [req-8290e85f-924a-46e4-8f65-a1ea2ef82ce5 req-ca87724e-e151-46d2-b8f9-6a21e2802b13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.098 2 DEBUG oslo_concurrency.lockutils [req-8290e85f-924a-46e4-8f65-a1ea2ef82ce5 req-ca87724e-e151-46d2-b8f9-6a21e2802b13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.099 2 DEBUG oslo_concurrency.lockutils [req-8290e85f-924a-46e4-8f65-a1ea2ef82ce5 req-ca87724e-e151-46d2-b8f9-6a21e2802b13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.099 2 DEBUG oslo_concurrency.lockutils [req-8290e85f-924a-46e4-8f65-a1ea2ef82ce5 req-ca87724e-e151-46d2-b8f9-6a21e2802b13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.099 2 DEBUG nova.compute.manager [req-8290e85f-924a-46e4-8f65-a1ea2ef82ce5 req-ca87724e-e151-46d2-b8f9-6a21e2802b13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] No waiting events found dispatching network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.099 2 WARNING nova.compute.manager [req-8290e85f-924a-46e4-8f65-a1ea2ef82ce5 req-ca87724e-e151-46d2-b8f9-6a21e2802b13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received unexpected event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.381 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for 3ae8ab55-a114-4284-9d2d-e70ba073cb66 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.382 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409966.3811657, 3ae8ab55-a114-4284-9d2d-e70ba073cb66 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.382 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.384 2 DEBUG nova.compute.manager [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.387 2 INFO nova.virt.libvirt.driver [-] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance rebooted successfully.#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.387 2 DEBUG nova.compute.manager [None req-21437243-81b2-41d6-82d1-3ada2dca6412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.425 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.427 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.471 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759409966.3823192, 3ae8ab55-a114-4284-9d2d-e70ba073cb66 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.471 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] VM Started (Lifecycle Event)#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.504 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:26 np0005466031 nova_compute[235803]: 2025-10-02 12:59:26.507 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:59:27 np0005466031 nova_compute[235803]: 2025-10-02 12:59:27.365 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409952.3636217, 201e9977-0b48-484d-8463-4ff8484498bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:59:27 np0005466031 nova_compute[235803]: 2025-10-02 12:59:27.365 2 INFO nova.compute.manager [-] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:59:27 np0005466031 nova_compute[235803]: 2025-10-02 12:59:27.402 2 DEBUG nova.compute.manager [None req-f4047757-1074-4bb5-a294-33b322c1e396 - - - - - -] [instance: 201e9977-0b48-484d-8463-4ff8484498bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:27.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:27.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:28 np0005466031 nova_compute[235803]: 2025-10-02 12:59:28.384 2 DEBUG nova.compute.manager [req-d9bd3754-41c0-426d-9fdb-d1af9610a781 req-4e95221f-415d-40e4-9102-548e58498963 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:28 np0005466031 nova_compute[235803]: 2025-10-02 12:59:28.384 2 DEBUG oslo_concurrency.lockutils [req-d9bd3754-41c0-426d-9fdb-d1af9610a781 req-4e95221f-415d-40e4-9102-548e58498963 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:28 np0005466031 nova_compute[235803]: 2025-10-02 12:59:28.384 2 DEBUG oslo_concurrency.lockutils [req-d9bd3754-41c0-426d-9fdb-d1af9610a781 req-4e95221f-415d-40e4-9102-548e58498963 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:28 np0005466031 nova_compute[235803]: 2025-10-02 12:59:28.384 2 DEBUG oslo_concurrency.lockutils [req-d9bd3754-41c0-426d-9fdb-d1af9610a781 req-4e95221f-415d-40e4-9102-548e58498963 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:28 np0005466031 nova_compute[235803]: 2025-10-02 12:59:28.385 2 DEBUG nova.compute.manager [req-d9bd3754-41c0-426d-9fdb-d1af9610a781 req-4e95221f-415d-40e4-9102-548e58498963 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] No waiting events found dispatching network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:28 np0005466031 nova_compute[235803]: 2025-10-02 12:59:28.385 2 WARNING nova.compute.manager [req-d9bd3754-41c0-426d-9fdb-d1af9610a781 req-4e95221f-415d-40e4-9102-548e58498963 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received unexpected event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:59:28 np0005466031 podman[307457]: 2025-10-02 12:59:28.526446628 +0000 UTC m=+0.082270542 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:59:28 np0005466031 podman[307458]: 2025-10-02 12:59:28.531606926 +0000 UTC m=+0.085284428 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:59:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:29.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:59:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:29.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:59:30 np0005466031 nova_compute[235803]: 2025-10-02 12:59:30.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:30 np0005466031 nova_compute[235803]: 2025-10-02 12:59:30.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:31.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:59:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:31.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:59:33 np0005466031 podman[307529]: 2025-10-02 12:59:33.615121942 +0000 UTC m=+0.046010967 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:59:33 np0005466031 podman[307528]: 2025-10-02 12:59:33.615225155 +0000 UTC m=+0.049764205 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:59:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:33.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:33.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:35 np0005466031 nova_compute[235803]: 2025-10-02 12:59:35.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:35 np0005466031 nova_compute[235803]: 2025-10-02 12:59:35.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:35.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:35.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:37.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:37.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:39 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:39Z|00620|binding|INFO|Releasing lport 9ae1fd94-b5f3-4333-9533-d979eb84ea8f from this chassis (sb_readonly=0)
Oct  2 08:59:39 np0005466031 nova_compute[235803]: 2025-10-02 12:59:39.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:39.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:59:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:39.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:59:39 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:39Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:3f:6f 10.100.0.3
Oct  2 08:59:40 np0005466031 nova_compute[235803]: 2025-10-02 12:59:40.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:40 np0005466031 nova_compute[235803]: 2025-10-02 12:59:40.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:59:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:41.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:59:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:41.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:43.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:43.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:44 np0005466031 nova_compute[235803]: 2025-10-02 12:59:44.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:45 np0005466031 nova_compute[235803]: 2025-10-02 12:59:45.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:45 np0005466031 nova_compute[235803]: 2025-10-02 12:59:45.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:45.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:45.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:47.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 08:59:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:47.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 08:59:49 np0005466031 ovn_controller[132413]: 2025-10-02T12:59:49Z|00621|binding|INFO|Releasing lport 9ae1fd94-b5f3-4333-9533-d979eb84ea8f from this chassis (sb_readonly=0)
Oct  2 08:59:49 np0005466031 nova_compute[235803]: 2025-10-02 12:59:49.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:49.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:49.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:50 np0005466031 nova_compute[235803]: 2025-10-02 12:59:50.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:50 np0005466031 nova_compute[235803]: 2025-10-02 12:59:50.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:51.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:59:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:51.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:59:53 np0005466031 nova_compute[235803]: 2025-10-02 12:59:53.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:53.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:53.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:55 np0005466031 nova_compute[235803]: 2025-10-02 12:59:55.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:55 np0005466031 nova_compute[235803]: 2025-10-02 12:59:55.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:55.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:55.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:57.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:57.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:58 np0005466031 podman[307631]: 2025-10-02 12:59:58.655433365 +0000 UTC m=+0.084835855 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:59:58 np0005466031 podman[307632]: 2025-10-02 12:59:58.663888589 +0000 UTC m=+0.089820319 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:59:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:59.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 08:59:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:59.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:00 np0005466031 nova_compute[235803]: 2025-10-02 13:00:00.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:00 np0005466031 nova_compute[235803]: 2025-10-02 13:00:00.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:00 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 09:00:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:01.106 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:00:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:01.107 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.191 2 DEBUG oslo_concurrency.lockutils [None req-6306b336-920f-4b01-adf8-ddf114801d72 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.191 2 DEBUG oslo_concurrency.lockutils [None req-6306b336-920f-4b01-adf8-ddf114801d72 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.221 2 INFO nova.compute.manager [None req-6306b336-920f-4b01-adf8-ddf114801d72 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Detaching volume 73427f9e-6c2e-43a3-a2f1-b8df606e61c3#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.409 2 INFO nova.virt.block_device [None req-6306b336-920f-4b01-adf8-ddf114801d72 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Attempting to driver detach volume 73427f9e-6c2e-43a3-a2f1-b8df606e61c3 from mountpoint /dev/vdb#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.419 2 DEBUG nova.virt.libvirt.driver [None req-6306b336-920f-4b01-adf8-ddf114801d72 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Attempting to detach device vdb from instance 3ae8ab55-a114-4284-9d2d-e70ba073cb66 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.420 2 DEBUG nova.virt.libvirt.guest [None req-6306b336-920f-4b01-adf8-ddf114801d72 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:00:01 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-73427f9e-6c2e-43a3-a2f1-b8df606e61c3">
Oct  2 09:00:01 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:  <serial>73427f9e-6c2e-43a3-a2f1-b8df606e61c3</serial>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct  2 09:00:01 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:00:01 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.445 2 INFO nova.virt.libvirt.driver [None req-6306b336-920f-4b01-adf8-ddf114801d72 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully detached device vdb from instance 3ae8ab55-a114-4284-9d2d-e70ba073cb66 from the persistent domain config.#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.446 2 DEBUG nova.virt.libvirt.driver [None req-6306b336-920f-4b01-adf8-ddf114801d72 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3ae8ab55-a114-4284-9d2d-e70ba073cb66 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.446 2 DEBUG nova.virt.libvirt.guest [None req-6306b336-920f-4b01-adf8-ddf114801d72 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:00:01 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-73427f9e-6c2e-43a3-a2f1-b8df606e61c3">
Oct  2 09:00:01 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:  <serial>73427f9e-6c2e-43a3-a2f1-b8df606e61c3</serial>
Oct  2 09:00:01 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct  2 09:00:01 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:00:01 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.557 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Received event <DeviceRemovedEvent: 1759410001.5574024, 3ae8ab55-a114-4284-9d2d-e70ba073cb66 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.559 2 DEBUG nova.virt.libvirt.driver [None req-6306b336-920f-4b01-adf8-ddf114801d72 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3ae8ab55-a114-4284-9d2d-e70ba073cb66 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.562 2 INFO nova.virt.libvirt.driver [None req-6306b336-920f-4b01-adf8-ddf114801d72 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully detached device vdb from instance 3ae8ab55-a114-4284-9d2d-e70ba073cb66 from the live domain config.#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.832 2 DEBUG nova.objects.instance [None req-6306b336-920f-4b01-adf8-ddf114801d72 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'flavor' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:01.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:01.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.881 2 DEBUG oslo_concurrency.lockutils [None req-6306b336-920f-4b01-adf8-ddf114801d72 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:01 np0005466031 nova_compute[235803]: 2025-10-02 13:00:01.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:02.109 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:02 np0005466031 nova_compute[235803]: 2025-10-02 13:00:02.630 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:02 np0005466031 nova_compute[235803]: 2025-10-02 13:00:02.642 2 DEBUG oslo_concurrency.lockutils [None req-5fc99f67-2140-41f1-a262-5e1ed71bc412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:02 np0005466031 nova_compute[235803]: 2025-10-02 13:00:02.643 2 DEBUG oslo_concurrency.lockutils [None req-5fc99f67-2140-41f1-a262-5e1ed71bc412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:02 np0005466031 nova_compute[235803]: 2025-10-02 13:00:02.643 2 DEBUG nova.compute.manager [None req-5fc99f67-2140-41f1-a262-5e1ed71bc412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:02 np0005466031 nova_compute[235803]: 2025-10-02 13:00:02.647 2 DEBUG nova.compute.manager [None req-5fc99f67-2140-41f1-a262-5e1ed71bc412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 09:00:02 np0005466031 nova_compute[235803]: 2025-10-02 13:00:02.648 2 DEBUG nova.objects.instance [None req-5fc99f67-2140-41f1-a262-5e1ed71bc412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'flavor' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:02 np0005466031 nova_compute[235803]: 2025-10-02 13:00:02.670 2 DEBUG nova.virt.libvirt.driver [None req-5fc99f67-2140-41f1-a262-5e1ed71bc412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 09:00:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:03.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:03.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:04 np0005466031 podman[307682]: 2025-10-02 13:00:04.631585443 +0000 UTC m=+0.061633197 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:00:04 np0005466031 podman[307681]: 2025-10-02 13:00:04.64744079 +0000 UTC m=+0.081363476 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2)
Oct  2 09:00:05 np0005466031 kernel: tap7fa0e3d4-af (unregistering): left promiscuous mode
Oct  2 09:00:05 np0005466031 NetworkManager[44907]: <info>  [1759410005.0950] device (tap7fa0e3d4-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:00:05 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:05Z|00622|binding|INFO|Releasing lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 from this chassis (sb_readonly=0)
Oct  2 09:00:05 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:05Z|00623|binding|INFO|Setting lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 down in Southbound
Oct  2 09:00:05 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:05Z|00624|binding|INFO|Removing iface tap7fa0e3d4-af ovn-installed in OVS
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.113 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:3f:6f 10.100.0.3'], port_security=['fa:16:3e:4f:3f:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3ae8ab55-a114-4284-9d2d-e70ba073cb66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf287730-8b39-470a-9870-d19a70f15c4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b15f29eb32d4c5cba98baa238cc12e1', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'cfe3e2cd-013f-433d-b677-80f8e7ef6a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205bb72b-7c7b-4eea-8f2e-e72a1fd482ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.114 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 in datapath cf287730-8b39-470a-9870-d19a70f15c4d unbound from our chassis#033[00m
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.115 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf287730-8b39-470a-9870-d19a70f15c4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.116 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7bdaf0-1941-4423-ad5c-35b9f3001941]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.117 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d namespace which is not needed anymore#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:05 np0005466031 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Oct  2 09:00:05 np0005466031 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d000000a1.scope: Consumed 14.571s CPU time.
Oct  2 09:00:05 np0005466031 systemd-machined[192227]: Machine qemu-72-instance-000000a1 terminated.
Oct  2 09:00:05 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[307417]: [NOTICE]   (307421) : haproxy version is 2.8.14-c23fe91
Oct  2 09:00:05 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[307417]: [NOTICE]   (307421) : path to executable is /usr/sbin/haproxy
Oct  2 09:00:05 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[307417]: [WARNING]  (307421) : Exiting Master process...
Oct  2 09:00:05 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[307417]: [WARNING]  (307421) : Exiting Master process...
Oct  2 09:00:05 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[307417]: [ALERT]    (307421) : Current worker (307423) exited with code 143 (Terminated)
Oct  2 09:00:05 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[307417]: [WARNING]  (307421) : All workers exited. Exiting... (0)
Oct  2 09:00:05 np0005466031 systemd[1]: libpod-4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7.scope: Deactivated successfully.
Oct  2 09:00:05 np0005466031 podman[307745]: 2025-10-02 13:00:05.275609958 +0000 UTC m=+0.064508010 container died 4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:00:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:00:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4081704541' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:00:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:00:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4081704541' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:00:05 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7-userdata-shm.mount: Deactivated successfully.
Oct  2 09:00:05 np0005466031 systemd[1]: var-lib-containers-storage-overlay-a853a68b8989174ecf77f53153c6c1ba34310b2d0ca97b59dc57af073c887b82-merged.mount: Deactivated successfully.
Oct  2 09:00:05 np0005466031 podman[307745]: 2025-10-02 13:00:05.410267028 +0000 UTC m=+0.199165080 container cleanup 4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 09:00:05 np0005466031 systemd[1]: libpod-conmon-4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7.scope: Deactivated successfully.
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:05 np0005466031 podman[307785]: 2025-10-02 13:00:05.485527836 +0000 UTC m=+0.054740798 container remove 4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.492 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cb75cc2a-b673-43f6-b529-0f89ebd90d31]: (4, ('Thu Oct  2 01:00:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d (4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7)\n4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7\nThu Oct  2 01:00:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d (4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7)\n4982b73e828eec7032a580f5c1dc64cf3c16eb91bc9835f1a3d1a13abe589cc7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.493 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c74db91f-739b-411b-a29f-12fdcf15adab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.495 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf287730-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:05 np0005466031 kernel: tapcf287730-80: left promiscuous mode
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.516 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc29e6b-5329-4ea6-aaf3-cfaac614af9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.538 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[36b23d61-8fc6-4956-80f7-e219d2080148]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.540 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3f8c7b-eaae-4953-8868-c1df4042c651]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.557 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[21b78840-85dc-47dd-895d-c3c1a9580bec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782087, 'reachable_time': 30241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307803, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:05 np0005466031 systemd[1]: run-netns-ovnmeta\x2dcf287730\x2d8b39\x2d470a\x2d9870\x2dd19a70f15c4d.mount: Deactivated successfully.
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.562 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:00:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:05.563 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[955ed19a-1a35-4ae5-afbd-bf5d8c9ef4a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.752 2 INFO nova.virt.libvirt.driver [None req-5fc99f67-2140-41f1-a262-5e1ed71bc412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.759 2 INFO nova.virt.libvirt.driver [-] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance destroyed successfully.#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.759 2 DEBUG nova.objects.instance [None req-5fc99f67-2140-41f1-a262-5e1ed71bc412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.781 2 DEBUG nova.compute.manager [None req-5fc99f67-2140-41f1-a262-5e1ed71bc412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.835 2 DEBUG nova.compute.manager [req-5bdb8085-14d0-4dcf-864d-40656994c30b req-98f3c995-4833-480d-88bb-a6b45f94975d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-unplugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.836 2 DEBUG oslo_concurrency.lockutils [req-5bdb8085-14d0-4dcf-864d-40656994c30b req-98f3c995-4833-480d-88bb-a6b45f94975d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.836 2 DEBUG oslo_concurrency.lockutils [req-5bdb8085-14d0-4dcf-864d-40656994c30b req-98f3c995-4833-480d-88bb-a6b45f94975d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.836 2 DEBUG oslo_concurrency.lockutils [req-5bdb8085-14d0-4dcf-864d-40656994c30b req-98f3c995-4833-480d-88bb-a6b45f94975d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.836 2 DEBUG nova.compute.manager [req-5bdb8085-14d0-4dcf-864d-40656994c30b req-98f3c995-4833-480d-88bb-a6b45f94975d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] No waiting events found dispatching network-vif-unplugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.837 2 WARNING nova.compute.manager [req-5bdb8085-14d0-4dcf-864d-40656994c30b req-98f3c995-4833-480d-88bb-a6b45f94975d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received unexpected event network-vif-unplugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for instance with vm_state active and task_state powering-off.#033[00m
Oct  2 09:00:05 np0005466031 nova_compute[235803]: 2025-10-02 13:00:05.843 2 DEBUG oslo_concurrency.lockutils [None req-5fc99f67-2140-41f1-a262-5e1ed71bc412 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:05.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:05.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:07 np0005466031 nova_compute[235803]: 2025-10-02 13:00:07.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:07 np0005466031 nova_compute[235803]: 2025-10-02 13:00:07.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:07.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:07.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:07 np0005466031 nova_compute[235803]: 2025-10-02 13:00:07.925 2 DEBUG nova.compute.manager [req-b5d4b4c5-ee4c-4e13-9809-a8a5fdbd8a39 req-bbebe24d-3e04-48c6-858a-5f635b5b8cc2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:07 np0005466031 nova_compute[235803]: 2025-10-02 13:00:07.925 2 DEBUG oslo_concurrency.lockutils [req-b5d4b4c5-ee4c-4e13-9809-a8a5fdbd8a39 req-bbebe24d-3e04-48c6-858a-5f635b5b8cc2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:07 np0005466031 nova_compute[235803]: 2025-10-02 13:00:07.925 2 DEBUG oslo_concurrency.lockutils [req-b5d4b4c5-ee4c-4e13-9809-a8a5fdbd8a39 req-bbebe24d-3e04-48c6-858a-5f635b5b8cc2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:07 np0005466031 nova_compute[235803]: 2025-10-02 13:00:07.925 2 DEBUG oslo_concurrency.lockutils [req-b5d4b4c5-ee4c-4e13-9809-a8a5fdbd8a39 req-bbebe24d-3e04-48c6-858a-5f635b5b8cc2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:07 np0005466031 nova_compute[235803]: 2025-10-02 13:00:07.926 2 DEBUG nova.compute.manager [req-b5d4b4c5-ee4c-4e13-9809-a8a5fdbd8a39 req-bbebe24d-3e04-48c6-858a-5f635b5b8cc2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] No waiting events found dispatching network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:00:07 np0005466031 nova_compute[235803]: 2025-10-02 13:00:07.926 2 WARNING nova.compute.manager [req-b5d4b4c5-ee4c-4e13-9809-a8a5fdbd8a39 req-bbebe24d-3e04-48c6-858a-5f635b5b8cc2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received unexpected event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 09:00:07 np0005466031 nova_compute[235803]: 2025-10-02 13:00:07.977 2 DEBUG nova.objects.instance [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'flavor' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:08 np0005466031 nova_compute[235803]: 2025-10-02 13:00:08.003 2 DEBUG oslo_concurrency.lockutils [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:00:08 np0005466031 nova_compute[235803]: 2025-10-02 13:00:08.003 2 DEBUG oslo_concurrency.lockutils [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquired lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:00:08 np0005466031 nova_compute[235803]: 2025-10-02 13:00:08.003 2 DEBUG nova.network.neutron [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:00:08 np0005466031 nova_compute[235803]: 2025-10-02 13:00:08.004 2 DEBUG nova.objects.instance [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'info_cache' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.339 2 DEBUG nova.network.neutron [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Updating instance_info_cache with network_info: [{"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.359 2 DEBUG oslo_concurrency.lockutils [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Releasing lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.389 2 INFO nova.virt.libvirt.driver [-] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance destroyed successfully.#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.389 2 DEBUG nova.objects.instance [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.402 2 DEBUG nova.objects.instance [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'resources' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.417 2 DEBUG nova.virt.libvirt.vif [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:58:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-197733369',display_name='tempest-AttachVolumeTestJSON-server-197733369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-197733369',id=161,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0vPcwtZqWNoz35xlLL8rA9I5zzmnpaTwEg2lG1uREaylQJtWDQ7C8ts/WLwGO12hcHJZb6T5z5A5i0feX0Swe4SvSwLaV8nuj55Atu7NPM6sp4Qjn/LDy82DX01KQOsw==',key_name='tempest-keypair-1518482220',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:58:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0b15f29eb32d4c5cba98baa238cc12e1',ramdisk_id='',reservation_id='r-7y34l70i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-68983480',owner_user_name='tempest-AttachVolumeTestJSON-68983480-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:00:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57a1608ca1fc4bef8b6bc6ad68be3999',uuid=3ae8ab55-a114-4284-9d2d-e70ba073cb66,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.418 2 DEBUG nova.network.os_vif_util [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converting VIF {"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.418 2 DEBUG nova.network.os_vif_util [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.419 2 DEBUG os_vif [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.420 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fa0e3d4-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.426 2 INFO os_vif [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af')#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.432 2 DEBUG nova.virt.libvirt.driver [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Start _get_guest_xml network_info=[{"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.435 2 WARNING nova.virt.libvirt.driver [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.439 2 DEBUG nova.virt.libvirt.host [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.440 2 DEBUG nova.virt.libvirt.host [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.444 2 DEBUG nova.virt.libvirt.host [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.444 2 DEBUG nova.virt.libvirt.host [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.446 2 DEBUG nova.virt.libvirt.driver [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.446 2 DEBUG nova.virt.hardware [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.446 2 DEBUG nova.virt.hardware [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.447 2 DEBUG nova.virt.hardware [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.447 2 DEBUG nova.virt.hardware [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.447 2 DEBUG nova.virt.hardware [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.447 2 DEBUG nova.virt.hardware [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.447 2 DEBUG nova.virt.hardware [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.448 2 DEBUG nova.virt.hardware [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.448 2 DEBUG nova.virt.hardware [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.448 2 DEBUG nova.virt.hardware [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.448 2 DEBUG nova.virt.hardware [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.449 2 DEBUG nova.objects.instance [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.468 2 DEBUG oslo_concurrency.processutils [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:00:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:09.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:00:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:09.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:00:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1424016671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.916 2 DEBUG oslo_concurrency.processutils [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:09 np0005466031 nova_compute[235803]: 2025-10-02 13:00:09.959 2 DEBUG oslo_concurrency.processutils [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:00:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/406634081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.521 2 DEBUG oslo_concurrency.processutils [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.523 2 DEBUG nova.virt.libvirt.vif [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:58:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-197733369',display_name='tempest-AttachVolumeTestJSON-server-197733369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-197733369',id=161,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0vPcwtZqWNoz35xlLL8rA9I5zzmnpaTwEg2lG1uREaylQJtWDQ7C8ts/WLwGO12hcHJZb6T5z5A5i0feX0Swe4SvSwLaV8nuj55Atu7NPM6sp4Qjn/LDy82DX01KQOsw==',key_name='tempest-keypair-1518482220',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:58:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0b15f29eb32d4c5cba98baa238cc12e1',ramdisk_id='',reservation_id='r-7y34l70i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-68983480',owner_user_name='tempest-AttachVolumeTestJSON-68983480-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:00:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57a1608ca1fc4bef8b6bc6ad68be3999',uuid=3ae8ab55-a114-4284-9d2d-e70ba073cb66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.524 2 DEBUG nova.network.os_vif_util [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converting VIF {"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.525 2 DEBUG nova.network.os_vif_util [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.526 2 DEBUG nova.objects.instance [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.541 2 DEBUG nova.virt.libvirt.driver [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  <uuid>3ae8ab55-a114-4284-9d2d-e70ba073cb66</uuid>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  <name>instance-000000a1</name>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <nova:name>tempest-AttachVolumeTestJSON-server-197733369</nova:name>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:00:09</nova:creationTime>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <nova:user uuid="57a1608ca1fc4bef8b6bc6ad68be3999">tempest-AttachVolumeTestJSON-68983480-project-member</nova:user>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <nova:project uuid="0b15f29eb32d4c5cba98baa238cc12e1">tempest-AttachVolumeTestJSON-68983480</nova:project>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <nova:port uuid="7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <entry name="serial">3ae8ab55-a114-4284-9d2d-e70ba073cb66</entry>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <entry name="uuid">3ae8ab55-a114-4284-9d2d-e70ba073cb66</entry>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/3ae8ab55-a114-4284-9d2d-e70ba073cb66_disk.config">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:4f:3f:6f"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <target dev="tap7fa0e3d4-af"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/3ae8ab55-a114-4284-9d2d-e70ba073cb66/console.log" append="off"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <input type="keyboard" bus="usb"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:00:10 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:00:10 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:00:10 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:00:10 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.543 2 DEBUG nova.virt.libvirt.driver [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.543 2 DEBUG nova.virt.libvirt.driver [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.544 2 DEBUG nova.virt.libvirt.vif [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:58:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-197733369',display_name='tempest-AttachVolumeTestJSON-server-197733369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-197733369',id=161,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0vPcwtZqWNoz35xlLL8rA9I5zzmnpaTwEg2lG1uREaylQJtWDQ7C8ts/WLwGO12hcHJZb6T5z5A5i0feX0Swe4SvSwLaV8nuj55Atu7NPM6sp4Qjn/LDy82DX01KQOsw==',key_name='tempest-keypair-1518482220',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:58:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='0b15f29eb32d4c5cba98baa238cc12e1',ramdisk_id='',reservation_id='r-7y34l70i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-68983480',owner_user_name='tempest-AttachVolumeTestJSON-68983480-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:00:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57a1608ca1fc4bef8b6bc6ad68be3999',uuid=3ae8ab55-a114-4284-9d2d-e70ba073cb66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.544 2 DEBUG nova.network.os_vif_util [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converting VIF {"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.545 2 DEBUG nova.network.os_vif_util [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.545 2 DEBUG os_vif [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.546 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.547 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fa0e3d4-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7fa0e3d4-af, col_values=(('external_ids', {'iface-id': '7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:3f:6f', 'vm-uuid': '3ae8ab55-a114-4284-9d2d-e70ba073cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:10 np0005466031 NetworkManager[44907]: <info>  [1759410010.5526] manager: (tap7fa0e3d4-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.557 2 INFO os_vif [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af')#033[00m
Oct  2 09:00:10 np0005466031 kernel: tap7fa0e3d4-af: entered promiscuous mode
Oct  2 09:00:10 np0005466031 NetworkManager[44907]: <info>  [1759410010.6282] manager: (tap7fa0e3d4-af): new Tun device (/org/freedesktop/NetworkManager/Devices/283)
Oct  2 09:00:10 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:10Z|00625|binding|INFO|Claiming lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for this chassis.
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:10 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:10Z|00626|binding|INFO|7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101: Claiming fa:16:3e:4f:3f:6f 10.100.0.3
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.638 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:3f:6f 10.100.0.3'], port_security=['fa:16:3e:4f:3f:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3ae8ab55-a114-4284-9d2d-e70ba073cb66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf287730-8b39-470a-9870-d19a70f15c4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b15f29eb32d4c5cba98baa238cc12e1', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'cfe3e2cd-013f-433d-b677-80f8e7ef6a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205bb72b-7c7b-4eea-8f2e-e72a1fd482ed, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.640 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 in datapath cf287730-8b39-470a-9870-d19a70f15c4d bound to our chassis#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.641 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf287730-8b39-470a-9870-d19a70f15c4d#033[00m
Oct  2 09:00:10 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:10Z|00627|binding|INFO|Setting lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 ovn-installed in OVS
Oct  2 09:00:10 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:10Z|00628|binding|INFO|Setting lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 up in Southbound
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.654 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3a417196-6dde-4a87-95ea-9c360b1b2d42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.654 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf287730-81 in ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.656 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf287730-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.656 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c6748c3c-e24c-4da3-bf7d-9a7e839c7c5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.657 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd5cd83-842f-4b84-abbc-20a716d8ce9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 systemd-machined[192227]: New machine qemu-73-instance-000000a1.
Oct  2 09:00:10 np0005466031 systemd-udevd[307936]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.667 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b0a337-ee9e-446a-bf83-bb23d7c9701b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 systemd[1]: Started Virtual Machine qemu-73-instance-000000a1.
Oct  2 09:00:10 np0005466031 NetworkManager[44907]: <info>  [1759410010.6758] device (tap7fa0e3d4-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:00:10 np0005466031 NetworkManager[44907]: <info>  [1759410010.6767] device (tap7fa0e3d4-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.691 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cd376b-12cd-4654-8acb-7ea186c02e2d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.718 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[54761474-26ef-47c0-935a-4123a52728c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 NetworkManager[44907]: <info>  [1759410010.7232] manager: (tapcf287730-80): new Veth device (/org/freedesktop/NetworkManager/Devices/284)
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.722 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3677a498-15f1-460f-9ee2-19432f050e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.749 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[40c2580e-af72-40a2-9c3e-ecd242559573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.752 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[9771fec7-96d9-4133-9ba2-8e7e5b794a2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 NetworkManager[44907]: <info>  [1759410010.7733] device (tapcf287730-80): carrier: link connected
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.778 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3f4be4-9b07-45f5-9f0a-bc2661f5cc49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.794 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d7353fe6-f51a-4b1a-b13b-f310b689b593]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf287730-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:e9:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786636, 'reachable_time': 18198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307968, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.807 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b7683969-076b-4d05-82ae-2fe55a9cd829]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:e9a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786636, 'tstamp': 786636}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307969, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.821 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0e00039e-b563-4e71-9942-b02c11c72cc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf287730-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:e9:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786636, 'reachable_time': 18198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307970, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.850 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b762d721-dc42-46c6-b2ce-158a9360c88c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.892 2 DEBUG nova.compute.manager [req-4164d15b-92c7-4b7c-ab6f-e97c64c2324e req-5f85f385-1839-4e8c-a294-aa98c9ab8723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.892 2 DEBUG oslo_concurrency.lockutils [req-4164d15b-92c7-4b7c-ab6f-e97c64c2324e req-5f85f385-1839-4e8c-a294-aa98c9ab8723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.892 2 DEBUG oslo_concurrency.lockutils [req-4164d15b-92c7-4b7c-ab6f-e97c64c2324e req-5f85f385-1839-4e8c-a294-aa98c9ab8723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.892 2 DEBUG oslo_concurrency.lockutils [req-4164d15b-92c7-4b7c-ab6f-e97c64c2324e req-5f85f385-1839-4e8c-a294-aa98c9ab8723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.893 2 DEBUG nova.compute.manager [req-4164d15b-92c7-4b7c-ab6f-e97c64c2324e req-5f85f385-1839-4e8c-a294-aa98c9ab8723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] No waiting events found dispatching network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.893 2 WARNING nova.compute.manager [req-4164d15b-92c7-4b7c-ab6f-e97c64c2324e req-5f85f385-1839-4e8c-a294-aa98c9ab8723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received unexpected event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.912 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ce33394f-8826-4e22-ba1e-c4638e95a8aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.913 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf287730-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.913 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.914 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf287730-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:10 np0005466031 kernel: tapcf287730-80: entered promiscuous mode
Oct  2 09:00:10 np0005466031 NetworkManager[44907]: <info>  [1759410010.9164] manager: (tapcf287730-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.918 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf287730-80, col_values=(('external_ids', {'iface-id': '9ae1fd94-b5f3-4333-9533-d979eb84ea8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:10 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:10Z|00629|binding|INFO|Releasing lport 9ae1fd94-b5f3-4333-9533-d979eb84ea8f from this chassis (sb_readonly=0)
Oct  2 09:00:10 np0005466031 nova_compute[235803]: 2025-10-02 13:00:10.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.934 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf287730-8b39-470a-9870-d19a70f15c4d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf287730-8b39-470a-9870-d19a70f15c4d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.935 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9906b1bc-0482-479a-9531-22e5555526fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.936 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-cf287730-8b39-470a-9870-d19a70f15c4d
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/cf287730-8b39-470a-9870-d19a70f15c4d.pid.haproxy
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID cf287730-8b39-470a-9870-d19a70f15c4d
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:00:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:10.936 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'env', 'PROCESS_TAG=haproxy-cf287730-8b39-470a-9870-d19a70f15c4d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf287730-8b39-470a-9870-d19a70f15c4d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:00:11 np0005466031 podman[308044]: 2025-10-02 13:00:11.299094624 +0000 UTC m=+0.056370315 container create ba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:00:11 np0005466031 systemd[1]: Started libpod-conmon-ba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29.scope.
Oct  2 09:00:11 np0005466031 podman[308044]: 2025-10-02 13:00:11.265724433 +0000 UTC m=+0.023000124 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:00:11 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:00:11 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/667022c87f1f270327e329c64f081007eeb50833e3b23deefde6a7ae60b09261/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:00:11 np0005466031 podman[308044]: 2025-10-02 13:00:11.396754608 +0000 UTC m=+0.154030319 container init ba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 09:00:11 np0005466031 podman[308044]: 2025-10-02 13:00:11.40235637 +0000 UTC m=+0.159632061 container start ba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:00:11 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[308059]: [NOTICE]   (308063) : New worker (308065) forked
Oct  2 09:00:11 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[308059]: [NOTICE]   (308063) : Loading success.
Oct  2 09:00:11 np0005466031 nova_compute[235803]: 2025-10-02 13:00:11.456 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for 3ae8ab55-a114-4284-9d2d-e70ba073cb66 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 09:00:11 np0005466031 nova_compute[235803]: 2025-10-02 13:00:11.456 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410011.4556708, 3ae8ab55-a114-4284-9d2d-e70ba073cb66 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:11 np0005466031 nova_compute[235803]: 2025-10-02 13:00:11.457 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:00:11 np0005466031 nova_compute[235803]: 2025-10-02 13:00:11.458 2 DEBUG nova.compute.manager [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:00:11 np0005466031 nova_compute[235803]: 2025-10-02 13:00:11.462 2 INFO nova.virt.libvirt.driver [-] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance rebooted successfully.#033[00m
Oct  2 09:00:11 np0005466031 nova_compute[235803]: 2025-10-02 13:00:11.462 2 DEBUG nova.compute.manager [None req-57413f82-4c16-4785-a9fa-5ea8e479813b 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:11 np0005466031 nova_compute[235803]: 2025-10-02 13:00:11.505 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:11 np0005466031 nova_compute[235803]: 2025-10-02 13:00:11.510 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:00:11 np0005466031 nova_compute[235803]: 2025-10-02 13:00:11.547 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410011.455947, 3ae8ab55-a114-4284-9d2d-e70ba073cb66 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:11 np0005466031 nova_compute[235803]: 2025-10-02 13:00:11.547 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] VM Started (Lifecycle Event)#033[00m
Oct  2 09:00:11 np0005466031 nova_compute[235803]: 2025-10-02 13:00:11.582 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:11 np0005466031 nova_compute[235803]: 2025-10-02 13:00:11.588 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:00:11 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:11Z|00630|binding|INFO|Releasing lport 9ae1fd94-b5f3-4333-9533-d979eb84ea8f from this chassis (sb_readonly=0)
Oct  2 09:00:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:00:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:11.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:00:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:11.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:11 np0005466031 nova_compute[235803]: 2025-10-02 13:00:11.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:13 np0005466031 nova_compute[235803]: 2025-10-02 13:00:13.311 2 DEBUG nova.compute.manager [req-d463e24a-f76b-4159-847a-ff69d25620fb req-8023ca41-399a-4705-b81f-0a84f97dd271 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:13 np0005466031 nova_compute[235803]: 2025-10-02 13:00:13.311 2 DEBUG oslo_concurrency.lockutils [req-d463e24a-f76b-4159-847a-ff69d25620fb req-8023ca41-399a-4705-b81f-0a84f97dd271 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:13 np0005466031 nova_compute[235803]: 2025-10-02 13:00:13.312 2 DEBUG oslo_concurrency.lockutils [req-d463e24a-f76b-4159-847a-ff69d25620fb req-8023ca41-399a-4705-b81f-0a84f97dd271 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:13 np0005466031 nova_compute[235803]: 2025-10-02 13:00:13.312 2 DEBUG oslo_concurrency.lockutils [req-d463e24a-f76b-4159-847a-ff69d25620fb req-8023ca41-399a-4705-b81f-0a84f97dd271 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:13 np0005466031 nova_compute[235803]: 2025-10-02 13:00:13.312 2 DEBUG nova.compute.manager [req-d463e24a-f76b-4159-847a-ff69d25620fb req-8023ca41-399a-4705-b81f-0a84f97dd271 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] No waiting events found dispatching network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:00:13 np0005466031 nova_compute[235803]: 2025-10-02 13:00:13.312 2 WARNING nova.compute.manager [req-d463e24a-f76b-4159-847a-ff69d25620fb req-8023ca41-399a-4705-b81f-0a84f97dd271 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received unexpected event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:00:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:13.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:00:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:13.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:00:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:15 np0005466031 nova_compute[235803]: 2025-10-02 13:00:15.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:15 np0005466031 nova_compute[235803]: 2025-10-02 13:00:15.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:15 np0005466031 nova_compute[235803]: 2025-10-02 13:00:15.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:15 np0005466031 nova_compute[235803]: 2025-10-02 13:00:15.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:15 np0005466031 nova_compute[235803]: 2025-10-02 13:00:15.659 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:15 np0005466031 nova_compute[235803]: 2025-10-02 13:00:15.660 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:15 np0005466031 nova_compute[235803]: 2025-10-02 13:00:15.660 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:15 np0005466031 nova_compute[235803]: 2025-10-02 13:00:15.661 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:00:15 np0005466031 nova_compute[235803]: 2025-10-02 13:00:15.661 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:15 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:00:15 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 09:00:15 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:00:15 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 09:00:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:15.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:15.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:00:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3382480637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.195 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.272 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.273 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.435 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.436 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4071MB free_disk=20.921802520751953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.437 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.437 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.533 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 3ae8ab55-a114-4284-9d2d-e70ba073cb66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.534 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.535 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.554 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.575 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.575 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.591 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.613 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  2 09:00:16 np0005466031 nova_compute[235803]: 2025-10-02 13:00:16.651 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:00:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 09:00:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:00:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:00:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:00:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:00:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1652227038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:00:17 np0005466031 nova_compute[235803]: 2025-10-02 13:00:17.150 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:00:17 np0005466031 nova_compute[235803]: 2025-10-02 13:00:17.155 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:00:17 np0005466031 nova_compute[235803]: 2025-10-02 13:00:17.189 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:00:17 np0005466031 nova_compute[235803]: 2025-10-02 13:00:17.230 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 09:00:17 np0005466031 nova_compute[235803]: 2025-10-02 13:00:17.231 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:00:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:17.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:17.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:18 np0005466031 nova_compute[235803]: 2025-10-02 13:00:18.232 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:00:18 np0005466031 nova_compute[235803]: 2025-10-02 13:00:18.233 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 09:00:18 np0005466031 nova_compute[235803]: 2025-10-02 13:00:18.234 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 09:00:18 np0005466031 nova_compute[235803]: 2025-10-02 13:00:18.512 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:00:18 np0005466031 nova_compute[235803]: 2025-10-02 13:00:18.512 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:00:18 np0005466031 nova_compute[235803]: 2025-10-02 13:00:18.513 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 09:00:18 np0005466031 nova_compute[235803]: 2025-10-02 13:00:18.513 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:00:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:19 np0005466031 nova_compute[235803]: 2025-10-02 13:00:19.815 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Updating instance_info_cache with network_info: [{"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:00:19 np0005466031 nova_compute[235803]: 2025-10-02 13:00:19.846 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-3ae8ab55-a114-4284-9d2d-e70ba073cb66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:00:19 np0005466031 nova_compute[235803]: 2025-10-02 13:00:19.846 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 09:00:19 np0005466031 nova_compute[235803]: 2025-10-02 13:00:19.847 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:00:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:19.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:19.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:19 np0005466031 nova_compute[235803]: 2025-10-02 13:00:19.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:20 np0005466031 nova_compute[235803]: 2025-10-02 13:00:20.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:20 np0005466031 nova_compute[235803]: 2025-10-02 13:00:20.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:20 np0005466031 nova_compute[235803]: 2025-10-02 13:00:20.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:00:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:21.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:21.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.316688) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023316747, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 2411, "num_deletes": 253, "total_data_size": 5666031, "memory_usage": 5737880, "flush_reason": "Manual Compaction"}
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023367994, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 3704781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60684, "largest_seqno": 63090, "table_properties": {"data_size": 3695028, "index_size": 6119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20943, "raw_average_key_size": 20, "raw_value_size": 3675280, "raw_average_value_size": 3638, "num_data_blocks": 265, "num_entries": 1010, "num_filter_entries": 1010, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409820, "oldest_key_time": 1759409820, "file_creation_time": 1759410023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 51363 microseconds, and 7445 cpu microseconds.
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.368058) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 3704781 bytes OK
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.368077) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.370320) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.370335) EVENT_LOG_v1 {"time_micros": 1759410023370330, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.370350) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 5655333, prev total WAL file size 5655333, number of live WAL files 2.
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.371731) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(3617KB)], [120(11MB)]
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023371789, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 15997180, "oldest_snapshot_seqno": -1}
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 8880 keys, 14031540 bytes, temperature: kUnknown
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023479756, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 14031540, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13970660, "index_size": 37600, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 229666, "raw_average_key_size": 25, "raw_value_size": 13811435, "raw_average_value_size": 1555, "num_data_blocks": 1474, "num_entries": 8880, "num_filter_entries": 8880, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759410023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.479978) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 14031540 bytes
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.498467) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.1 rd, 129.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.7 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 9406, records dropped: 526 output_compression: NoCompression
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.498503) EVENT_LOG_v1 {"time_micros": 1759410023498490, "job": 76, "event": "compaction_finished", "compaction_time_micros": 108025, "compaction_time_cpu_micros": 30082, "output_level": 6, "num_output_files": 1, "total_output_size": 14031540, "num_input_records": 9406, "num_output_records": 8880, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023499288, "job": 76, "event": "table_file_deletion", "file_number": 122}
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023501714, "job": 76, "event": "table_file_deletion", "file_number": 120}
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.371574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.501774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.501779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.501780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.501782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:23.501784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:23 np0005466031 nova_compute[235803]: 2025-10-02 13:00:23.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:00:23 np0005466031 nova_compute[235803]: 2025-10-02 13:00:23.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 09:00:23 np0005466031 nova_compute[235803]: 2025-10-02 13:00:23.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:00:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:00:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:00:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:23.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:00:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:00:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:23.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:00:24 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:24Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:3f:6f 10.100.0.3
Oct  2 09:00:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:25 np0005466031 nova_compute[235803]: 2025-10-02 13:00:25.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:25 np0005466031 nova_compute[235803]: 2025-10-02 13:00:25.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:25.868 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:00:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:25.869 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:00:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:25.869 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:00:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:00:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:25.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:00:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:25.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:27.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:27.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.671495) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028671663, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 309, "num_deletes": 251, "total_data_size": 172344, "memory_usage": 178976, "flush_reason": "Manual Compaction"}
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028675339, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 112857, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63095, "largest_seqno": 63399, "table_properties": {"data_size": 110882, "index_size": 203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5645, "raw_average_key_size": 20, "raw_value_size": 106928, "raw_average_value_size": 384, "num_data_blocks": 9, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410023, "oldest_key_time": 1759410023, "file_creation_time": 1759410028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 3840 microseconds, and 1653 cpu microseconds.
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.675427) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 112857 bytes OK
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.675454) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.676527) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.676562) EVENT_LOG_v1 {"time_micros": 1759410028676540, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.676582) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 170110, prev total WAL file size 170110, number of live WAL files 2.
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.676966) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303036' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(110KB)], [123(13MB)]
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028677000, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 14144397, "oldest_snapshot_seqno": -1}
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 8648 keys, 10298579 bytes, temperature: kUnknown
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028738907, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 10298579, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10244043, "index_size": 31832, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 225051, "raw_average_key_size": 26, "raw_value_size": 10093634, "raw_average_value_size": 1167, "num_data_blocks": 1232, "num_entries": 8648, "num_filter_entries": 8648, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759410028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.739224) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10298579 bytes
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.740636) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.2 rd, 166.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.4 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(216.6) write-amplify(91.3) OK, records in: 9158, records dropped: 510 output_compression: NoCompression
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.740655) EVENT_LOG_v1 {"time_micros": 1759410028740646, "job": 78, "event": "compaction_finished", "compaction_time_micros": 61986, "compaction_time_cpu_micros": 27206, "output_level": 6, "num_output_files": 1, "total_output_size": 10298579, "num_input_records": 9158, "num_output_records": 8648, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028740775, "job": 78, "event": "table_file_deletion", "file_number": 125}
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028743324, "job": 78, "event": "table_file_deletion", "file_number": 123}
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.676921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.743380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.743385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.743387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.743388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:28 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:00:28.743390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:28 np0005466031 podman[308451]: 2025-10-02 13:00:28.97257848 +0000 UTC m=+0.058055544 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  2 09:00:29 np0005466031 podman[308452]: 2025-10-02 13:00:29.006328772 +0000 UTC m=+0.091806866 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:00:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:29.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:29.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.384 2 DEBUG oslo_concurrency.lockutils [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.384 2 DEBUG oslo_concurrency.lockutils [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.384 2 DEBUG oslo_concurrency.lockutils [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.385 2 DEBUG oslo_concurrency.lockutils [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.385 2 DEBUG oslo_concurrency.lockutils [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.386 2 INFO nova.compute.manager [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Terminating instance#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.387 2 DEBUG nova.compute.manager [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:00:30 np0005466031 kernel: tap7fa0e3d4-af (unregistering): left promiscuous mode
Oct  2 09:00:30 np0005466031 NetworkManager[44907]: <info>  [1759410030.4523] device (tap7fa0e3d4-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:30 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:30Z|00631|binding|INFO|Releasing lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 from this chassis (sb_readonly=0)
Oct  2 09:00:30 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:30Z|00632|binding|INFO|Setting lport 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 down in Southbound
Oct  2 09:00:30 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:30Z|00633|binding|INFO|Removing iface tap7fa0e3d4-af ovn-installed in OVS
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.464 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:3f:6f 10.100.0.3'], port_security=['fa:16:3e:4f:3f:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3ae8ab55-a114-4284-9d2d-e70ba073cb66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf287730-8b39-470a-9870-d19a70f15c4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b15f29eb32d4c5cba98baa238cc12e1', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'cfe3e2cd-013f-433d-b677-80f8e7ef6a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205bb72b-7c7b-4eea-8f2e-e72a1fd482ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.465 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 in datapath cf287730-8b39-470a-9870-d19a70f15c4d unbound from our chassis#033[00m
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.466 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf287730-8b39-470a-9870-d19a70f15c4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.467 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[173720d1-d8e4-4bdb-9a7c-63de6c676253]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.468 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d namespace which is not needed anymore#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:30 np0005466031 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Oct  2 09:00:30 np0005466031 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d000000a1.scope: Consumed 13.825s CPU time.
Oct  2 09:00:30 np0005466031 systemd-machined[192227]: Machine qemu-73-instance-000000a1 terminated.
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:30 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[308059]: [NOTICE]   (308063) : haproxy version is 2.8.14-c23fe91
Oct  2 09:00:30 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[308059]: [NOTICE]   (308063) : path to executable is /usr/sbin/haproxy
Oct  2 09:00:30 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[308059]: [WARNING]  (308063) : Exiting Master process...
Oct  2 09:00:30 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[308059]: [WARNING]  (308063) : Exiting Master process...
Oct  2 09:00:30 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[308059]: [ALERT]    (308063) : Current worker (308065) exited with code 143 (Terminated)
Oct  2 09:00:30 np0005466031 neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d[308059]: [WARNING]  (308063) : All workers exited. Exiting... (0)
Oct  2 09:00:30 np0005466031 systemd[1]: libpod-ba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29.scope: Deactivated successfully.
Oct  2 09:00:30 np0005466031 podman[308544]: 2025-10-02 13:00:30.596388494 +0000 UTC m=+0.045909084 container died ba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.621 2 INFO nova.virt.libvirt.driver [-] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Instance destroyed successfully.#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.622 2 DEBUG nova.objects.instance [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lazy-loading 'resources' on Instance uuid 3ae8ab55-a114-4284-9d2d-e70ba073cb66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:30 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29-userdata-shm.mount: Deactivated successfully.
Oct  2 09:00:30 np0005466031 systemd[1]: var-lib-containers-storage-overlay-667022c87f1f270327e329c64f081007eeb50833e3b23deefde6a7ae60b09261-merged.mount: Deactivated successfully.
Oct  2 09:00:30 np0005466031 podman[308544]: 2025-10-02 13:00:30.643656486 +0000 UTC m=+0.093177076 container cleanup ba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:00:30 np0005466031 systemd[1]: libpod-conmon-ba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29.scope: Deactivated successfully.
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.672 2 DEBUG nova.virt.libvirt.vif [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:58:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-197733369',display_name='tempest-AttachVolumeTestJSON-server-197733369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-197733369',id=161,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ0vPcwtZqWNoz35xlLL8rA9I5zzmnpaTwEg2lG1uREaylQJtWDQ7C8ts/WLwGO12hcHJZb6T5z5A5i0feX0Swe4SvSwLaV8nuj55Atu7NPM6sp4Qjn/LDy82DX01KQOsw==',key_name='tempest-keypair-1518482220',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:58:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0b15f29eb32d4c5cba98baa238cc12e1',ramdisk_id='',reservation_id='r-7y34l70i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-68983480',owner_user_name='tempest-AttachVolumeTestJSON-68983480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:00:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57a1608ca1fc4bef8b6bc6ad68be3999',uuid=3ae8ab55-a114-4284-9d2d-e70ba073cb66,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.672 2 DEBUG nova.network.os_vif_util [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converting VIF {"id": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "address": "fa:16:3e:4f:3f:6f", "network": {"id": "cf287730-8b39-470a-9870-d19a70f15c4d", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-530887956-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b15f29eb32d4c5cba98baa238cc12e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fa0e3d4-af", "ovs_interfaceid": "7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.673 2 DEBUG nova.network.os_vif_util [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.673 2 DEBUG os_vif [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.676 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fa0e3d4-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.683 2 INFO os_vif [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:3f:6f,bridge_name='br-int',has_traffic_filtering=True,id=7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101,network=Network(cf287730-8b39-470a-9870-d19a70f15c4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fa0e3d4-af')#033[00m
Oct  2 09:00:30 np0005466031 podman[308584]: 2025-10-02 13:00:30.72120367 +0000 UTC m=+0.052073631 container remove ba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.732 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f27e9896-2768-4796-a5ee-754f03e506f6]: (4, ('Thu Oct  2 01:00:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d (ba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29)\nba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29\nThu Oct  2 01:00:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d (ba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29)\nba84631b667de0d7c24402a72a2ce5e208689537fa4c88211fdf37bf043f4f29\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.733 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[79a275af-79fd-4e81-bbed-36a874b5e901]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.734 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf287730-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:30 np0005466031 kernel: tapcf287730-80: left promiscuous mode
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.740 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[13ab859e-7615-4ea4-84e9-5e5990a142f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.744 2 DEBUG nova.compute.manager [req-ad44d3a0-fdcc-4098-8f88-e497cd5d6251 req-6c636ee7-2579-43e3-b0e4-835a64f5b57c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-unplugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.745 2 DEBUG oslo_concurrency.lockutils [req-ad44d3a0-fdcc-4098-8f88-e497cd5d6251 req-6c636ee7-2579-43e3-b0e4-835a64f5b57c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.746 2 DEBUG oslo_concurrency.lockutils [req-ad44d3a0-fdcc-4098-8f88-e497cd5d6251 req-6c636ee7-2579-43e3-b0e4-835a64f5b57c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.746 2 DEBUG oslo_concurrency.lockutils [req-ad44d3a0-fdcc-4098-8f88-e497cd5d6251 req-6c636ee7-2579-43e3-b0e4-835a64f5b57c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.746 2 DEBUG nova.compute.manager [req-ad44d3a0-fdcc-4098-8f88-e497cd5d6251 req-6c636ee7-2579-43e3-b0e4-835a64f5b57c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] No waiting events found dispatching network-vif-unplugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.747 2 DEBUG nova.compute.manager [req-ad44d3a0-fdcc-4098-8f88-e497cd5d6251 req-6c636ee7-2579-43e3-b0e4-835a64f5b57c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-unplugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:00:30 np0005466031 nova_compute[235803]: 2025-10-02 13:00:30.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.773 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d7de82df-5925-4501-bf34-44493713ef70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.775 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2cae1e83-a773-4d04-bc36-ae05bc1f0e4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.789 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7b5629-e422-4e4e-ae31-698458d214f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786630, 'reachable_time': 41178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308616, 'error': None, 'target': 'ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.793 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf287730-8b39-470a-9870-d19a70f15c4d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:00:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:30.793 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[634bdc0d-dd26-472d-95e9-4efe5c2ceae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:30 np0005466031 systemd[1]: run-netns-ovnmeta\x2dcf287730\x2d8b39\x2d470a\x2d9870\x2dd19a70f15c4d.mount: Deactivated successfully.
Oct  2 09:00:31 np0005466031 nova_compute[235803]: 2025-10-02 13:00:31.603 2 INFO nova.virt.libvirt.driver [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Deleting instance files /var/lib/nova/instances/3ae8ab55-a114-4284-9d2d-e70ba073cb66_del#033[00m
Oct  2 09:00:31 np0005466031 nova_compute[235803]: 2025-10-02 13:00:31.604 2 INFO nova.virt.libvirt.driver [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Deletion of /var/lib/nova/instances/3ae8ab55-a114-4284-9d2d-e70ba073cb66_del complete#033[00m
Oct  2 09:00:31 np0005466031 nova_compute[235803]: 2025-10-02 13:00:31.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:31 np0005466031 nova_compute[235803]: 2025-10-02 13:00:31.765 2 INFO nova.compute.manager [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Took 1.38 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:00:31 np0005466031 nova_compute[235803]: 2025-10-02 13:00:31.765 2 DEBUG oslo.service.loopingcall [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:00:31 np0005466031 nova_compute[235803]: 2025-10-02 13:00:31.766 2 DEBUG nova.compute.manager [-] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:00:31 np0005466031 nova_compute[235803]: 2025-10-02 13:00:31.766 2 DEBUG nova.network.neutron [-] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:00:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:31.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:31.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:32 np0005466031 nova_compute[235803]: 2025-10-02 13:00:32.643 2 DEBUG nova.network.neutron [-] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:00:32 np0005466031 nova_compute[235803]: 2025-10-02 13:00:32.661 2 INFO nova.compute.manager [-] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Took 0.89 seconds to deallocate network for instance.#033[00m
Oct  2 09:00:32 np0005466031 nova_compute[235803]: 2025-10-02 13:00:32.714 2 DEBUG oslo_concurrency.lockutils [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:32 np0005466031 nova_compute[235803]: 2025-10-02 13:00:32.714 2 DEBUG oslo_concurrency.lockutils [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:32 np0005466031 nova_compute[235803]: 2025-10-02 13:00:32.780 2 DEBUG oslo_concurrency.processutils [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:32 np0005466031 nova_compute[235803]: 2025-10-02 13:00:32.815 2 DEBUG nova.compute.manager [req-d3f461ea-302f-4943-8577-8bcf58b52953 req-bfa8a458-43f4-44ba-bfd3-d7b8a9a77680 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-deleted-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:32 np0005466031 nova_compute[235803]: 2025-10-02 13:00:32.869 2 DEBUG nova.compute.manager [req-a0d99f8d-f730-4754-9143-2a51e21b4981 req-7331f161-175d-454d-9d11-a8edd9ab5e8c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:32 np0005466031 nova_compute[235803]: 2025-10-02 13:00:32.870 2 DEBUG oslo_concurrency.lockutils [req-a0d99f8d-f730-4754-9143-2a51e21b4981 req-7331f161-175d-454d-9d11-a8edd9ab5e8c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:32 np0005466031 nova_compute[235803]: 2025-10-02 13:00:32.870 2 DEBUG oslo_concurrency.lockutils [req-a0d99f8d-f730-4754-9143-2a51e21b4981 req-7331f161-175d-454d-9d11-a8edd9ab5e8c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:32 np0005466031 nova_compute[235803]: 2025-10-02 13:00:32.870 2 DEBUG oslo_concurrency.lockutils [req-a0d99f8d-f730-4754-9143-2a51e21b4981 req-7331f161-175d-454d-9d11-a8edd9ab5e8c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:32 np0005466031 nova_compute[235803]: 2025-10-02 13:00:32.871 2 DEBUG nova.compute.manager [req-a0d99f8d-f730-4754-9143-2a51e21b4981 req-7331f161-175d-454d-9d11-a8edd9ab5e8c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] No waiting events found dispatching network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:00:32 np0005466031 nova_compute[235803]: 2025-10-02 13:00:32.871 2 WARNING nova.compute.manager [req-a0d99f8d-f730-4754-9143-2a51e21b4981 req-7331f161-175d-454d-9d11-a8edd9ab5e8c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Received unexpected event network-vif-plugged-7fa0e3d4-af96-4c8c-8dc8-a246cb1c4101 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:00:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:00:33 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/855805887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:00:33 np0005466031 nova_compute[235803]: 2025-10-02 13:00:33.265 2 DEBUG oslo_concurrency.processutils [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:33 np0005466031 nova_compute[235803]: 2025-10-02 13:00:33.271 2 DEBUG nova.compute.provider_tree [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:00:33 np0005466031 nova_compute[235803]: 2025-10-02 13:00:33.289 2 DEBUG nova.scheduler.client.report [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:00:33 np0005466031 nova_compute[235803]: 2025-10-02 13:00:33.312 2 DEBUG oslo_concurrency.lockutils [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:33 np0005466031 nova_compute[235803]: 2025-10-02 13:00:33.337 2 INFO nova.scheduler.client.report [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Deleted allocations for instance 3ae8ab55-a114-4284-9d2d-e70ba073cb66#033[00m
Oct  2 09:00:33 np0005466031 nova_compute[235803]: 2025-10-02 13:00:33.414 2 DEBUG oslo_concurrency.lockutils [None req-3c9580ec-8a5a-4844-8129-be0ceb1fd473 57a1608ca1fc4bef8b6bc6ad68be3999 0b15f29eb32d4c5cba98baa238cc12e1 - - default default] Lock "3ae8ab55-a114-4284-9d2d-e70ba073cb66" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:33.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:33.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:34 np0005466031 nova_compute[235803]: 2025-10-02 13:00:34.841 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "864bd605-1885-426f-82b2-6042de7e9f72" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:34 np0005466031 nova_compute[235803]: 2025-10-02 13:00:34.842 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:34 np0005466031 nova_compute[235803]: 2025-10-02 13:00:34.883 2 DEBUG nova.compute.manager [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:00:34 np0005466031 nova_compute[235803]: 2025-10-02 13:00:34.976 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:34 np0005466031 nova_compute[235803]: 2025-10-02 13:00:34.977 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:34 np0005466031 nova_compute[235803]: 2025-10-02 13:00:34.984 2 DEBUG nova.virt.hardware [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:00:34 np0005466031 nova_compute[235803]: 2025-10-02 13:00:34.985 2 INFO nova.compute.claims [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.096 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:00:35 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/707540778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.554 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.561 2 DEBUG nova.compute.provider_tree [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.606 2 DEBUG nova.scheduler.client.report [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:00:35 np0005466031 podman[308667]: 2025-10-02 13:00:35.626433689 +0000 UTC m=+0.052846494 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.631 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.631 2 DEBUG nova.compute.manager [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:00:35 np0005466031 podman[308666]: 2025-10-02 13:00:35.634722307 +0000 UTC m=+0.062672526 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.679 2 DEBUG nova.compute.manager [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.679 2 DEBUG nova.network.neutron [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.697 2 INFO nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.713 2 DEBUG nova.compute.manager [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.842 2 DEBUG nova.compute.manager [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.843 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.843 2 INFO nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Creating image(s)#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.874 2 DEBUG nova.storage.rbd_utils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 864bd605-1885-426f-82b2-6042de7e9f72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:35.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.905 2 DEBUG nova.storage.rbd_utils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 864bd605-1885-426f-82b2-6042de7e9f72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:35.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.939 2 DEBUG nova.storage.rbd_utils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 864bd605-1885-426f-82b2-6042de7e9f72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:35 np0005466031 nova_compute[235803]: 2025-10-02 13:00:35.944 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:36 np0005466031 nova_compute[235803]: 2025-10-02 13:00:36.013 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:36 np0005466031 nova_compute[235803]: 2025-10-02 13:00:36.014 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:36 np0005466031 nova_compute[235803]: 2025-10-02 13:00:36.015 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:36 np0005466031 nova_compute[235803]: 2025-10-02 13:00:36.015 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:36 np0005466031 nova_compute[235803]: 2025-10-02 13:00:36.040 2 DEBUG nova.storage.rbd_utils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 864bd605-1885-426f-82b2-6042de7e9f72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:36 np0005466031 nova_compute[235803]: 2025-10-02 13:00:36.043 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 864bd605-1885-426f-82b2-6042de7e9f72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:36 np0005466031 nova_compute[235803]: 2025-10-02 13:00:36.695 2 DEBUG nova.policy [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:00:36 np0005466031 nova_compute[235803]: 2025-10-02 13:00:36.830 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 864bd605-1885-426f-82b2-6042de7e9f72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:37 np0005466031 nova_compute[235803]: 2025-10-02 13:00:37.031 2 DEBUG nova.storage.rbd_utils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image 864bd605-1885-426f-82b2-6042de7e9f72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:00:37 np0005466031 nova_compute[235803]: 2025-10-02 13:00:37.294 2 DEBUG nova.objects.instance [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid 864bd605-1885-426f-82b2-6042de7e9f72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:37 np0005466031 nova_compute[235803]: 2025-10-02 13:00:37.313 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:00:37 np0005466031 nova_compute[235803]: 2025-10-02 13:00:37.313 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Ensure instance console log exists: /var/lib/nova/instances/864bd605-1885-426f-82b2-6042de7e9f72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:00:37 np0005466031 nova_compute[235803]: 2025-10-02 13:00:37.314 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:37 np0005466031 nova_compute[235803]: 2025-10-02 13:00:37.314 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:37 np0005466031 nova_compute[235803]: 2025-10-02 13:00:37.315 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:37.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:37.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:38 np0005466031 nova_compute[235803]: 2025-10-02 13:00:38.269 2 DEBUG nova.network.neutron [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Successfully updated port: 6b57c5d9-dd16-427a-85c2-c02dedb41e29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:00:38 np0005466031 nova_compute[235803]: 2025-10-02 13:00:38.286 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-864bd605-1885-426f-82b2-6042de7e9f72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:00:38 np0005466031 nova_compute[235803]: 2025-10-02 13:00:38.286 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-864bd605-1885-426f-82b2-6042de7e9f72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:00:38 np0005466031 nova_compute[235803]: 2025-10-02 13:00:38.286 2 DEBUG nova.network.neutron [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:00:38 np0005466031 nova_compute[235803]: 2025-10-02 13:00:38.379 2 DEBUG nova.compute.manager [req-fce10d4c-4c7c-478e-b0ef-ddec020c4546 req-3c7b82b2-2f53-45df-971c-e51590f55c74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Received event network-changed-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:38 np0005466031 nova_compute[235803]: 2025-10-02 13:00:38.379 2 DEBUG nova.compute.manager [req-fce10d4c-4c7c-478e-b0ef-ddec020c4546 req-3c7b82b2-2f53-45df-971c-e51590f55c74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Refreshing instance network info cache due to event network-changed-6b57c5d9-dd16-427a-85c2-c02dedb41e29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:00:38 np0005466031 nova_compute[235803]: 2025-10-02 13:00:38.380 2 DEBUG oslo_concurrency.lockutils [req-fce10d4c-4c7c-478e-b0ef-ddec020c4546 req-3c7b82b2-2f53-45df-971c-e51590f55c74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-864bd605-1885-426f-82b2-6042de7e9f72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:00:38 np0005466031 nova_compute[235803]: 2025-10-02 13:00:38.467 2 DEBUG nova.network.neutron [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.363 2 DEBUG nova.network.neutron [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Updating instance_info_cache with network_info: [{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.386 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-864bd605-1885-426f-82b2-6042de7e9f72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.387 2 DEBUG nova.compute.manager [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Instance network_info: |[{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.387 2 DEBUG oslo_concurrency.lockutils [req-fce10d4c-4c7c-478e-b0ef-ddec020c4546 req-3c7b82b2-2f53-45df-971c-e51590f55c74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-864bd605-1885-426f-82b2-6042de7e9f72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.387 2 DEBUG nova.network.neutron [req-fce10d4c-4c7c-478e-b0ef-ddec020c4546 req-3c7b82b2-2f53-45df-971c-e51590f55c74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Refreshing network info cache for port 6b57c5d9-dd16-427a-85c2-c02dedb41e29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.390 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Start _get_guest_xml network_info=[{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.394 2 WARNING nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.399 2 DEBUG nova.virt.libvirt.host [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.400 2 DEBUG nova.virt.libvirt.host [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.403 2 DEBUG nova.virt.libvirt.host [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.404 2 DEBUG nova.virt.libvirt.host [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.405 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.405 2 DEBUG nova.virt.hardware [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.405 2 DEBUG nova.virt.hardware [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.406 2 DEBUG nova.virt.hardware [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.406 2 DEBUG nova.virt.hardware [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.406 2 DEBUG nova.virt.hardware [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.406 2 DEBUG nova.virt.hardware [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.407 2 DEBUG nova.virt.hardware [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.407 2 DEBUG nova.virt.hardware [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.407 2 DEBUG nova.virt.hardware [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.407 2 DEBUG nova.virt.hardware [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.407 2 DEBUG nova.virt.hardware [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.410 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:00:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1953748865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.861 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.889 2 DEBUG nova.storage.rbd_utils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 864bd605-1885-426f-82b2-6042de7e9f72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:39 np0005466031 nova_compute[235803]: 2025-10-02 13:00:39.893 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:39.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:39.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:00:40 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3874696248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.406 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.408 2 DEBUG nova.virt.libvirt.vif [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1996621182',display_name='tempest-TestNetworkBasicOps-server-1996621182',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1996621182',id=167,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEcBrYw2Ah6rndhGGj0NSBXaSVyLIj0NJhIegA/1g1hfhi15itThzHKMO+L57zY/Kc6jvPCd1YUOMyJmbPXkVz47bk2c34RNeVn3SVqRhSnvIxhIHfhR2KOHOFBFGQMu3w==',key_name='tempest-TestNetworkBasicOps-1209815931',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-lm65kipj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:35Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=864bd605-1885-426f-82b2-6042de7e9f72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.409 2 DEBUG nova.network.os_vif_util [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.409 2 DEBUG nova.network.os_vif_util [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.411 2 DEBUG nova.objects.instance [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid 864bd605-1885-426f-82b2-6042de7e9f72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.430 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  <uuid>864bd605-1885-426f-82b2-6042de7e9f72</uuid>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  <name>instance-000000a7</name>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestNetworkBasicOps-server-1996621182</nova:name>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:00:39</nova:creationTime>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <nova:port uuid="6b57c5d9-dd16-427a-85c2-c02dedb41e29">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <entry name="serial">864bd605-1885-426f-82b2-6042de7e9f72</entry>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <entry name="uuid">864bd605-1885-426f-82b2-6042de7e9f72</entry>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/864bd605-1885-426f-82b2-6042de7e9f72_disk">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/864bd605-1885-426f-82b2-6042de7e9f72_disk.config">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:6b:44:86"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <target dev="tap6b57c5d9-dd"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/864bd605-1885-426f-82b2-6042de7e9f72/console.log" append="off"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:00:40 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:00:40 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:00:40 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:00:40 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.431 2 DEBUG nova.compute.manager [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Preparing to wait for external event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.432 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "864bd605-1885-426f-82b2-6042de7e9f72-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.432 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.432 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.433 2 DEBUG nova.virt.libvirt.vif [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1996621182',display_name='tempest-TestNetworkBasicOps-server-1996621182',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1996621182',id=167,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEcBrYw2Ah6rndhGGj0NSBXaSVyLIj0NJhIegA/1g1hfhi15itThzHKMO+L57zY/Kc6jvPCd1YUOMyJmbPXkVz47bk2c34RNeVn3SVqRhSnvIxhIHfhR2KOHOFBFGQMu3w==',key_name='tempest-TestNetworkBasicOps-1209815931',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-lm65kipj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:35Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=864bd605-1885-426f-82b2-6042de7e9f72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.433 2 DEBUG nova.network.os_vif_util [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.434 2 DEBUG nova.network.os_vif_util [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.434 2 DEBUG os_vif [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.435 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.440 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b57c5d9-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.441 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b57c5d9-dd, col_values=(('external_ids', {'iface-id': '6b57c5d9-dd16-427a-85c2-c02dedb41e29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:44:86', 'vm-uuid': '864bd605-1885-426f-82b2-6042de7e9f72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:00:40 np0005466031 NetworkManager[44907]: <info>  [1759410040.4443] manager: (tap6b57c5d9-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.450 2 INFO os_vif [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd')
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.541 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.541 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.541 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:6b:44:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.542 2 INFO nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Using config drive
Oct  2 09:00:40 np0005466031 nova_compute[235803]: 2025-10-02 13:00:40.564 2 DEBUG nova.storage.rbd_utils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 864bd605-1885-426f-82b2-6042de7e9f72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:00:41 np0005466031 nova_compute[235803]: 2025-10-02 13:00:41.581 2 INFO nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Creating config drive at /var/lib/nova/instances/864bd605-1885-426f-82b2-6042de7e9f72/disk.config
Oct  2 09:00:41 np0005466031 nova_compute[235803]: 2025-10-02 13:00:41.586 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/864bd605-1885-426f-82b2-6042de7e9f72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbyrsa8hb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:00:41 np0005466031 nova_compute[235803]: 2025-10-02 13:00:41.723 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/864bd605-1885-426f-82b2-6042de7e9f72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbyrsa8hb" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:00:41 np0005466031 nova_compute[235803]: 2025-10-02 13:00:41.754 2 DEBUG nova.storage.rbd_utils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 864bd605-1885-426f-82b2-6042de7e9f72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:00:41 np0005466031 nova_compute[235803]: 2025-10-02 13:00:41.758 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/864bd605-1885-426f-82b2-6042de7e9f72/disk.config 864bd605-1885-426f-82b2-6042de7e9f72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:00:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:41.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:41 np0005466031 nova_compute[235803]: 2025-10-02 13:00:41.900 2 DEBUG nova.network.neutron [req-fce10d4c-4c7c-478e-b0ef-ddec020c4546 req-3c7b82b2-2f53-45df-971c-e51590f55c74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Updated VIF entry in instance network info cache for port 6b57c5d9-dd16-427a-85c2-c02dedb41e29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 09:00:41 np0005466031 nova_compute[235803]: 2025-10-02 13:00:41.901 2 DEBUG nova.network.neutron [req-fce10d4c-4c7c-478e-b0ef-ddec020c4546 req-3c7b82b2-2f53-45df-971c-e51590f55c74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Updating instance_info_cache with network_info: [{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:00:41 np0005466031 nova_compute[235803]: 2025-10-02 13:00:41.927 2 DEBUG oslo_concurrency.processutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/864bd605-1885-426f-82b2-6042de7e9f72/disk.config 864bd605-1885-426f-82b2-6042de7e9f72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:00:41 np0005466031 nova_compute[235803]: 2025-10-02 13:00:41.928 2 INFO nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Deleting local config drive /var/lib/nova/instances/864bd605-1885-426f-82b2-6042de7e9f72/disk.config because it was imported into RBD.
Oct  2 09:00:41 np0005466031 nova_compute[235803]: 2025-10-02 13:00:41.930 2 DEBUG oslo_concurrency.lockutils [req-fce10d4c-4c7c-478e-b0ef-ddec020c4546 req-3c7b82b2-2f53-45df-971c-e51590f55c74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-864bd605-1885-426f-82b2-6042de7e9f72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:00:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:41.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:41 np0005466031 kernel: tap6b57c5d9-dd: entered promiscuous mode
Oct  2 09:00:41 np0005466031 NetworkManager[44907]: <info>  [1759410041.9821] manager: (tap6b57c5d9-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/287)
Oct  2 09:00:41 np0005466031 nova_compute[235803]: 2025-10-02 13:00:41.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:41Z|00634|binding|INFO|Claiming lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 for this chassis.
Oct  2 09:00:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:41Z|00635|binding|INFO|6b57c5d9-dd16-427a-85c2-c02dedb41e29: Claiming fa:16:3e:6b:44:86 10.100.0.10
Oct  2 09:00:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:41Z|00636|binding|INFO|Setting lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 ovn-installed in OVS
Oct  2 09:00:41 np0005466031 nova_compute[235803]: 2025-10-02 13:00:41.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:42 np0005466031 nova_compute[235803]: 2025-10-02 13:00:42.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:42 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:42Z|00637|binding|INFO|Setting lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 up in Southbound
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.003 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:44:86 10.100.0.10'], port_security=['fa:16:3e:6b:44:86 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1074515355', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '864bd605-1885-426f-82b2-6042de7e9f72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1074515355', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'f9e51548-d675-4462-aaf0-72519e827667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a001cef-b85b-4c88-a329-8db2a6ee024d, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=6b57c5d9-dd16-427a-85c2-c02dedb41e29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.004 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 6b57c5d9-dd16-427a-85c2-c02dedb41e29 in datapath 2dacd3c2-a76f-4896-a922-fdbbab78ce12 bound to our chassis
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.006 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2dacd3c2-a76f-4896-a922-fdbbab78ce12
Oct  2 09:00:42 np0005466031 systemd-udevd[309011]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:00:42 np0005466031 systemd-machined[192227]: New machine qemu-74-instance-000000a7.
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.019 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[628169c1-2519-4810-85f7-0d32922626eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.020 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2dacd3c2-a1 in ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.021 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2dacd3c2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.022 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[17122868-459e-4be5-99ee-ed3bc4a7cf97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.023 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e00bb559-f054-482b-bde1-6389a1fb568e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:00:42 np0005466031 NetworkManager[44907]: <info>  [1759410042.0262] device (tap6b57c5d9-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:00:42 np0005466031 NetworkManager[44907]: <info>  [1759410042.0274] device (tap6b57c5d9-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:00:42 np0005466031 systemd[1]: Started Virtual Machine qemu-74-instance-000000a7.
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.040 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[94bb364a-9fbe-4bb3-92b1-764906811b1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.069 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e81feb-4bbf-4c40-93f9-55a04053ed04]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.106 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[5af1e2df-95d0-4d14-b159-4687fea93556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.111 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f00a4f53-d9d4-40f5-bd39-1f16ea5e95ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:00:42 np0005466031 NetworkManager[44907]: <info>  [1759410042.1128] manager: (tap2dacd3c2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/288)
Oct  2 09:00:42 np0005466031 systemd-udevd[309016]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.146 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[73b37704-1783-4556-9865-700ea3f025a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.149 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[447f25bc-af11-419b-81aa-bf496fcfa316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:00:42 np0005466031 NetworkManager[44907]: <info>  [1759410042.1751] device (tap2dacd3c2-a0): carrier: link connected
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.181 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[821dcb46-b179-4229-ac8b-a46188aa3091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.198 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e84f3fba-ad22-4639-a09a-b49ff2639ce3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2dacd3c2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:7e:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 789776, 'reachable_time': 40180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309045, 'error': None, 'target': 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.214 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9bbdf9bd-c6d5-4590-bc55-f66de900130a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe66:7e5f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789776, 'tstamp': 789776}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309046, 'error': None, 'target': 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.230 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a981cb84-0aed-4675-9241-10f5e9620ce9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2dacd3c2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:7e:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 789776, 'reachable_time': 40180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309047, 'error': None, 'target': 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.264 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0940bdfd-e205-46d3-8741-c6129c62b99e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:42 np0005466031 nova_compute[235803]: 2025-10-02 13:00:42.270 2 DEBUG nova.compute.manager [req-f9421dc7-b1be-4d00-abc7-5487007c20fd req-9d9c4ed6-9ec1-4eed-9c94-3a7c7a22f9ab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Received event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:42 np0005466031 nova_compute[235803]: 2025-10-02 13:00:42.270 2 DEBUG oslo_concurrency.lockutils [req-f9421dc7-b1be-4d00-abc7-5487007c20fd req-9d9c4ed6-9ec1-4eed-9c94-3a7c7a22f9ab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "864bd605-1885-426f-82b2-6042de7e9f72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:42 np0005466031 nova_compute[235803]: 2025-10-02 13:00:42.270 2 DEBUG oslo_concurrency.lockutils [req-f9421dc7-b1be-4d00-abc7-5487007c20fd req-9d9c4ed6-9ec1-4eed-9c94-3a7c7a22f9ab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:42 np0005466031 nova_compute[235803]: 2025-10-02 13:00:42.270 2 DEBUG oslo_concurrency.lockutils [req-f9421dc7-b1be-4d00-abc7-5487007c20fd req-9d9c4ed6-9ec1-4eed-9c94-3a7c7a22f9ab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:42 np0005466031 nova_compute[235803]: 2025-10-02 13:00:42.271 2 DEBUG nova.compute.manager [req-f9421dc7-b1be-4d00-abc7-5487007c20fd req-9d9c4ed6-9ec1-4eed-9c94-3a7c7a22f9ab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Processing event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.329 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c53e4b73-633b-43c7-83fc-6698c452a21c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.330 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2dacd3c2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.331 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.331 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2dacd3c2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:42 np0005466031 kernel: tap2dacd3c2-a0: entered promiscuous mode
Oct  2 09:00:42 np0005466031 nova_compute[235803]: 2025-10-02 13:00:42.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:42 np0005466031 NetworkManager[44907]: <info>  [1759410042.3340] manager: (tap2dacd3c2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.336 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2dacd3c2-a0, col_values=(('external_ids', {'iface-id': '563b4b62-2487-404e-81e1-f7d5b24fae89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:42 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:42Z|00638|binding|INFO|Releasing lport 563b4b62-2487-404e-81e1-f7d5b24fae89 from this chassis (sb_readonly=0)
Oct  2 09:00:42 np0005466031 nova_compute[235803]: 2025-10-02 13:00:42.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:42 np0005466031 nova_compute[235803]: 2025-10-02 13:00:42.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.354 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2dacd3c2-a76f-4896-a922-fdbbab78ce12.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2dacd3c2-a76f-4896-a922-fdbbab78ce12.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.355 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6f341a83-b796-4098-9cbb-32f9a21635b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.355 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-2dacd3c2-a76f-4896-a922-fdbbab78ce12
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/2dacd3c2-a76f-4896-a922-fdbbab78ce12.pid.haproxy
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 2dacd3c2-a76f-4896-a922-fdbbab78ce12
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:00:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:42.356 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'env', 'PROCESS_TAG=haproxy-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2dacd3c2-a76f-4896-a922-fdbbab78ce12.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:00:42 np0005466031 podman[309121]: 2025-10-02 13:00:42.758266905 +0000 UTC m=+0.061027739 container create 78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:00:42 np0005466031 systemd[1]: Started libpod-conmon-78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d.scope.
Oct  2 09:00:42 np0005466031 podman[309121]: 2025-10-02 13:00:42.721703522 +0000 UTC m=+0.024464386 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:00:42 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:00:42 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b9390f6cb2f00e16576bfd441149473b650bdb80019ff647bc432ff5416548/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:00:42 np0005466031 podman[309121]: 2025-10-02 13:00:42.855739404 +0000 UTC m=+0.158500268 container init 78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 09:00:42 np0005466031 podman[309121]: 2025-10-02 13:00:42.861648974 +0000 UTC m=+0.164409808 container start 78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:00:42 np0005466031 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[309136]: [NOTICE]   (309140) : New worker (309142) forked
Oct  2 09:00:42 np0005466031 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[309136]: [NOTICE]   (309140) : Loading success.
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.064 2 DEBUG nova.compute.manager [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.065 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410043.0638285, 864bd605-1885-426f-82b2-6042de7e9f72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.065 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] VM Started (Lifecycle Event)#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.068 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.072 2 INFO nova.virt.libvirt.driver [-] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Instance spawned successfully.#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.072 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.092 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.095 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.120 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.120 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.121 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.121 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.121 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.122 2 DEBUG nova.virt.libvirt.driver [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.125 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.125 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410043.0651586, 864bd605-1885-426f-82b2-6042de7e9f72 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.126 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.175 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.179 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410043.067958, 864bd605-1885-426f-82b2-6042de7e9f72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.179 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.214 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.219 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.226 2 INFO nova.compute.manager [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Took 7.38 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.227 2 DEBUG nova.compute.manager [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.262 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.318 2 INFO nova.compute.manager [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Took 8.36 seconds to build instance.#033[00m
Oct  2 09:00:43 np0005466031 nova_compute[235803]: 2025-10-02 13:00:43.337 2 DEBUG oslo_concurrency.lockutils [None req-e39c38d0-5882-41d9-afd4-614138708390 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:43.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:00:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:43.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:00:44 np0005466031 nova_compute[235803]: 2025-10-02 13:00:44.359 2 DEBUG nova.compute.manager [req-b854f3c6-ce3f-405d-921c-1480a4d6e53a req-b433a900-23ee-4287-852a-254eed9b1b6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Received event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:44 np0005466031 nova_compute[235803]: 2025-10-02 13:00:44.359 2 DEBUG oslo_concurrency.lockutils [req-b854f3c6-ce3f-405d-921c-1480a4d6e53a req-b433a900-23ee-4287-852a-254eed9b1b6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "864bd605-1885-426f-82b2-6042de7e9f72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:44 np0005466031 nova_compute[235803]: 2025-10-02 13:00:44.359 2 DEBUG oslo_concurrency.lockutils [req-b854f3c6-ce3f-405d-921c-1480a4d6e53a req-b433a900-23ee-4287-852a-254eed9b1b6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:44 np0005466031 nova_compute[235803]: 2025-10-02 13:00:44.360 2 DEBUG oslo_concurrency.lockutils [req-b854f3c6-ce3f-405d-921c-1480a4d6e53a req-b433a900-23ee-4287-852a-254eed9b1b6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:44 np0005466031 nova_compute[235803]: 2025-10-02 13:00:44.360 2 DEBUG nova.compute.manager [req-b854f3c6-ce3f-405d-921c-1480a4d6e53a req-b433a900-23ee-4287-852a-254eed9b1b6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] No waiting events found dispatching network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:00:44 np0005466031 nova_compute[235803]: 2025-10-02 13:00:44.360 2 WARNING nova.compute.manager [req-b854f3c6-ce3f-405d-921c-1480a4d6e53a req-b433a900-23ee-4287-852a-254eed9b1b6a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Received unexpected event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:00:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:45 np0005466031 nova_compute[235803]: 2025-10-02 13:00:45.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:45 np0005466031 nova_compute[235803]: 2025-10-02 13:00:45.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:45 np0005466031 nova_compute[235803]: 2025-10-02 13:00:45.619 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410030.6170738, 3ae8ab55-a114-4284-9d2d-e70ba073cb66 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:45 np0005466031 nova_compute[235803]: 2025-10-02 13:00:45.619 2 INFO nova.compute.manager [-] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:00:45 np0005466031 nova_compute[235803]: 2025-10-02 13:00:45.638 2 DEBUG nova.compute.manager [None req-ff361c46-d856-4a36-ad62-128fce8f2c7e - - - - - -] [instance: 3ae8ab55-a114-4284-9d2d-e70ba073cb66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:45.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:45.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.114 2 DEBUG oslo_concurrency.lockutils [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "864bd605-1885-426f-82b2-6042de7e9f72" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.114 2 DEBUG oslo_concurrency.lockutils [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.115 2 DEBUG oslo_concurrency.lockutils [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "864bd605-1885-426f-82b2-6042de7e9f72-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.116 2 DEBUG oslo_concurrency.lockutils [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.116 2 DEBUG oslo_concurrency.lockutils [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.117 2 INFO nova.compute.manager [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Terminating instance#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.118 2 DEBUG nova.compute.manager [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:00:46 np0005466031 kernel: tap6b57c5d9-dd (unregistering): left promiscuous mode
Oct  2 09:00:46 np0005466031 NetworkManager[44907]: <info>  [1759410046.1749] device (tap6b57c5d9-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:46 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:46Z|00639|binding|INFO|Releasing lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 from this chassis (sb_readonly=0)
Oct  2 09:00:46 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:46Z|00640|binding|INFO|Setting lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 down in Southbound
Oct  2 09:00:46 np0005466031 ovn_controller[132413]: 2025-10-02T13:00:46Z|00641|binding|INFO|Removing iface tap6b57c5d9-dd ovn-installed in OVS
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.193 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:44:86 10.100.0.10'], port_security=['fa:16:3e:6b:44:86 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1074515355', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '864bd605-1885-426f-82b2-6042de7e9f72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1074515355', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'f9e51548-d675-4462-aaf0-72519e827667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.193', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a001cef-b85b-4c88-a329-8db2a6ee024d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=6b57c5d9-dd16-427a-85c2-c02dedb41e29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.195 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 6b57c5d9-dd16-427a-85c2-c02dedb41e29 in datapath 2dacd3c2-a76f-4896-a922-fdbbab78ce12 unbound from our chassis#033[00m
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.196 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2dacd3c2-a76f-4896-a922-fdbbab78ce12, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.197 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[66280a53-35e7-4ec9-b150-784fa8ac358c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.197 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 namespace which is not needed anymore#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:46 np0005466031 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a7.scope: Deactivated successfully.
Oct  2 09:00:46 np0005466031 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a7.scope: Consumed 4.122s CPU time.
Oct  2 09:00:46 np0005466031 systemd-machined[192227]: Machine qemu-74-instance-000000a7 terminated.
Oct  2 09:00:46 np0005466031 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[309136]: [NOTICE]   (309140) : haproxy version is 2.8.14-c23fe91
Oct  2 09:00:46 np0005466031 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[309136]: [NOTICE]   (309140) : path to executable is /usr/sbin/haproxy
Oct  2 09:00:46 np0005466031 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[309136]: [WARNING]  (309140) : Exiting Master process...
Oct  2 09:00:46 np0005466031 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[309136]: [ALERT]    (309140) : Current worker (309142) exited with code 143 (Terminated)
Oct  2 09:00:46 np0005466031 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[309136]: [WARNING]  (309140) : All workers exited. Exiting... (0)
Oct  2 09:00:46 np0005466031 systemd[1]: libpod-78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d.scope: Deactivated successfully.
Oct  2 09:00:46 np0005466031 podman[309174]: 2025-10-02 13:00:46.340333931 +0000 UTC m=+0.059154255 container died 78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.352 2 INFO nova.virt.libvirt.driver [-] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Instance destroyed successfully.#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.353 2 DEBUG nova.objects.instance [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid 864bd605-1885-426f-82b2-6042de7e9f72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.368 2 DEBUG nova.virt.libvirt.vif [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1996621182',display_name='tempest-TestNetworkBasicOps-server-1996621182',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1996621182',id=167,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEcBrYw2Ah6rndhGGj0NSBXaSVyLIj0NJhIegA/1g1hfhi15itThzHKMO+L57zY/Kc6jvPCd1YUOMyJmbPXkVz47bk2c34RNeVn3SVqRhSnvIxhIHfhR2KOHOFBFGQMu3w==',key_name='tempest-TestNetworkBasicOps-1209815931',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:00:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-lm65kipj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:00:43Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=864bd605-1885-426f-82b2-6042de7e9f72,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.368 2 DEBUG nova.network.os_vif_util [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.369 2 DEBUG nova.network.os_vif_util [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.370 2 DEBUG os_vif [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b57c5d9-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.423 2 INFO os_vif [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd')#033[00m
Oct  2 09:00:46 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d-userdata-shm.mount: Deactivated successfully.
Oct  2 09:00:46 np0005466031 systemd[1]: var-lib-containers-storage-overlay-09b9390f6cb2f00e16576bfd441149473b650bdb80019ff647bc432ff5416548-merged.mount: Deactivated successfully.
Oct  2 09:00:46 np0005466031 podman[309174]: 2025-10-02 13:00:46.448627071 +0000 UTC m=+0.167447405 container cleanup 78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:00:46 np0005466031 systemd[1]: libpod-conmon-78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d.scope: Deactivated successfully.
Oct  2 09:00:46 np0005466031 podman[309228]: 2025-10-02 13:00:46.514721975 +0000 UTC m=+0.042652950 container remove 78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.520 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[17517c63-ed73-44d4-be00-5b3d3a9ef3d2]: (4, ('Thu Oct  2 01:00:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 (78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d)\n78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d\nThu Oct  2 01:00:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 (78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d)\n78d8cd82f27e6c85ff49ad6bc06f96f749bf16c4851887e155c95a09073ba13d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.521 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[40a74b81-80c7-4eed-9d87-b24223c4af8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.522 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2dacd3c2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:46 np0005466031 kernel: tap2dacd3c2-a0: left promiscuous mode
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.541 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[05ff2028-35c3-4391-abf3-bafecb8e9512]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.567 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c4697c9b-17c4-49c3-ab3f-ddc8e39be6f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.568 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea87108-82aa-4321-9599-b5fbcb1f928c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.583 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[10bf8f83-2857-4e6e-a0b9-005856306dd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 789768, 'reachable_time': 32425, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309246, 'error': None, 'target': 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:46 np0005466031 systemd[1]: run-netns-ovnmeta\x2d2dacd3c2\x2da76f\x2d4896\x2da922\x2dfdbbab78ce12.mount: Deactivated successfully.
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.587 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:00:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:00:46.587 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed8a788-0186-423f-b227-de1a2e3aeb1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.711 2 DEBUG nova.compute.manager [req-89044487-5aed-423c-9873-f8520a6c1c01 req-e75dc43f-c449-4c83-be68-28631109b3a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Received event network-vif-unplugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.711 2 DEBUG oslo_concurrency.lockutils [req-89044487-5aed-423c-9873-f8520a6c1c01 req-e75dc43f-c449-4c83-be68-28631109b3a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "864bd605-1885-426f-82b2-6042de7e9f72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.711 2 DEBUG oslo_concurrency.lockutils [req-89044487-5aed-423c-9873-f8520a6c1c01 req-e75dc43f-c449-4c83-be68-28631109b3a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.712 2 DEBUG oslo_concurrency.lockutils [req-89044487-5aed-423c-9873-f8520a6c1c01 req-e75dc43f-c449-4c83-be68-28631109b3a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.712 2 DEBUG nova.compute.manager [req-89044487-5aed-423c-9873-f8520a6c1c01 req-e75dc43f-c449-4c83-be68-28631109b3a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] No waiting events found dispatching network-vif-unplugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:00:46 np0005466031 nova_compute[235803]: 2025-10-02 13:00:46.712 2 DEBUG nova.compute.manager [req-89044487-5aed-423c-9873-f8520a6c1c01 req-e75dc43f-c449-4c83-be68-28631109b3a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Received event network-vif-unplugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:00:47 np0005466031 nova_compute[235803]: 2025-10-02 13:00:47.172 2 INFO nova.virt.libvirt.driver [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Deleting instance files /var/lib/nova/instances/864bd605-1885-426f-82b2-6042de7e9f72_del#033[00m
Oct  2 09:00:47 np0005466031 nova_compute[235803]: 2025-10-02 13:00:47.173 2 INFO nova.virt.libvirt.driver [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Deletion of /var/lib/nova/instances/864bd605-1885-426f-82b2-6042de7e9f72_del complete#033[00m
Oct  2 09:00:47 np0005466031 nova_compute[235803]: 2025-10-02 13:00:47.226 2 INFO nova.compute.manager [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Took 1.11 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:00:47 np0005466031 nova_compute[235803]: 2025-10-02 13:00:47.227 2 DEBUG oslo.service.loopingcall [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:00:47 np0005466031 nova_compute[235803]: 2025-10-02 13:00:47.227 2 DEBUG nova.compute.manager [-] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:00:47 np0005466031 nova_compute[235803]: 2025-10-02 13:00:47.227 2 DEBUG nova.network.neutron [-] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:00:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:47.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:00:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:47.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:00:48 np0005466031 nova_compute[235803]: 2025-10-02 13:00:48.811 2 DEBUG nova.compute.manager [req-e40045cf-9d02-4379-b5d7-ff61bf381c45 req-ff0ed0e7-0ab2-4c55-b0c1-e1c51c8ce8dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Received event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:48 np0005466031 nova_compute[235803]: 2025-10-02 13:00:48.811 2 DEBUG oslo_concurrency.lockutils [req-e40045cf-9d02-4379-b5d7-ff61bf381c45 req-ff0ed0e7-0ab2-4c55-b0c1-e1c51c8ce8dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "864bd605-1885-426f-82b2-6042de7e9f72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:48 np0005466031 nova_compute[235803]: 2025-10-02 13:00:48.812 2 DEBUG oslo_concurrency.lockutils [req-e40045cf-9d02-4379-b5d7-ff61bf381c45 req-ff0ed0e7-0ab2-4c55-b0c1-e1c51c8ce8dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:48 np0005466031 nova_compute[235803]: 2025-10-02 13:00:48.812 2 DEBUG oslo_concurrency.lockutils [req-e40045cf-9d02-4379-b5d7-ff61bf381c45 req-ff0ed0e7-0ab2-4c55-b0c1-e1c51c8ce8dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:48 np0005466031 nova_compute[235803]: 2025-10-02 13:00:48.812 2 DEBUG nova.compute.manager [req-e40045cf-9d02-4379-b5d7-ff61bf381c45 req-ff0ed0e7-0ab2-4c55-b0c1-e1c51c8ce8dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] No waiting events found dispatching network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:00:48 np0005466031 nova_compute[235803]: 2025-10-02 13:00:48.812 2 WARNING nova.compute.manager [req-e40045cf-9d02-4379-b5d7-ff61bf381c45 req-ff0ed0e7-0ab2-4c55-b0c1-e1c51c8ce8dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Received unexpected event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:00:49 np0005466031 nova_compute[235803]: 2025-10-02 13:00:49.017 2 DEBUG nova.network.neutron [-] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:00:49 np0005466031 nova_compute[235803]: 2025-10-02 13:00:49.047 2 INFO nova.compute.manager [-] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Took 1.82 seconds to deallocate network for instance.#033[00m
Oct  2 09:00:49 np0005466031 nova_compute[235803]: 2025-10-02 13:00:49.133 2 DEBUG oslo_concurrency.lockutils [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:49 np0005466031 nova_compute[235803]: 2025-10-02 13:00:49.134 2 DEBUG oslo_concurrency.lockutils [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:49 np0005466031 nova_compute[235803]: 2025-10-02 13:00:49.191 2 DEBUG oslo_concurrency.processutils [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:00:49 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1634489610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:00:49 np0005466031 nova_compute[235803]: 2025-10-02 13:00:49.714 2 DEBUG oslo_concurrency.processutils [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:49 np0005466031 nova_compute[235803]: 2025-10-02 13:00:49.726 2 DEBUG nova.compute.provider_tree [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:00:49 np0005466031 nova_compute[235803]: 2025-10-02 13:00:49.745 2 DEBUG nova.scheduler.client.report [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:00:49 np0005466031 nova_compute[235803]: 2025-10-02 13:00:49.772 2 DEBUG oslo_concurrency.lockutils [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:49 np0005466031 nova_compute[235803]: 2025-10-02 13:00:49.807 2 INFO nova.scheduler.client.report [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance 864bd605-1885-426f-82b2-6042de7e9f72#033[00m
Oct  2 09:00:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:00:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:49.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:00:49 np0005466031 nova_compute[235803]: 2025-10-02 13:00:49.929 2 DEBUG oslo_concurrency.lockutils [None req-edba34b1-268f-44d8-aee8-ec532b4e329a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "864bd605-1885-426f-82b2-6042de7e9f72" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:49.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:50 np0005466031 nova_compute[235803]: 2025-10-02 13:00:50.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:51 np0005466031 nova_compute[235803]: 2025-10-02 13:00:51.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:51.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:00:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:51.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:00:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:00:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:53.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:00:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:53.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:55 np0005466031 nova_compute[235803]: 2025-10-02 13:00:55.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:55.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:00:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:55.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:00:56 np0005466031 nova_compute[235803]: 2025-10-02 13:00:56.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:57.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:57.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:59 np0005466031 podman[309326]: 2025-10-02 13:00:59.623667387 +0000 UTC m=+0.056941502 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:00:59 np0005466031 podman[309327]: 2025-10-02 13:00:59.675876781 +0000 UTC m=+0.108568429 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:00:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:59.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:00:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:00:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:59.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:00 np0005466031 nova_compute[235803]: 2025-10-02 13:01:00.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:01 np0005466031 nova_compute[235803]: 2025-10-02 13:01:01.351 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410046.3494906, 864bd605-1885-426f-82b2-6042de7e9f72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:01:01 np0005466031 nova_compute[235803]: 2025-10-02 13:01:01.352 2 INFO nova.compute.manager [-] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:01:01 np0005466031 nova_compute[235803]: 2025-10-02 13:01:01.370 2 DEBUG nova.compute.manager [None req-14e68dd3-087b-4d2b-934d-1bd197804408 - - - - - -] [instance: 864bd605-1885-426f-82b2-6042de7e9f72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:01 np0005466031 nova_compute[235803]: 2025-10-02 13:01:01.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:01.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:01.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:02.721 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:01:02 np0005466031 nova_compute[235803]: 2025-10-02 13:01:02.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:02.722 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:01:03 np0005466031 nova_compute[235803]: 2025-10-02 13:01:03.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:03 np0005466031 nova_compute[235803]: 2025-10-02 13:01:03.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:03 np0005466031 nova_compute[235803]: 2025-10-02 13:01:03.691 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:03.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:03.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:05 np0005466031 nova_compute[235803]: 2025-10-02 13:01:05.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:05.724 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:05.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:01:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:05.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:06 np0005466031 nova_compute[235803]: 2025-10-02 13:01:06.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:06 np0005466031 podman[309389]: 2025-10-02 13:01:06.63269362 +0000 UTC m=+0.056498339 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:01:06 np0005466031 podman[309388]: 2025-10-02 13:01:06.652402977 +0000 UTC m=+0.078336448 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 09:01:07 np0005466031 nova_compute[235803]: 2025-10-02 13:01:07.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:01:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:07.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:01:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:07.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:01:08 np0005466031 nova_compute[235803]: 2025-10-02 13:01:08.236 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:01:08 np0005466031 nova_compute[235803]: 2025-10-02 13:01:08.236 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:01:08 np0005466031 nova_compute[235803]: 2025-10-02 13:01:08.257 2 DEBUG nova.compute.manager [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:01:08 np0005466031 nova_compute[235803]: 2025-10-02 13:01:08.353 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:01:08 np0005466031 nova_compute[235803]: 2025-10-02 13:01:08.353 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:01:08 np0005466031 nova_compute[235803]: 2025-10-02 13:01:08.359 2 DEBUG nova.virt.hardware [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:01:08 np0005466031 nova_compute[235803]: 2025-10-02 13:01:08.359 2 INFO nova.compute.claims [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Claim successful on node compute-2.ctlplane.example.com
Oct  2 09:01:08 np0005466031 nova_compute[235803]: 2025-10-02 13:01:08.459 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:01:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:01:08 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1154594658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:01:08 np0005466031 nova_compute[235803]: 2025-10-02 13:01:08.873 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:01:08 np0005466031 nova_compute[235803]: 2025-10-02 13:01:08.879 2 DEBUG nova.compute.provider_tree [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:01:08 np0005466031 nova_compute[235803]: 2025-10-02 13:01:08.916 2 DEBUG nova.scheduler.client.report [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:01:08 np0005466031 nova_compute[235803]: 2025-10-02 13:01:08.965 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:01:08 np0005466031 nova_compute[235803]: 2025-10-02 13:01:08.966 2 DEBUG nova.compute.manager [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.027 2 DEBUG nova.compute.manager [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.027 2 DEBUG nova.network.neutron [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.093 2 INFO nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.132 2 DEBUG nova.compute.manager [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.215 2 DEBUG nova.policy [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b04159d5bffe4259876ce57aec09716e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.285 2 DEBUG nova.compute.manager [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.286 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.287 2 INFO nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Creating image(s)
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.316 2 DEBUG nova.storage.rbd_utils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.344 2 DEBUG nova.storage.rbd_utils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.374 2 DEBUG nova.storage.rbd_utils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.377 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:01:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.445 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.447 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.448 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.448 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.477 2 DEBUG nova.storage.rbd_utils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.480 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:01:09 np0005466031 nova_compute[235803]: 2025-10-02 13:01:09.774 2 DEBUG nova.network.neutron [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Successfully created port: 77554a96-67eb-42e2-a771-fd06275f0dab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 09:01:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:09.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:09.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.577 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.609 2 DEBUG nova.network.neutron [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Successfully updated port: 77554a96-67eb-42e2-a771-fd06275f0dab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.646 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "refresh_cache-626d0b9d-10f5-469e-bae3-9dd7d03072c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.647 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquired lock "refresh_cache-626d0b9d-10f5-469e-bae3-9dd7d03072c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.647 2 DEBUG nova.network.neutron [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.653 2 DEBUG nova.storage.rbd_utils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] resizing rbd image 626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.692 2 DEBUG nova.compute.manager [req-9567872f-5c81-4113-a921-fe7c81e1e77f req-8ab1aac4-6594-4d14-a536-fbe08d4e7564 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Received event network-changed-77554a96-67eb-42e2-a771-fd06275f0dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.693 2 DEBUG nova.compute.manager [req-9567872f-5c81-4113-a921-fe7c81e1e77f req-8ab1aac4-6594-4d14-a536-fbe08d4e7564 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Refreshing instance network info cache due to event network-changed-77554a96-67eb-42e2-a771-fd06275f0dab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.693 2 DEBUG oslo_concurrency.lockutils [req-9567872f-5c81-4113-a921-fe7c81e1e77f req-8ab1aac4-6594-4d14-a536-fbe08d4e7564 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-626d0b9d-10f5-469e-bae3-9dd7d03072c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.828 2 DEBUG nova.objects.instance [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'migration_context' on Instance uuid 626d0b9d-10f5-469e-bae3-9dd7d03072c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.859 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.860 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Ensure instance console log exists: /var/lib/nova/instances/626d0b9d-10f5-469e-bae3-9dd7d03072c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.860 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.861 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.861 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:01:10 np0005466031 nova_compute[235803]: 2025-10-02 13:01:10.862 2 DEBUG nova.network.neutron [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.604 2 DEBUG nova.network.neutron [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Updating instance_info_cache with network_info: [{"id": "77554a96-67eb-42e2-a771-fd06275f0dab", "address": "fa:16:3e:75:04:07", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77554a96-67", "ovs_interfaceid": "77554a96-67eb-42e2-a771-fd06275f0dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.623 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Releasing lock "refresh_cache-626d0b9d-10f5-469e-bae3-9dd7d03072c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.625 2 DEBUG nova.compute.manager [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Instance network_info: |[{"id": "77554a96-67eb-42e2-a771-fd06275f0dab", "address": "fa:16:3e:75:04:07", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77554a96-67", "ovs_interfaceid": "77554a96-67eb-42e2-a771-fd06275f0dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.625 2 DEBUG oslo_concurrency.lockutils [req-9567872f-5c81-4113-a921-fe7c81e1e77f req-8ab1aac4-6594-4d14-a536-fbe08d4e7564 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-626d0b9d-10f5-469e-bae3-9dd7d03072c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.625 2 DEBUG nova.network.neutron [req-9567872f-5c81-4113-a921-fe7c81e1e77f req-8ab1aac4-6594-4d14-a536-fbe08d4e7564 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Refreshing network info cache for port 77554a96-67eb-42e2-a771-fd06275f0dab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.628 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Start _get_guest_xml network_info=[{"id": "77554a96-67eb-42e2-a771-fd06275f0dab", "address": "fa:16:3e:75:04:07", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77554a96-67", "ovs_interfaceid": "77554a96-67eb-42e2-a771-fd06275f0dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.632 2 WARNING nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.636 2 DEBUG nova.virt.libvirt.host [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.637 2 DEBUG nova.virt.libvirt.host [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.642 2 DEBUG nova.virt.libvirt.host [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.642 2 DEBUG nova.virt.libvirt.host [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.643 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.643 2 DEBUG nova.virt.hardware [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.644 2 DEBUG nova.virt.hardware [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.644 2 DEBUG nova.virt.hardware [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.644 2 DEBUG nova.virt.hardware [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.644 2 DEBUG nova.virt.hardware [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.645 2 DEBUG nova.virt.hardware [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.645 2 DEBUG nova.virt.hardware [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.645 2 DEBUG nova.virt.hardware [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.645 2 DEBUG nova.virt.hardware [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.645 2 DEBUG nova.virt.hardware [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.646 2 DEBUG nova.virt.hardware [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:01:11 np0005466031 nova_compute[235803]: 2025-10-02 13:01:11.648 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:11.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:11.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:01:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1735928497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.090 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.125 2 DEBUG nova.storage.rbd_utils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.130 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:01:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1881902432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.725 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.727 2 DEBUG nova.virt.libvirt.vif [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1203431128',display_name='tempest-ServersTestJSON-server-1203431128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1203431128',id=170,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJLQrHVV9kWL/E+7rAhNqFldcw7jCT0EgflQ8rAFqBXE1Im2rP3QZn1jBU5C+8a1H3d69b/KwNkkmq/r52lVGLr+XK1trYlHuzg081ojnTuAVyG15EFoPG/Alc0wVqQlVg==',key_name='tempest-key-693315259',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-3timroe5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:09Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=626d0b9d-10f5-469e-bae3-9dd7d03072c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77554a96-67eb-42e2-a771-fd06275f0dab", "address": "fa:16:3e:75:04:07", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77554a96-67", "ovs_interfaceid": "77554a96-67eb-42e2-a771-fd06275f0dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.727 2 DEBUG nova.network.os_vif_util [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "77554a96-67eb-42e2-a771-fd06275f0dab", "address": "fa:16:3e:75:04:07", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77554a96-67", "ovs_interfaceid": "77554a96-67eb-42e2-a771-fd06275f0dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.728 2 DEBUG nova.network.os_vif_util [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:04:07,bridge_name='br-int',has_traffic_filtering=True,id=77554a96-67eb-42e2-a771-fd06275f0dab,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77554a96-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.729 2 DEBUG nova.objects.instance [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 626d0b9d-10f5-469e-bae3-9dd7d03072c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.746 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  <uuid>626d0b9d-10f5-469e-bae3-9dd7d03072c1</uuid>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  <name>instance-000000aa</name>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServersTestJSON-server-1203431128</nova:name>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:01:11</nova:creationTime>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <nova:user uuid="b04159d5bffe4259876ce57aec09716e">tempest-ServersTestJSON-146860306-project-member</nova:user>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <nova:project uuid="a6be0e77fb5b4355b4f2276c9e57d2bd">tempest-ServersTestJSON-146860306</nova:project>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <nova:port uuid="77554a96-67eb-42e2-a771-fd06275f0dab">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <entry name="serial">626d0b9d-10f5-469e-bae3-9dd7d03072c1</entry>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <entry name="uuid">626d0b9d-10f5-469e-bae3-9dd7d03072c1</entry>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk.config">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:75:04:07"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <target dev="tap77554a96-67"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/626d0b9d-10f5-469e-bae3-9dd7d03072c1/console.log" append="off"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:01:12 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:01:12 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:01:12 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:01:12 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.748 2 DEBUG nova.compute.manager [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Preparing to wait for external event network-vif-plugged-77554a96-67eb-42e2-a771-fd06275f0dab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.749 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.750 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.750 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.751 2 DEBUG nova.virt.libvirt.vif [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1203431128',display_name='tempest-ServersTestJSON-server-1203431128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1203431128',id=170,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJLQrHVV9kWL/E+7rAhNqFldcw7jCT0EgflQ8rAFqBXE1Im2rP3QZn1jBU5C+8a1H3d69b/KwNkkmq/r52lVGLr+XK1trYlHuzg081ojnTuAVyG15EFoPG/Alc0wVqQlVg==',key_name='tempest-key-693315259',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-3timroe5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:09Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=626d0b9d-10f5-469e-bae3-9dd7d03072c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77554a96-67eb-42e2-a771-fd06275f0dab", "address": "fa:16:3e:75:04:07", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77554a96-67", "ovs_interfaceid": "77554a96-67eb-42e2-a771-fd06275f0dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.751 2 DEBUG nova.network.os_vif_util [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "77554a96-67eb-42e2-a771-fd06275f0dab", "address": "fa:16:3e:75:04:07", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77554a96-67", "ovs_interfaceid": "77554a96-67eb-42e2-a771-fd06275f0dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.752 2 DEBUG nova.network.os_vif_util [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:04:07,bridge_name='br-int',has_traffic_filtering=True,id=77554a96-67eb-42e2-a771-fd06275f0dab,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77554a96-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.753 2 DEBUG os_vif [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:04:07,bridge_name='br-int',has_traffic_filtering=True,id=77554a96-67eb-42e2-a771-fd06275f0dab,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77554a96-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.754 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77554a96-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.761 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap77554a96-67, col_values=(('external_ids', {'iface-id': '77554a96-67eb-42e2-a771-fd06275f0dab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:04:07', 'vm-uuid': '626d0b9d-10f5-469e-bae3-9dd7d03072c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:12 np0005466031 NetworkManager[44907]: <info>  [1759410072.7634] manager: (tap77554a96-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.773 2 INFO os_vif [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:04:07,bridge_name='br-int',has_traffic_filtering=True,id=77554a96-67eb-42e2-a771-fd06275f0dab,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77554a96-67')#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.854 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.855 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.855 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No VIF found with MAC fa:16:3e:75:04:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.856 2 INFO nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Using config drive#033[00m
Oct  2 09:01:12 np0005466031 nova_compute[235803]: 2025-10-02 13:01:12.886 2 DEBUG nova.storage.rbd_utils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:13 np0005466031 nova_compute[235803]: 2025-10-02 13:01:13.580 2 INFO nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Creating config drive at /var/lib/nova/instances/626d0b9d-10f5-469e-bae3-9dd7d03072c1/disk.config#033[00m
Oct  2 09:01:13 np0005466031 nova_compute[235803]: 2025-10-02 13:01:13.585 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/626d0b9d-10f5-469e-bae3-9dd7d03072c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuerv1rdo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:13 np0005466031 nova_compute[235803]: 2025-10-02 13:01:13.660 2 DEBUG nova.network.neutron [req-9567872f-5c81-4113-a921-fe7c81e1e77f req-8ab1aac4-6594-4d14-a536-fbe08d4e7564 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Updated VIF entry in instance network info cache for port 77554a96-67eb-42e2-a771-fd06275f0dab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:01:13 np0005466031 nova_compute[235803]: 2025-10-02 13:01:13.661 2 DEBUG nova.network.neutron [req-9567872f-5c81-4113-a921-fe7c81e1e77f req-8ab1aac4-6594-4d14-a536-fbe08d4e7564 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Updating instance_info_cache with network_info: [{"id": "77554a96-67eb-42e2-a771-fd06275f0dab", "address": "fa:16:3e:75:04:07", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77554a96-67", "ovs_interfaceid": "77554a96-67eb-42e2-a771-fd06275f0dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:01:13 np0005466031 nova_compute[235803]: 2025-10-02 13:01:13.679 2 DEBUG oslo_concurrency.lockutils [req-9567872f-5c81-4113-a921-fe7c81e1e77f req-8ab1aac4-6594-4d14-a536-fbe08d4e7564 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-626d0b9d-10f5-469e-bae3-9dd7d03072c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:01:13 np0005466031 nova_compute[235803]: 2025-10-02 13:01:13.721 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/626d0b9d-10f5-469e-bae3-9dd7d03072c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuerv1rdo" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:13 np0005466031 nova_compute[235803]: 2025-10-02 13:01:13.751 2 DEBUG nova.storage.rbd_utils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:13 np0005466031 nova_compute[235803]: 2025-10-02 13:01:13.754 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/626d0b9d-10f5-469e-bae3-9dd7d03072c1/disk.config 626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:13 np0005466031 nova_compute[235803]: 2025-10-02 13:01:13.912 2 DEBUG oslo_concurrency.processutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/626d0b9d-10f5-469e-bae3-9dd7d03072c1/disk.config 626d0b9d-10f5-469e-bae3-9dd7d03072c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:13 np0005466031 nova_compute[235803]: 2025-10-02 13:01:13.913 2 INFO nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Deleting local config drive /var/lib/nova/instances/626d0b9d-10f5-469e-bae3-9dd7d03072c1/disk.config because it was imported into RBD.#033[00m
Oct  2 09:01:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:13.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:13 np0005466031 kernel: tap77554a96-67: entered promiscuous mode
Oct  2 09:01:13 np0005466031 ovn_controller[132413]: 2025-10-02T13:01:13Z|00642|binding|INFO|Claiming lport 77554a96-67eb-42e2-a771-fd06275f0dab for this chassis.
Oct  2 09:01:13 np0005466031 ovn_controller[132413]: 2025-10-02T13:01:13Z|00643|binding|INFO|77554a96-67eb-42e2-a771-fd06275f0dab: Claiming fa:16:3e:75:04:07 10.100.0.6
Oct  2 09:01:13 np0005466031 NetworkManager[44907]: <info>  [1759410073.9615] manager: (tap77554a96-67): new Tun device (/org/freedesktop/NetworkManager/Devices/291)
Oct  2 09:01:13 np0005466031 nova_compute[235803]: 2025-10-02 13:01:13.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:13.973 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:04:07 10.100.0.6'], port_security=['fa:16:3e:75:04:07 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '626d0b9d-10f5-469e-bae3-9dd7d03072c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=77554a96-67eb-42e2-a771-fd06275f0dab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:01:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:13.974 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 77554a96-67eb-42e2-a771-fd06275f0dab in datapath 052f341a-0628-4183-a5e0-76312bc986c6 bound to our chassis#033[00m
Oct  2 09:01:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:13.975 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 052f341a-0628-4183-a5e0-76312bc986c6#033[00m
Oct  2 09:01:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:13.986 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[57212171-10b5-42ac-b289-bf49c816b63b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:13.986 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap052f341a-01 in ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:01:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:13.989 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap052f341a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:01:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:13.989 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6278b052-dc76-41fd-b8fd-4a212269ae74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:13 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:13.989 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6953a554-d7f8-48af-ac59-7358d64dde8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:13 np0005466031 systemd-machined[192227]: New machine qemu-75-instance-000000aa.
Oct  2 09:01:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:13.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.000 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[bd734617-55e9-41d6-8ae7-1699c37de1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.026 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[445f04c1-023b-4d8f-a4c7-0fdd7ba555f6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:14 np0005466031 systemd[1]: Started Virtual Machine qemu-75-instance-000000aa.
Oct  2 09:01:14 np0005466031 systemd-udevd[309810]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:14 np0005466031 ovn_controller[132413]: 2025-10-02T13:01:14Z|00644|binding|INFO|Setting lport 77554a96-67eb-42e2-a771-fd06275f0dab ovn-installed in OVS
Oct  2 09:01:14 np0005466031 ovn_controller[132413]: 2025-10-02T13:01:14Z|00645|binding|INFO|Setting lport 77554a96-67eb-42e2-a771-fd06275f0dab up in Southbound
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.059 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ada648d0-5f2c-42c8-981b-9fa0ac1c446d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:14 np0005466031 systemd-udevd[309815]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.064 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f81bd083-4182-4f59-9b55-df167b689537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:14 np0005466031 NetworkManager[44907]: <info>  [1759410074.0689] manager: (tap052f341a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/292)
Oct  2 09:01:14 np0005466031 NetworkManager[44907]: <info>  [1759410074.0707] device (tap77554a96-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:01:14 np0005466031 NetworkManager[44907]: <info>  [1759410074.0725] device (tap77554a96-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.102 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[486c78ff-4c78-4fe0-86fa-8199ef626089]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.105 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e17db3e7-5759-481f-89f0-d319168e0859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:14 np0005466031 NetworkManager[44907]: <info>  [1759410074.1313] device (tap052f341a-00): carrier: link connected
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.140 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[64b42435-5b5c-4cb3-b0a1-869d21018e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.157 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[46368306-ad81-4f86-9cfa-4ccb66254e98]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792972, 'reachable_time': 19047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309838, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.171 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ed15333d-75fe-4c10-8cb1-a3f37eee03c7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:a55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 792972, 'tstamp': 792972}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309839, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.189 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[618974b7-4751-4f88-87f1-16c9fc18836a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792972, 'reachable_time': 19047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309840, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.218 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c358560e-b41e-408d-b3d5-c62aa19a91b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.276 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5ab482-6928-48c8-966f-1f96e57536fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.277 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.277 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.278 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052f341a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:14 np0005466031 kernel: tap052f341a-00: entered promiscuous mode
Oct  2 09:01:14 np0005466031 NetworkManager[44907]: <info>  [1759410074.2817] manager: (tap052f341a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.284 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap052f341a-00, col_values=(('external_ids', {'iface-id': '61e15bc4-7cff-4f2c-a6c4-d987859313b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:14 np0005466031 ovn_controller[132413]: 2025-10-02T13:01:14Z|00646|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.286 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.287 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[67681be0-9ea9-4982-b5c2-cd97def6db4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.289 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-052f341a-0628-4183-a5e0-76312bc986c6
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 052f341a-0628-4183-a5e0-76312bc986c6
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:01:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:14.289 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'env', 'PROCESS_TAG=haproxy-052f341a-0628-4183-a5e0-76312bc986c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/052f341a-0628-4183-a5e0-76312bc986c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.310 2 DEBUG nova.compute.manager [req-03d46b9a-836f-43e3-a206-40da82d923b0 req-ec840b90-f0d9-415a-bd27-07b9a24cc76d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Received event network-vif-plugged-77554a96-67eb-42e2-a771-fd06275f0dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.310 2 DEBUG oslo_concurrency.lockutils [req-03d46b9a-836f-43e3-a206-40da82d923b0 req-ec840b90-f0d9-415a-bd27-07b9a24cc76d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.311 2 DEBUG oslo_concurrency.lockutils [req-03d46b9a-836f-43e3-a206-40da82d923b0 req-ec840b90-f0d9-415a-bd27-07b9a24cc76d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.311 2 DEBUG oslo_concurrency.lockutils [req-03d46b9a-836f-43e3-a206-40da82d923b0 req-ec840b90-f0d9-415a-bd27-07b9a24cc76d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.312 2 DEBUG nova.compute.manager [req-03d46b9a-836f-43e3-a206-40da82d923b0 req-ec840b90-f0d9-415a-bd27-07b9a24cc76d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Processing event network-vif-plugged-77554a96-67eb-42e2-a771-fd06275f0dab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:01:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:14 np0005466031 podman[309914]: 2025-10-02 13:01:14.627992097 +0000 UTC m=+0.045849382 container create dd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 09:01:14 np0005466031 systemd[1]: Started libpod-conmon-dd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3.scope.
Oct  2 09:01:14 np0005466031 podman[309914]: 2025-10-02 13:01:14.603421309 +0000 UTC m=+0.021278614 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:01:14 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:01:14 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ab6bc24f0e60cb30df9bc4c3ec817652c1272373feb66daafd9c37744526c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:01:14 np0005466031 podman[309914]: 2025-10-02 13:01:14.736411421 +0000 UTC m=+0.154268726 container init dd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 09:01:14 np0005466031 podman[309914]: 2025-10-02 13:01:14.741412605 +0000 UTC m=+0.159269890 container start dd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:01:14 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[309930]: [NOTICE]   (309934) : New worker (309936) forked
Oct  2 09:01:14 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[309930]: [NOTICE]   (309934) : Loading success.
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.870 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410074.8703365, 626d0b9d-10f5-469e-bae3-9dd7d03072c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.871 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] VM Started (Lifecycle Event)#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.873 2 DEBUG nova.compute.manager [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.876 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.880 2 INFO nova.virt.libvirt.driver [-] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Instance spawned successfully.#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.880 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.893 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.899 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.903 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.904 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.904 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.904 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.905 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.905 2 DEBUG nova.virt.libvirt.driver [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.931 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.932 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410074.8704717, 626d0b9d-10f5-469e-bae3-9dd7d03072c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.932 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.972 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.980 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410074.875795, 626d0b9d-10f5-469e-bae3-9dd7d03072c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.981 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.986 2 INFO nova.compute.manager [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Took 5.70 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:01:14 np0005466031 nova_compute[235803]: 2025-10-02 13:01:14.986 2 DEBUG nova.compute.manager [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:15 np0005466031 nova_compute[235803]: 2025-10-02 13:01:15.028 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:15 np0005466031 nova_compute[235803]: 2025-10-02 13:01:15.032 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:01:15 np0005466031 nova_compute[235803]: 2025-10-02 13:01:15.094 2 INFO nova.compute.manager [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Took 6.77 seconds to build instance.#033[00m
Oct  2 09:01:15 np0005466031 nova_compute[235803]: 2025-10-02 13:01:15.123 2 DEBUG oslo_concurrency.lockutils [None req-dab95797-59cd-4966-9a97-32b58e1eebeb b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:15 np0005466031 nova_compute[235803]: 2025-10-02 13:01:15.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:15 np0005466031 nova_compute[235803]: 2025-10-02 13:01:15.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:15 np0005466031 nova_compute[235803]: 2025-10-02 13:01:15.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:15 np0005466031 nova_compute[235803]: 2025-10-02 13:01:15.682 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:15 np0005466031 nova_compute[235803]: 2025-10-02 13:01:15.682 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:15 np0005466031 nova_compute[235803]: 2025-10-02 13:01:15.683 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:15 np0005466031 nova_compute[235803]: 2025-10-02 13:01:15.683 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:01:15 np0005466031 nova_compute[235803]: 2025-10-02 13:01:15.683 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:15.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:01:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:15.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:01:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2598328140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.142 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.243 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.244 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.388 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.389 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4109MB free_disk=20.841327667236328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.389 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.389 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.421 2 DEBUG nova.compute.manager [req-4d52eedd-962b-4503-b761-c1d2fcaf6080 req-1c41390d-634f-46b3-b974-b20e909c5ca9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Received event network-vif-plugged-77554a96-67eb-42e2-a771-fd06275f0dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.422 2 DEBUG oslo_concurrency.lockutils [req-4d52eedd-962b-4503-b761-c1d2fcaf6080 req-1c41390d-634f-46b3-b974-b20e909c5ca9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.422 2 DEBUG oslo_concurrency.lockutils [req-4d52eedd-962b-4503-b761-c1d2fcaf6080 req-1c41390d-634f-46b3-b974-b20e909c5ca9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.422 2 DEBUG oslo_concurrency.lockutils [req-4d52eedd-962b-4503-b761-c1d2fcaf6080 req-1c41390d-634f-46b3-b974-b20e909c5ca9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.422 2 DEBUG nova.compute.manager [req-4d52eedd-962b-4503-b761-c1d2fcaf6080 req-1c41390d-634f-46b3-b974-b20e909c5ca9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] No waiting events found dispatching network-vif-plugged-77554a96-67eb-42e2-a771-fd06275f0dab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.423 2 WARNING nova.compute.manager [req-4d52eedd-962b-4503-b761-c1d2fcaf6080 req-1c41390d-634f-46b3-b974-b20e909c5ca9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Received unexpected event network-vif-plugged-77554a96-67eb-42e2-a771-fd06275f0dab for instance with vm_state active and task_state None.#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.517 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 626d0b9d-10f5-469e-bae3-9dd7d03072c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.518 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.518 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:01:16 np0005466031 nova_compute[235803]: 2025-10-02 13:01:16.631 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:01:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1283251929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.058 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.064 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.087 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.118 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.119 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.120 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.812 2 DEBUG oslo_concurrency.lockutils [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.813 2 DEBUG oslo_concurrency.lockutils [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.813 2 DEBUG oslo_concurrency.lockutils [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.813 2 DEBUG oslo_concurrency.lockutils [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.813 2 DEBUG oslo_concurrency.lockutils [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.815 2 INFO nova.compute.manager [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Terminating instance#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.816 2 DEBUG nova.compute.manager [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:01:17 np0005466031 kernel: tap77554a96-67 (unregistering): left promiscuous mode
Oct  2 09:01:17 np0005466031 NetworkManager[44907]: <info>  [1759410077.8535] device (tap77554a96-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:17 np0005466031 ovn_controller[132413]: 2025-10-02T13:01:17Z|00647|binding|INFO|Releasing lport 77554a96-67eb-42e2-a771-fd06275f0dab from this chassis (sb_readonly=0)
Oct  2 09:01:17 np0005466031 ovn_controller[132413]: 2025-10-02T13:01:17Z|00648|binding|INFO|Setting lport 77554a96-67eb-42e2-a771-fd06275f0dab down in Southbound
Oct  2 09:01:17 np0005466031 ovn_controller[132413]: 2025-10-02T13:01:17Z|00649|binding|INFO|Removing iface tap77554a96-67 ovn-installed in OVS
Oct  2 09:01:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:17.867 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:04:07 10.100.0.6'], port_security=['fa:16:3e:75:04:07 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '626d0b9d-10f5-469e-bae3-9dd7d03072c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=77554a96-67eb-42e2-a771-fd06275f0dab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:01:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:17.868 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 77554a96-67eb-42e2-a771-fd06275f0dab in datapath 052f341a-0628-4183-a5e0-76312bc986c6 unbound from our chassis#033[00m
Oct  2 09:01:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:17.870 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 052f341a-0628-4183-a5e0-76312bc986c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:01:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:17.870 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[05431899-714b-4c17-923f-577e7d2ce5fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:17.871 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 namespace which is not needed anymore#033[00m
Oct  2 09:01:17 np0005466031 nova_compute[235803]: 2025-10-02 13:01:17.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:17 np0005466031 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000aa.scope: Deactivated successfully.
Oct  2 09:01:17 np0005466031 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000aa.scope: Consumed 3.734s CPU time.
Oct  2 09:01:17 np0005466031 systemd-machined[192227]: Machine qemu-75-instance-000000aa terminated.
Oct  2 09:01:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:17.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:17 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[309930]: [NOTICE]   (309934) : haproxy version is 2.8.14-c23fe91
Oct  2 09:01:17 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[309930]: [NOTICE]   (309934) : path to executable is /usr/sbin/haproxy
Oct  2 09:01:17 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[309930]: [WARNING]  (309934) : Exiting Master process...
Oct  2 09:01:17 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[309930]: [WARNING]  (309934) : Exiting Master process...
Oct  2 09:01:17 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[309930]: [ALERT]    (309934) : Current worker (309936) exited with code 143 (Terminated)
Oct  2 09:01:17 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[309930]: [WARNING]  (309934) : All workers exited. Exiting... (0)
Oct  2 09:01:17 np0005466031 systemd[1]: libpod-dd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3.scope: Deactivated successfully.
Oct  2 09:01:17 np0005466031 podman[310016]: 2025-10-02 13:01:17.98991483 +0000 UTC m=+0.039115668 container died dd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 09:01:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:18.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:18 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3-userdata-shm.mount: Deactivated successfully.
Oct  2 09:01:18 np0005466031 systemd[1]: var-lib-containers-storage-overlay-b5ab6bc24f0e60cb30df9bc4c3ec817652c1272373feb66daafd9c37744526c8-merged.mount: Deactivated successfully.
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:18 np0005466031 podman[310016]: 2025-10-02 13:01:18.042958658 +0000 UTC m=+0.092159496 container cleanup dd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 09:01:18 np0005466031 systemd[1]: libpod-conmon-dd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3.scope: Deactivated successfully.
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.054 2 INFO nova.virt.libvirt.driver [-] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Instance destroyed successfully.#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.055 2 DEBUG nova.objects.instance [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'resources' on Instance uuid 626d0b9d-10f5-469e-bae3-9dd7d03072c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.070 2 DEBUG nova.virt.libvirt.vif [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1203431128',display_name='tempest-ServersTestJSON-server-1203431128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1203431128',id=170,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJLQrHVV9kWL/E+7rAhNqFldcw7jCT0EgflQ8rAFqBXE1Im2rP3QZn1jBU5C+8a1H3d69b/KwNkkmq/r52lVGLr+XK1trYlHuzg081ojnTuAVyG15EFoPG/Alc0wVqQlVg==',key_name='tempest-key-693315259',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:01:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-3timroe5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:01:15Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=626d0b9d-10f5-469e-bae3-9dd7d03072c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77554a96-67eb-42e2-a771-fd06275f0dab", "address": "fa:16:3e:75:04:07", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77554a96-67", "ovs_interfaceid": "77554a96-67eb-42e2-a771-fd06275f0dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.071 2 DEBUG nova.network.os_vif_util [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "77554a96-67eb-42e2-a771-fd06275f0dab", "address": "fa:16:3e:75:04:07", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77554a96-67", "ovs_interfaceid": "77554a96-67eb-42e2-a771-fd06275f0dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.072 2 DEBUG nova.network.os_vif_util [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:04:07,bridge_name='br-int',has_traffic_filtering=True,id=77554a96-67eb-42e2-a771-fd06275f0dab,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77554a96-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.072 2 DEBUG os_vif [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:04:07,bridge_name='br-int',has_traffic_filtering=True,id=77554a96-67eb-42e2-a771-fd06275f0dab,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77554a96-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77554a96-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.080 2 INFO os_vif [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:04:07,bridge_name='br-int',has_traffic_filtering=True,id=77554a96-67eb-42e2-a771-fd06275f0dab,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77554a96-67')#033[00m
Oct  2 09:01:18 np0005466031 podman[310052]: 2025-10-02 13:01:18.108077854 +0000 UTC m=+0.041032543 container remove dd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 09:01:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:18.114 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[40c4efae-0285-45d1-99ac-55d9550761c3]: (4, ('Thu Oct  2 01:01:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 (dd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3)\ndd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3\nThu Oct  2 01:01:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 (dd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3)\ndd00cc90b54d2404045dea4561f2e7454474f02daf083934cf6ed4b1e6bddfd3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:18.116 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[28be3764-d620-4963-99eb-ed6e6b458186]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:18.117 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:18 np0005466031 kernel: tap052f341a-00: left promiscuous mode
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.136 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.137 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.137 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:01:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:18.138 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0033e6e8-1ea5-4b79-b6c1-1348a5c08276]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.161 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.161 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.162 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:18.178 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[00c2ac7d-8bf2-405d-8a94-652b7db77b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:18.179 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1bff7832-5e10-4e17-b901-022a4d63ee3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:18.198 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[285fe470-5877-42e1-9360-95447874faa3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 792964, 'reachable_time': 39749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310084, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:18 np0005466031 systemd[1]: run-netns-ovnmeta\x2d052f341a\x2d0628\x2d4183\x2da5e0\x2d76312bc986c6.mount: Deactivated successfully.
Oct  2 09:01:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:18.202 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:01:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:18.203 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[4c380e79-f11b-4fc3-bd8d-14a2de40a55d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.535 2 DEBUG nova.compute.manager [req-3f39dc72-9c4c-42f8-baa1-10be167e5fc5 req-a9d23c1a-a96a-48e3-a7f2-0c7e069967f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Received event network-vif-unplugged-77554a96-67eb-42e2-a771-fd06275f0dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.536 2 DEBUG oslo_concurrency.lockutils [req-3f39dc72-9c4c-42f8-baa1-10be167e5fc5 req-a9d23c1a-a96a-48e3-a7f2-0c7e069967f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.536 2 DEBUG oslo_concurrency.lockutils [req-3f39dc72-9c4c-42f8-baa1-10be167e5fc5 req-a9d23c1a-a96a-48e3-a7f2-0c7e069967f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.537 2 DEBUG oslo_concurrency.lockutils [req-3f39dc72-9c4c-42f8-baa1-10be167e5fc5 req-a9d23c1a-a96a-48e3-a7f2-0c7e069967f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.537 2 DEBUG nova.compute.manager [req-3f39dc72-9c4c-42f8-baa1-10be167e5fc5 req-a9d23c1a-a96a-48e3-a7f2-0c7e069967f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] No waiting events found dispatching network-vif-unplugged-77554a96-67eb-42e2-a771-fd06275f0dab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.537 2 DEBUG nova.compute.manager [req-3f39dc72-9c4c-42f8-baa1-10be167e5fc5 req-a9d23c1a-a96a-48e3-a7f2-0c7e069967f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Received event network-vif-unplugged-77554a96-67eb-42e2-a771-fd06275f0dab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.537 2 DEBUG nova.compute.manager [req-3f39dc72-9c4c-42f8-baa1-10be167e5fc5 req-a9d23c1a-a96a-48e3-a7f2-0c7e069967f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Received event network-vif-plugged-77554a96-67eb-42e2-a771-fd06275f0dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.538 2 DEBUG oslo_concurrency.lockutils [req-3f39dc72-9c4c-42f8-baa1-10be167e5fc5 req-a9d23c1a-a96a-48e3-a7f2-0c7e069967f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.538 2 DEBUG oslo_concurrency.lockutils [req-3f39dc72-9c4c-42f8-baa1-10be167e5fc5 req-a9d23c1a-a96a-48e3-a7f2-0c7e069967f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.538 2 DEBUG oslo_concurrency.lockutils [req-3f39dc72-9c4c-42f8-baa1-10be167e5fc5 req-a9d23c1a-a96a-48e3-a7f2-0c7e069967f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.538 2 DEBUG nova.compute.manager [req-3f39dc72-9c4c-42f8-baa1-10be167e5fc5 req-a9d23c1a-a96a-48e3-a7f2-0c7e069967f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] No waiting events found dispatching network-vif-plugged-77554a96-67eb-42e2-a771-fd06275f0dab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:01:18 np0005466031 nova_compute[235803]: 2025-10-02 13:01:18.539 2 WARNING nova.compute.manager [req-3f39dc72-9c4c-42f8-baa1-10be167e5fc5 req-a9d23c1a-a96a-48e3-a7f2-0c7e069967f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Received unexpected event network-vif-plugged-77554a96-67eb-42e2-a771-fd06275f0dab for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:01:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:01:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4245233163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:01:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:19.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:20.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:20 np0005466031 nova_compute[235803]: 2025-10-02 13:01:20.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:21 np0005466031 nova_compute[235803]: 2025-10-02 13:01:21.236 2 INFO nova.virt.libvirt.driver [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Deleting instance files /var/lib/nova/instances/626d0b9d-10f5-469e-bae3-9dd7d03072c1_del#033[00m
Oct  2 09:01:21 np0005466031 nova_compute[235803]: 2025-10-02 13:01:21.237 2 INFO nova.virt.libvirt.driver [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Deletion of /var/lib/nova/instances/626d0b9d-10f5-469e-bae3-9dd7d03072c1_del complete#033[00m
Oct  2 09:01:21 np0005466031 nova_compute[235803]: 2025-10-02 13:01:21.321 2 INFO nova.compute.manager [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Took 3.51 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:01:21 np0005466031 nova_compute[235803]: 2025-10-02 13:01:21.322 2 DEBUG oslo.service.loopingcall [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:01:21 np0005466031 nova_compute[235803]: 2025-10-02 13:01:21.322 2 DEBUG nova.compute.manager [-] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:01:21 np0005466031 nova_compute[235803]: 2025-10-02 13:01:21.323 2 DEBUG nova.network.neutron [-] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:01:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:21.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:01:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:22.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:22 np0005466031 nova_compute[235803]: 2025-10-02 13:01:22.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:23 np0005466031 nova_compute[235803]: 2025-10-02 13:01:23.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:23 np0005466031 nova_compute[235803]: 2025-10-02 13:01:23.141 2 DEBUG nova.network.neutron [-] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:01:23 np0005466031 nova_compute[235803]: 2025-10-02 13:01:23.232 2 INFO nova.compute.manager [-] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Took 1.91 seconds to deallocate network for instance.#033[00m
Oct  2 09:01:23 np0005466031 nova_compute[235803]: 2025-10-02 13:01:23.379 2 DEBUG oslo_concurrency.lockutils [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:23 np0005466031 nova_compute[235803]: 2025-10-02 13:01:23.379 2 DEBUG oslo_concurrency.lockutils [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:23 np0005466031 nova_compute[235803]: 2025-10-02 13:01:23.392 2 DEBUG nova.compute.manager [req-2ecae896-c616-4832-bc70-83c77ee534cf req-b7d705b4-0449-4ce4-8185-1d5e7ea6f33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Received event network-vif-deleted-77554a96-67eb-42e2-a771-fd06275f0dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:23 np0005466031 nova_compute[235803]: 2025-10-02 13:01:23.464 2 DEBUG oslo_concurrency.processutils [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:23 np0005466031 nova_compute[235803]: 2025-10-02 13:01:23.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:23 np0005466031 nova_compute[235803]: 2025-10-02 13:01:23.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:01:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:01:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4026947805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:01:23 np0005466031 nova_compute[235803]: 2025-10-02 13:01:23.936 2 DEBUG oslo_concurrency.processutils [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:23.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:23 np0005466031 nova_compute[235803]: 2025-10-02 13:01:23.943 2 DEBUG nova.compute.provider_tree [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:01:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:01:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:24.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:24 np0005466031 nova_compute[235803]: 2025-10-02 13:01:24.169 2 DEBUG nova.scheduler.client.report [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:01:24 np0005466031 nova_compute[235803]: 2025-10-02 13:01:24.228 2 DEBUG oslo_concurrency.lockutils [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:24 np0005466031 nova_compute[235803]: 2025-10-02 13:01:24.310 2 INFO nova.scheduler.client.report [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Deleted allocations for instance 626d0b9d-10f5-469e-bae3-9dd7d03072c1#033[00m
Oct  2 09:01:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:24 np0005466031 nova_compute[235803]: 2025-10-02 13:01:24.484 2 DEBUG oslo_concurrency.lockutils [None req-faced15a-6ef5-4f29-befb-0a7b91c34f75 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "626d0b9d-10f5-469e-bae3-9dd7d03072c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:25 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:01:25 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:01:25 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:01:25 np0005466031 nova_compute[235803]: 2025-10-02 13:01:25.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:25.869 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:25.870 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:25.870 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:25.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:26.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:01:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2992057832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:01:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:27.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:28.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:28 np0005466031 nova_compute[235803]: 2025-10-02 13:01:28.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:01:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:29.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:30.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:30 np0005466031 nova_compute[235803]: 2025-10-02 13:01:30.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:30 np0005466031 podman[310294]: 2025-10-02 13:01:30.638354874 +0000 UTC m=+0.064094267 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:01:30 np0005466031 podman[310295]: 2025-10-02 13:01:30.670853261 +0000 UTC m=+0.092278480 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 09:01:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:31.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:32.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:32 np0005466031 nova_compute[235803]: 2025-10-02 13:01:32.106 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "4d1b443f-821c-4181-bbff-248387e3cdec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:32 np0005466031 nova_compute[235803]: 2025-10-02 13:01:32.107 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:32 np0005466031 nova_compute[235803]: 2025-10-02 13:01:32.138 2 DEBUG nova.compute.manager [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:01:32 np0005466031 nova_compute[235803]: 2025-10-02 13:01:32.235 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:32 np0005466031 nova_compute[235803]: 2025-10-02 13:01:32.236 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:32 np0005466031 nova_compute[235803]: 2025-10-02 13:01:32.243 2 DEBUG nova.virt.hardware [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:01:32 np0005466031 nova_compute[235803]: 2025-10-02 13:01:32.244 2 INFO nova.compute.claims [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:01:32 np0005466031 nova_compute[235803]: 2025-10-02 13:01:32.543 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:01:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/524146024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:01:32 np0005466031 nova_compute[235803]: 2025-10-02 13:01:32.997 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.003 2 DEBUG nova.compute.provider_tree [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.032 2 DEBUG nova.scheduler.client.report [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.051 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410078.0510802, 626d0b9d-10f5-469e-bae3-9dd7d03072c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.052 2 INFO nova.compute.manager [-] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.060 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.061 2 DEBUG nova.compute.manager [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.089 2 DEBUG nova.compute.manager [None req-cac35333-2e0a-438d-9e6e-c064a0265d04 - - - - - -] [instance: 626d0b9d-10f5-469e-bae3-9dd7d03072c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.132 2 DEBUG nova.compute.manager [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.133 2 DEBUG nova.network.neutron [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.172 2 INFO nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.191 2 DEBUG nova.compute.manager [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.355 2 DEBUG nova.compute.manager [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.357 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.357 2 INFO nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Creating image(s)#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.381 2 DEBUG nova.storage.rbd_utils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4d1b443f-821c-4181-bbff-248387e3cdec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.405 2 DEBUG nova.storage.rbd_utils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4d1b443f-821c-4181-bbff-248387e3cdec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.430 2 DEBUG nova.storage.rbd_utils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4d1b443f-821c-4181-bbff-248387e3cdec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.434 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.466 2 DEBUG nova.policy [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b04159d5bffe4259876ce57aec09716e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.504 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.504 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.505 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.505 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.530 2 DEBUG nova.storage.rbd_utils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4d1b443f-821c-4181-bbff-248387e3cdec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:33 np0005466031 nova_compute[235803]: 2025-10-02 13:01:33.535 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4d1b443f-821c-4181-bbff-248387e3cdec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:33.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:34.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:34 np0005466031 nova_compute[235803]: 2025-10-02 13:01:34.245 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4d1b443f-821c-4181-bbff-248387e3cdec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.711s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:34 np0005466031 nova_compute[235803]: 2025-10-02 13:01:34.317 2 DEBUG nova.storage.rbd_utils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] resizing rbd image 4d1b443f-821c-4181-bbff-248387e3cdec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:01:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:34 np0005466031 nova_compute[235803]: 2025-10-02 13:01:34.479 2 DEBUG nova.objects.instance [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'migration_context' on Instance uuid 4d1b443f-821c-4181-bbff-248387e3cdec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:01:34 np0005466031 nova_compute[235803]: 2025-10-02 13:01:34.493 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:01:34 np0005466031 nova_compute[235803]: 2025-10-02 13:01:34.493 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Ensure instance console log exists: /var/lib/nova/instances/4d1b443f-821c-4181-bbff-248387e3cdec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:01:34 np0005466031 nova_compute[235803]: 2025-10-02 13:01:34.494 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:34 np0005466031 nova_compute[235803]: 2025-10-02 13:01:34.494 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:34 np0005466031 nova_compute[235803]: 2025-10-02 13:01:34.495 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:34 np0005466031 nova_compute[235803]: 2025-10-02 13:01:34.664 2 DEBUG nova.network.neutron [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Successfully created port: 68623d1a-0888-456f-80bc-cddf35b56984 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:01:35 np0005466031 nova_compute[235803]: 2025-10-02 13:01:35.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:01:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:35.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:36.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:36 np0005466031 nova_compute[235803]: 2025-10-02 13:01:36.965 2 DEBUG nova.network.neutron [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Successfully updated port: 68623d1a-0888-456f-80bc-cddf35b56984 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:01:36 np0005466031 nova_compute[235803]: 2025-10-02 13:01:36.997 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "refresh_cache-4d1b443f-821c-4181-bbff-248387e3cdec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:01:36 np0005466031 nova_compute[235803]: 2025-10-02 13:01:36.997 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquired lock "refresh_cache-4d1b443f-821c-4181-bbff-248387e3cdec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:01:36 np0005466031 nova_compute[235803]: 2025-10-02 13:01:36.998 2 DEBUG nova.network.neutron [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:01:37 np0005466031 nova_compute[235803]: 2025-10-02 13:01:37.118 2 DEBUG nova.compute.manager [req-76fa86dd-f0b6-435e-8df6-18b7265ed037 req-69c36bba-2627-44e7-923b-4020c401508f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Received event network-changed-68623d1a-0888-456f-80bc-cddf35b56984 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:37 np0005466031 nova_compute[235803]: 2025-10-02 13:01:37.119 2 DEBUG nova.compute.manager [req-76fa86dd-f0b6-435e-8df6-18b7265ed037 req-69c36bba-2627-44e7-923b-4020c401508f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Refreshing instance network info cache due to event network-changed-68623d1a-0888-456f-80bc-cddf35b56984. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:01:37 np0005466031 nova_compute[235803]: 2025-10-02 13:01:37.119 2 DEBUG oslo_concurrency.lockutils [req-76fa86dd-f0b6-435e-8df6-18b7265ed037 req-69c36bba-2627-44e7-923b-4020c401508f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4d1b443f-821c-4181-bbff-248387e3cdec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:01:37 np0005466031 nova_compute[235803]: 2025-10-02 13:01:37.246 2 DEBUG nova.network.neutron [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:01:37 np0005466031 podman[310528]: 2025-10-02 13:01:37.633313241 +0000 UTC m=+0.059233367 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:01:37 np0005466031 podman[310533]: 2025-10-02 13:01:37.65443425 +0000 UTC m=+0.079847562 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:01:37 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:01:37 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:01:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:37.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:38.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:38 np0005466031 nova_compute[235803]: 2025-10-02 13:01:38.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:38 np0005466031 nova_compute[235803]: 2025-10-02 13:01:38.927 2 DEBUG nova.network.neutron [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Updating instance_info_cache with network_info: [{"id": "68623d1a-0888-456f-80bc-cddf35b56984", "address": "fa:16:3e:b5:1f:d4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68623d1a-08", "ovs_interfaceid": "68623d1a-0888-456f-80bc-cddf35b56984", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:01:38 np0005466031 nova_compute[235803]: 2025-10-02 13:01:38.997 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Releasing lock "refresh_cache-4d1b443f-821c-4181-bbff-248387e3cdec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:01:38 np0005466031 nova_compute[235803]: 2025-10-02 13:01:38.997 2 DEBUG nova.compute.manager [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Instance network_info: |[{"id": "68623d1a-0888-456f-80bc-cddf35b56984", "address": "fa:16:3e:b5:1f:d4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68623d1a-08", "ovs_interfaceid": "68623d1a-0888-456f-80bc-cddf35b56984", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:01:38 np0005466031 nova_compute[235803]: 2025-10-02 13:01:38.997 2 DEBUG oslo_concurrency.lockutils [req-76fa86dd-f0b6-435e-8df6-18b7265ed037 req-69c36bba-2627-44e7-923b-4020c401508f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4d1b443f-821c-4181-bbff-248387e3cdec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:01:38 np0005466031 nova_compute[235803]: 2025-10-02 13:01:38.998 2 DEBUG nova.network.neutron [req-76fa86dd-f0b6-435e-8df6-18b7265ed037 req-69c36bba-2627-44e7-923b-4020c401508f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Refreshing network info cache for port 68623d1a-0888-456f-80bc-cddf35b56984 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.000 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Start _get_guest_xml network_info=[{"id": "68623d1a-0888-456f-80bc-cddf35b56984", "address": "fa:16:3e:b5:1f:d4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68623d1a-08", "ovs_interfaceid": "68623d1a-0888-456f-80bc-cddf35b56984", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.005 2 WARNING nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.013 2 DEBUG nova.virt.libvirt.host [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.014 2 DEBUG nova.virt.libvirt.host [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.016 2 DEBUG nova.virt.libvirt.host [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.017 2 DEBUG nova.virt.libvirt.host [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.018 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.018 2 DEBUG nova.virt.hardware [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.018 2 DEBUG nova.virt.hardware [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.018 2 DEBUG nova.virt.hardware [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.019 2 DEBUG nova.virt.hardware [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.019 2 DEBUG nova.virt.hardware [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.019 2 DEBUG nova.virt.hardware [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.019 2 DEBUG nova.virt.hardware [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.020 2 DEBUG nova.virt.hardware [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.020 2 DEBUG nova.virt.hardware [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.020 2 DEBUG nova.virt.hardware [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.020 2 DEBUG nova.virt.hardware [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.023 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:01:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2497424559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.457 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.485 2 DEBUG nova.storage.rbd_utils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4d1b443f-821c-4181-bbff-248387e3cdec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.489 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:01:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3600982715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.925 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.926 2 DEBUG nova.virt.libvirt.vif [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1224939934',display_name='tempest-ServersTestJSON-server-1224939934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1224939934',id=172,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-uywtjk9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:33Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=4d1b443f-821c-4181-bbff-248387e3cdec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68623d1a-0888-456f-80bc-cddf35b56984", "address": "fa:16:3e:b5:1f:d4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68623d1a-08", "ovs_interfaceid": "68623d1a-0888-456f-80bc-cddf35b56984", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.927 2 DEBUG nova.network.os_vif_util [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "68623d1a-0888-456f-80bc-cddf35b56984", "address": "fa:16:3e:b5:1f:d4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68623d1a-08", "ovs_interfaceid": "68623d1a-0888-456f-80bc-cddf35b56984", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.927 2 DEBUG nova.network.os_vif_util [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:1f:d4,bridge_name='br-int',has_traffic_filtering=True,id=68623d1a-0888-456f-80bc-cddf35b56984,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68623d1a-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.928 2 DEBUG nova.objects.instance [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d1b443f-821c-4181-bbff-248387e3cdec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:01:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:39.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.967 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  <uuid>4d1b443f-821c-4181-bbff-248387e3cdec</uuid>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  <name>instance-000000ac</name>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServersTestJSON-server-1224939934</nova:name>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:01:39</nova:creationTime>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <nova:user uuid="b04159d5bffe4259876ce57aec09716e">tempest-ServersTestJSON-146860306-project-member</nova:user>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <nova:project uuid="a6be0e77fb5b4355b4f2276c9e57d2bd">tempest-ServersTestJSON-146860306</nova:project>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <nova:port uuid="68623d1a-0888-456f-80bc-cddf35b56984">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <entry name="serial">4d1b443f-821c-4181-bbff-248387e3cdec</entry>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <entry name="uuid">4d1b443f-821c-4181-bbff-248387e3cdec</entry>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/4d1b443f-821c-4181-bbff-248387e3cdec_disk">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/4d1b443f-821c-4181-bbff-248387e3cdec_disk.config">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:b5:1f:d4"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <target dev="tap68623d1a-08"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/4d1b443f-821c-4181-bbff-248387e3cdec/console.log" append="off"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:01:39 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:01:39 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:01:39 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:01:39 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.969 2 DEBUG nova.compute.manager [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Preparing to wait for external event network-vif-plugged-68623d1a-0888-456f-80bc-cddf35b56984 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.969 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.969 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.970 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.970 2 DEBUG nova.virt.libvirt.vif [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1224939934',display_name='tempest-ServersTestJSON-server-1224939934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1224939934',id=172,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-uywtjk9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-memb
er'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:33Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=4d1b443f-821c-4181-bbff-248387e3cdec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68623d1a-0888-456f-80bc-cddf35b56984", "address": "fa:16:3e:b5:1f:d4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68623d1a-08", "ovs_interfaceid": "68623d1a-0888-456f-80bc-cddf35b56984", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.971 2 DEBUG nova.network.os_vif_util [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "68623d1a-0888-456f-80bc-cddf35b56984", "address": "fa:16:3e:b5:1f:d4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68623d1a-08", "ovs_interfaceid": "68623d1a-0888-456f-80bc-cddf35b56984", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.971 2 DEBUG nova.network.os_vif_util [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:1f:d4,bridge_name='br-int',has_traffic_filtering=True,id=68623d1a-0888-456f-80bc-cddf35b56984,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68623d1a-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.972 2 DEBUG os_vif [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:1f:d4,bridge_name='br-int',has_traffic_filtering=True,id=68623d1a-0888-456f-80bc-cddf35b56984,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68623d1a-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.973 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.973 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.976 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68623d1a-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.977 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68623d1a-08, col_values=(('external_ids', {'iface-id': '68623d1a-0888-456f-80bc-cddf35b56984', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:1f:d4', 'vm-uuid': '4d1b443f-821c-4181-bbff-248387e3cdec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:39 np0005466031 NetworkManager[44907]: <info>  [1759410099.9805] manager: (tap68623d1a-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:39 np0005466031 nova_compute[235803]: 2025-10-02 13:01:39.988 2 INFO os_vif [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:1f:d4,bridge_name='br-int',has_traffic_filtering=True,id=68623d1a-0888-456f-80bc-cddf35b56984,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68623d1a-08')#033[00m
Oct  2 09:01:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:40.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.050 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.050 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.050 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No VIF found with MAC fa:16:3e:b5:1f:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.051 2 INFO nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Using config drive#033[00m
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.077 2 DEBUG nova.storage.rbd_utils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4d1b443f-821c-4181-bbff-248387e3cdec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.627 2 DEBUG nova.network.neutron [req-76fa86dd-f0b6-435e-8df6-18b7265ed037 req-69c36bba-2627-44e7-923b-4020c401508f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Updated VIF entry in instance network info cache for port 68623d1a-0888-456f-80bc-cddf35b56984. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.628 2 DEBUG nova.network.neutron [req-76fa86dd-f0b6-435e-8df6-18b7265ed037 req-69c36bba-2627-44e7-923b-4020c401508f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Updating instance_info_cache with network_info: [{"id": "68623d1a-0888-456f-80bc-cddf35b56984", "address": "fa:16:3e:b5:1f:d4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68623d1a-08", "ovs_interfaceid": "68623d1a-0888-456f-80bc-cddf35b56984", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.633 2 INFO nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Creating config drive at /var/lib/nova/instances/4d1b443f-821c-4181-bbff-248387e3cdec/disk.config#033[00m
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.637 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d1b443f-821c-4181-bbff-248387e3cdec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsa1xm81a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.669 2 DEBUG oslo_concurrency.lockutils [req-76fa86dd-f0b6-435e-8df6-18b7265ed037 req-69c36bba-2627-44e7-923b-4020c401508f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4d1b443f-821c-4181-bbff-248387e3cdec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.775 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d1b443f-821c-4181-bbff-248387e3cdec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsa1xm81a" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.801 2 DEBUG nova.storage.rbd_utils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4d1b443f-821c-4181-bbff-248387e3cdec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:40 np0005466031 nova_compute[235803]: 2025-10-02 13:01:40.805 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d1b443f-821c-4181-bbff-248387e3cdec/disk.config 4d1b443f-821c-4181-bbff-248387e3cdec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:41 np0005466031 nova_compute[235803]: 2025-10-02 13:01:41.154 2 DEBUG oslo_concurrency.processutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d1b443f-821c-4181-bbff-248387e3cdec/disk.config 4d1b443f-821c-4181-bbff-248387e3cdec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:41 np0005466031 nova_compute[235803]: 2025-10-02 13:01:41.156 2 INFO nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Deleting local config drive /var/lib/nova/instances/4d1b443f-821c-4181-bbff-248387e3cdec/disk.config because it was imported into RBD.#033[00m
Oct  2 09:01:41 np0005466031 kernel: tap68623d1a-08: entered promiscuous mode
Oct  2 09:01:41 np0005466031 NetworkManager[44907]: <info>  [1759410101.2243] manager: (tap68623d1a-08): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Oct  2 09:01:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:01:41Z|00650|binding|INFO|Claiming lport 68623d1a-0888-456f-80bc-cddf35b56984 for this chassis.
Oct  2 09:01:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:01:41Z|00651|binding|INFO|68623d1a-0888-456f-80bc-cddf35b56984: Claiming fa:16:3e:b5:1f:d4 10.100.0.10
Oct  2 09:01:41 np0005466031 nova_compute[235803]: 2025-10-02 13:01:41.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.238 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:1f:d4 10.100.0.10'], port_security=['fa:16:3e:b5:1f:d4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4d1b443f-821c-4181-bbff-248387e3cdec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=68623d1a-0888-456f-80bc-cddf35b56984) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.240 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 68623d1a-0888-456f-80bc-cddf35b56984 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 bound to our chassis#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.242 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 052f341a-0628-4183-a5e0-76312bc986c6#033[00m
Oct  2 09:01:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:01:41Z|00652|binding|INFO|Setting lport 68623d1a-0888-456f-80bc-cddf35b56984 ovn-installed in OVS
Oct  2 09:01:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:01:41Z|00653|binding|INFO|Setting lport 68623d1a-0888-456f-80bc-cddf35b56984 up in Southbound
Oct  2 09:01:41 np0005466031 nova_compute[235803]: 2025-10-02 13:01:41.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:41 np0005466031 nova_compute[235803]: 2025-10-02 13:01:41.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.255 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8f052bc9-02b8-4d39-bd6b-e31d79f893d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.256 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap052f341a-01 in ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.259 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap052f341a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.259 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[98fbd852-7f55-4f00-8ed4-c418f0bd056b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.260 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b8537e19-d652-4bc6-aeaf-f34288d49e7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 systemd-machined[192227]: New machine qemu-76-instance-000000ac.
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.273 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[514b0b06-7b1c-4185-9d73-bde09afd3e9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 systemd-udevd[310757]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:01:41 np0005466031 systemd[1]: Started Virtual Machine qemu-76-instance-000000ac.
Oct  2 09:01:41 np0005466031 NetworkManager[44907]: <info>  [1759410101.2991] device (tap68623d1a-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.297 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5abef5-ea07-4f96-ab9b-7fd4b426f1d5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 NetworkManager[44907]: <info>  [1759410101.3000] device (tap68623d1a-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.327 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f66b11b2-aaae-4780-86ab-65095a9c7eb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 NetworkManager[44907]: <info>  [1759410101.3328] manager: (tap052f341a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/296)
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.332 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[21cb13a3-bcf7-412f-943a-2b6d3af8436f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 systemd-udevd[310759]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.361 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[539f3800-26d5-402a-8748-84f3249e0155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.364 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[f7735ee5-49a2-45e6-b32a-ae09331125d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 NetworkManager[44907]: <info>  [1759410101.3907] device (tap052f341a-00): carrier: link connected
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.396 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d1236792-686a-4dc9-9aaa-f78903cc8a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.415 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e73bd1-fb08-4e52-bf16-190564235c60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795698, 'reachable_time': 40428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310787, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.431 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a3411538-1128-45e3-9e09-504001f3e75e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:a55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795698, 'tstamp': 795698}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310788, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.450 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[774bc490-6fea-4403-8657-7ed6d52bc559]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795698, 'reachable_time': 40428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310789, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.483 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[887dd95b-896b-40d3-af5e-6e3b1ad9b163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.544 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ceee2d90-4377-496e-9484-d7b9f26b5b75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.546 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.546 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.546 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052f341a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:41 np0005466031 NetworkManager[44907]: <info>  [1759410101.5490] manager: (tap052f341a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Oct  2 09:01:41 np0005466031 kernel: tap052f341a-00: entered promiscuous mode
Oct  2 09:01:41 np0005466031 nova_compute[235803]: 2025-10-02 13:01:41.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.552 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap052f341a-00, col_values=(('external_ids', {'iface-id': '61e15bc4-7cff-4f2c-a6c4-d987859313b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:41 np0005466031 nova_compute[235803]: 2025-10-02 13:01:41.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:01:41Z|00654|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:01:41 np0005466031 nova_compute[235803]: 2025-10-02 13:01:41.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.568 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.569 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[44ccd9f3-91f2-46ff-9d40-359edf353d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.570 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-052f341a-0628-4183-a5e0-76312bc986c6
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 052f341a-0628-4183-a5e0-76312bc986c6
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:01:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:01:41.572 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'env', 'PROCESS_TAG=haproxy-052f341a-0628-4183-a5e0-76312bc986c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/052f341a-0628-4183-a5e0-76312bc986c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:01:41 np0005466031 podman[310863]: 2025-10-02 13:01:41.948816738 +0000 UTC m=+0.049918459 container create 4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:01:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:01:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:41.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:41 np0005466031 systemd[1]: Started libpod-conmon-4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3.scope.
Oct  2 09:01:42 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:01:42 np0005466031 podman[310863]: 2025-10-02 13:01:41.923596002 +0000 UTC m=+0.024697743 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:01:42 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d90e8c412b2a8be5a31e710d385b440c74eaa543d3024768fd57586e3f94bb05/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:01:42 np0005466031 podman[310863]: 2025-10-02 13:01:42.036781953 +0000 UTC m=+0.137883704 container init 4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 09:01:42 np0005466031 podman[310863]: 2025-10-02 13:01:42.043157817 +0000 UTC m=+0.144259538 container start 4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:01:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:42.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:42 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[310879]: [NOTICE]   (310883) : New worker (310885) forked
Oct  2 09:01:42 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[310879]: [NOTICE]   (310883) : Loading success.
Oct  2 09:01:42 np0005466031 nova_compute[235803]: 2025-10-02 13:01:42.232 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410102.2314696, 4d1b443f-821c-4181-bbff-248387e3cdec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:01:42 np0005466031 nova_compute[235803]: 2025-10-02 13:01:42.232 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] VM Started (Lifecycle Event)#033[00m
Oct  2 09:01:42 np0005466031 nova_compute[235803]: 2025-10-02 13:01:42.429 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:42 np0005466031 nova_compute[235803]: 2025-10-02 13:01:42.433 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410102.2316453, 4d1b443f-821c-4181-bbff-248387e3cdec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:01:42 np0005466031 nova_compute[235803]: 2025-10-02 13:01:42.434 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:01:42 np0005466031 nova_compute[235803]: 2025-10-02 13:01:42.474 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:42 np0005466031 nova_compute[235803]: 2025-10-02 13:01:42.478 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:01:42 np0005466031 nova_compute[235803]: 2025-10-02 13:01:42.512 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:01:43 np0005466031 ceph-mgr[76697]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 09:01:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:43 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:01:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:43.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:43 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:01:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:44.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:44 np0005466031 nova_compute[235803]: 2025-10-02 13:01:44.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:44 np0005466031 nova_compute[235803]: 2025-10-02 13:01:44.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:01:44 np0005466031 nova_compute[235803]: 2025-10-02 13:01:44.655 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:01:44 np0005466031 nova_compute[235803]: 2025-10-02 13:01:44.655 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:44 np0005466031 nova_compute[235803]: 2025-10-02 13:01:44.655 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:01:44 np0005466031 nova_compute[235803]: 2025-10-02 13:01:44.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:45 np0005466031 nova_compute[235803]: 2025-10-02 13:01:45.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:45.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:46.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.077 2 DEBUG nova.compute.manager [req-3c89f6d4-df17-4814-90a4-f28e595b777a req-2ddaf730-3b33-450c-ad0e-e28125e51b78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Received event network-vif-plugged-68623d1a-0888-456f-80bc-cddf35b56984 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.078 2 DEBUG oslo_concurrency.lockutils [req-3c89f6d4-df17-4814-90a4-f28e595b777a req-2ddaf730-3b33-450c-ad0e-e28125e51b78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.078 2 DEBUG oslo_concurrency.lockutils [req-3c89f6d4-df17-4814-90a4-f28e595b777a req-2ddaf730-3b33-450c-ad0e-e28125e51b78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.078 2 DEBUG oslo_concurrency.lockutils [req-3c89f6d4-df17-4814-90a4-f28e595b777a req-2ddaf730-3b33-450c-ad0e-e28125e51b78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.078 2 DEBUG nova.compute.manager [req-3c89f6d4-df17-4814-90a4-f28e595b777a req-2ddaf730-3b33-450c-ad0e-e28125e51b78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Processing event network-vif-plugged-68623d1a-0888-456f-80bc-cddf35b56984 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.079 2 DEBUG nova.compute.manager [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.082 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410107.0821626, 4d1b443f-821c-4181-bbff-248387e3cdec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.082 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.084 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.087 2 INFO nova.virt.libvirt.driver [-] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Instance spawned successfully.#033[00m
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.088 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.120 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.124 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.125 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.125 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.126 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.126 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.126 2 DEBUG nova.virt.libvirt.driver [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.131 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.166 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.210 2 INFO nova.compute.manager [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Took 13.85 seconds to spawn the instance on the hypervisor.
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.211 2 DEBUG nova.compute.manager [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.312 2 INFO nova.compute.manager [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Took 15.12 seconds to build instance.
Oct  2 09:01:47 np0005466031 nova_compute[235803]: 2025-10-02 13:01:47.332 2 DEBUG oslo_concurrency.lockutils [None req-0b88f10c-751a-4a8a-b053-17d9101d718c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:01:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:01:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:47.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:01:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:48.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:01:49 np0005466031 nova_compute[235803]: 2025-10-02 13:01:49.263 2 DEBUG nova.compute.manager [req-ec7ce27c-ad62-4498-9a4d-f16bc2d197b2 req-f114534b-e29b-446b-af41-719e952fd0a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Received event network-vif-plugged-68623d1a-0888-456f-80bc-cddf35b56984 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:01:49 np0005466031 nova_compute[235803]: 2025-10-02 13:01:49.263 2 DEBUG oslo_concurrency.lockutils [req-ec7ce27c-ad62-4498-9a4d-f16bc2d197b2 req-f114534b-e29b-446b-af41-719e952fd0a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:01:49 np0005466031 nova_compute[235803]: 2025-10-02 13:01:49.263 2 DEBUG oslo_concurrency.lockutils [req-ec7ce27c-ad62-4498-9a4d-f16bc2d197b2 req-f114534b-e29b-446b-af41-719e952fd0a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:01:49 np0005466031 nova_compute[235803]: 2025-10-02 13:01:49.264 2 DEBUG oslo_concurrency.lockutils [req-ec7ce27c-ad62-4498-9a4d-f16bc2d197b2 req-f114534b-e29b-446b-af41-719e952fd0a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:01:49 np0005466031 nova_compute[235803]: 2025-10-02 13:01:49.264 2 DEBUG nova.compute.manager [req-ec7ce27c-ad62-4498-9a4d-f16bc2d197b2 req-f114534b-e29b-446b-af41-719e952fd0a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] No waiting events found dispatching network-vif-plugged-68623d1a-0888-456f-80bc-cddf35b56984 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:01:49 np0005466031 nova_compute[235803]: 2025-10-02 13:01:49.264 2 WARNING nova.compute.manager [req-ec7ce27c-ad62-4498-9a4d-f16bc2d197b2 req-f114534b-e29b-446b-af41-719e952fd0a3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Received unexpected event network-vif-plugged-68623d1a-0888-456f-80bc-cddf35b56984 for instance with vm_state active and task_state None.
Oct  2 09:01:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:01:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:49.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:49 np0005466031 nova_compute[235803]: 2025-10-02 13:01:49.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:01:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:50.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:50 np0005466031 nova_compute[235803]: 2025-10-02 13:01:50.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:01:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:01:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:51.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:01:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:52.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:01:53 np0005466031 nova_compute[235803]: 2025-10-02 13:01:53.348 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "f32c2187-a56f-421e-8f7a-5ebf679654cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:01:53 np0005466031 nova_compute[235803]: 2025-10-02 13:01:53.348 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:01:53 np0005466031 nova_compute[235803]: 2025-10-02 13:01:53.384 2 DEBUG nova.compute.manager [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:01:53 np0005466031 nova_compute[235803]: 2025-10-02 13:01:53.481 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:01:53 np0005466031 nova_compute[235803]: 2025-10-02 13:01:53.481 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:01:53 np0005466031 nova_compute[235803]: 2025-10-02 13:01:53.490 2 DEBUG nova.virt.hardware [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:01:53 np0005466031 nova_compute[235803]: 2025-10-02 13:01:53.490 2 INFO nova.compute.claims [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Claim successful on node compute-2.ctlplane.example.com
Oct  2 09:01:53 np0005466031 nova_compute[235803]: 2025-10-02 13:01:53.633 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:01:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:01:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:53.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:54.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:01:54 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3628328142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.114 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.120 2 DEBUG nova.compute.provider_tree [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.143 2 DEBUG nova.scheduler.client.report [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.168 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.171 2 DEBUG nova.compute.manager [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.244 2 DEBUG nova.compute.manager [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.247 2 DEBUG nova.network.neutron [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.270 2 INFO nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.298 2 DEBUG nova.compute.manager [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.404 2 DEBUG nova.compute.manager [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.405 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.406 2 INFO nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Creating image(s)
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.426 2 DEBUG nova.storage.rbd_utils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image f32c2187-a56f-421e-8f7a-5ebf679654cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:01:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.452 2 DEBUG nova.storage.rbd_utils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image f32c2187-a56f-421e-8f7a-5ebf679654cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.476 2 DEBUG nova.storage.rbd_utils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image f32c2187-a56f-421e-8f7a-5ebf679654cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.480 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.510 2 DEBUG nova.policy [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b04159d5bffe4259876ce57aec09716e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.549 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.550 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.551 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.551 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.575 2 DEBUG nova.storage.rbd_utils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image f32c2187-a56f-421e-8f7a-5ebf679654cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.579 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f32c2187-a56f-421e-8f7a-5ebf679654cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:01:54 np0005466031 nova_compute[235803]: 2025-10-02 13:01:54.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:01:55 np0005466031 nova_compute[235803]: 2025-10-02 13:01:55.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:01:55 np0005466031 nova_compute[235803]: 2025-10-02 13:01:55.569 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f32c2187-a56f-421e-8f7a-5ebf679654cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.991s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:01:55 np0005466031 nova_compute[235803]: 2025-10-02 13:01:55.651 2 DEBUG nova.storage.rbd_utils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] resizing rbd image f32c2187-a56f-421e-8f7a-5ebf679654cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 09:01:55 np0005466031 nova_compute[235803]: 2025-10-02 13:01:55.870 2 DEBUG nova.network.neutron [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Successfully created port: 4942dddb-a16e-4721-b890-c560adea864a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 09:01:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:01:55 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3214295918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:01:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:01:55 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3214295918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:01:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:01:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:55.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:01:55 np0005466031 nova_compute[235803]: 2025-10-02 13:01:55.982 2 DEBUG nova.objects.instance [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'migration_context' on Instance uuid f32c2187-a56f-421e-8f7a-5ebf679654cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:01:56 np0005466031 nova_compute[235803]: 2025-10-02 13:01:56.027 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 09:01:56 np0005466031 nova_compute[235803]: 2025-10-02 13:01:56.028 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Ensure instance console log exists: /var/lib/nova/instances/f32c2187-a56f-421e-8f7a-5ebf679654cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 09:01:56 np0005466031 nova_compute[235803]: 2025-10-02 13:01:56.028 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:01:56 np0005466031 nova_compute[235803]: 2025-10-02 13:01:56.029 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:01:56 np0005466031 nova_compute[235803]: 2025-10-02 13:01:56.029 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:01:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:56.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:57 np0005466031 nova_compute[235803]: 2025-10-02 13:01:57.677 2 DEBUG nova.network.neutron [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Successfully updated port: 4942dddb-a16e-4721-b890-c560adea864a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 09:01:57 np0005466031 nova_compute[235803]: 2025-10-02 13:01:57.694 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "refresh_cache-f32c2187-a56f-421e-8f7a-5ebf679654cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:01:57 np0005466031 nova_compute[235803]: 2025-10-02 13:01:57.695 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquired lock "refresh_cache-f32c2187-a56f-421e-8f7a-5ebf679654cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:01:57 np0005466031 nova_compute[235803]: 2025-10-02 13:01:57.695 2 DEBUG nova.network.neutron [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 09:01:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000057s ======
Oct  2 09:01:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:57.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Oct  2 09:01:57 np0005466031 nova_compute[235803]: 2025-10-02 13:01:57.981 2 DEBUG nova.network.neutron [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 09:01:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:58.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:59 np0005466031 nova_compute[235803]: 2025-10-02 13:01:59.614 2 DEBUG nova.compute.manager [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Received event network-changed-4942dddb-a16e-4721-b890-c560adea864a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:59 np0005466031 nova_compute[235803]: 2025-10-02 13:01:59.614 2 DEBUG nova.compute.manager [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Refreshing instance network info cache due to event network-changed-4942dddb-a16e-4721-b890-c560adea864a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:01:59 np0005466031 nova_compute[235803]: 2025-10-02 13:01:59.614 2 DEBUG oslo_concurrency.lockutils [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f32c2187-a56f-421e-8f7a-5ebf679654cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:01:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:01:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:59.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:59 np0005466031 nova_compute[235803]: 2025-10-02 13:01:59.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:00.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.191 2 DEBUG nova.network.neutron [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Updating instance_info_cache with network_info: [{"id": "4942dddb-a16e-4721-b890-c560adea864a", "address": "fa:16:3e:d1:6f:93", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4942dddb-a1", "ovs_interfaceid": "4942dddb-a16e-4721-b890-c560adea864a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.248 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Releasing lock "refresh_cache-f32c2187-a56f-421e-8f7a-5ebf679654cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.248 2 DEBUG nova.compute.manager [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Instance network_info: |[{"id": "4942dddb-a16e-4721-b890-c560adea864a", "address": "fa:16:3e:d1:6f:93", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4942dddb-a1", "ovs_interfaceid": "4942dddb-a16e-4721-b890-c560adea864a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.248 2 DEBUG oslo_concurrency.lockutils [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f32c2187-a56f-421e-8f7a-5ebf679654cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.249 2 DEBUG nova.network.neutron [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Refreshing network info cache for port 4942dddb-a16e-4721-b890-c560adea864a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.251 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Start _get_guest_xml network_info=[{"id": "4942dddb-a16e-4721-b890-c560adea864a", "address": "fa:16:3e:d1:6f:93", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4942dddb-a1", "ovs_interfaceid": "4942dddb-a16e-4721-b890-c560adea864a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.256 2 WARNING nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.260 2 DEBUG nova.virt.libvirt.host [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.261 2 DEBUG nova.virt.libvirt.host [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.263 2 DEBUG nova.virt.libvirt.host [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.264 2 DEBUG nova.virt.libvirt.host [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.265 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.266 2 DEBUG nova.virt.hardware [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.266 2 DEBUG nova.virt.hardware [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.266 2 DEBUG nova.virt.hardware [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.267 2 DEBUG nova.virt.hardware [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.267 2 DEBUG nova.virt.hardware [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.267 2 DEBUG nova.virt.hardware [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.267 2 DEBUG nova.virt.hardware [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.267 2 DEBUG nova.virt.hardware [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.268 2 DEBUG nova.virt.hardware [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.268 2 DEBUG nova.virt.hardware [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.268 2 DEBUG nova.virt.hardware [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.271 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:00 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:00Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:1f:d4 10.100.0.10
Oct  2 09:02:00 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:00Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:1f:d4 10.100.0.10
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:02:00 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4099713465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.720 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.751 2 DEBUG nova.storage.rbd_utils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image f32c2187-a56f-421e-8f7a-5ebf679654cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:00 np0005466031 nova_compute[235803]: 2025-10-02 13:02:00.756 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:02:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3896671779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.189 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.190 2 DEBUG nova.virt.libvirt.vif [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1224939934',display_name='tempest-ServersTestJSON-server-1224939934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1224939934',id=174,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-u5htm23k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:54Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=f32c2187-a56f-421e-8f7a-5ebf679654cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4942dddb-a16e-4721-b890-c560adea864a", "address": "fa:16:3e:d1:6f:93", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4942dddb-a1", "ovs_interfaceid": "4942dddb-a16e-4721-b890-c560adea864a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.191 2 DEBUG nova.network.os_vif_util [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "4942dddb-a16e-4721-b890-c560adea864a", "address": "fa:16:3e:d1:6f:93", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4942dddb-a1", "ovs_interfaceid": "4942dddb-a16e-4721-b890-c560adea864a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.192 2 DEBUG nova.network.os_vif_util [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:6f:93,bridge_name='br-int',has_traffic_filtering=True,id=4942dddb-a16e-4721-b890-c560adea864a,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4942dddb-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.193 2 DEBUG nova.objects.instance [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'pci_devices' on Instance uuid f32c2187-a56f-421e-8f7a-5ebf679654cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.227 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  <uuid>f32c2187-a56f-421e-8f7a-5ebf679654cd</uuid>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  <name>instance-000000ae</name>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServersTestJSON-server-1224939934</nova:name>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:02:00</nova:creationTime>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <nova:user uuid="b04159d5bffe4259876ce57aec09716e">tempest-ServersTestJSON-146860306-project-member</nova:user>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <nova:project uuid="a6be0e77fb5b4355b4f2276c9e57d2bd">tempest-ServersTestJSON-146860306</nova:project>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <nova:port uuid="4942dddb-a16e-4721-b890-c560adea864a">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <entry name="serial">f32c2187-a56f-421e-8f7a-5ebf679654cd</entry>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <entry name="uuid">f32c2187-a56f-421e-8f7a-5ebf679654cd</entry>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/f32c2187-a56f-421e-8f7a-5ebf679654cd_disk">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/f32c2187-a56f-421e-8f7a-5ebf679654cd_disk.config">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:d1:6f:93"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <target dev="tap4942dddb-a1"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/f32c2187-a56f-421e-8f7a-5ebf679654cd/console.log" append="off"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:02:01 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:02:01 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:02:01 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:02:01 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.229 2 DEBUG nova.compute.manager [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Preparing to wait for external event network-vif-plugged-4942dddb-a16e-4721-b890-c560adea864a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.229 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.230 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.230 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.230 2 DEBUG nova.virt.libvirt.vif [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1224939934',display_name='tempest-ServersTestJSON-server-1224939934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1224939934',id=174,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-u5htm23k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-memb
er'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:54Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=f32c2187-a56f-421e-8f7a-5ebf679654cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4942dddb-a16e-4721-b890-c560adea864a", "address": "fa:16:3e:d1:6f:93", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4942dddb-a1", "ovs_interfaceid": "4942dddb-a16e-4721-b890-c560adea864a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.231 2 DEBUG nova.network.os_vif_util [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "4942dddb-a16e-4721-b890-c560adea864a", "address": "fa:16:3e:d1:6f:93", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4942dddb-a1", "ovs_interfaceid": "4942dddb-a16e-4721-b890-c560adea864a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.231 2 DEBUG nova.network.os_vif_util [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:6f:93,bridge_name='br-int',has_traffic_filtering=True,id=4942dddb-a16e-4721-b890-c560adea864a,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4942dddb-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.232 2 DEBUG os_vif [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:6f:93,bridge_name='br-int',has_traffic_filtering=True,id=4942dddb-a16e-4721-b890-c560adea864a,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4942dddb-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.233 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.233 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.237 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4942dddb-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.237 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4942dddb-a1, col_values=(('external_ids', {'iface-id': '4942dddb-a16e-4721-b890-c560adea864a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:6f:93', 'vm-uuid': 'f32c2187-a56f-421e-8f7a-5ebf679654cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:01 np0005466031 NetworkManager[44907]: <info>  [1759410121.2398] manager: (tap4942dddb-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.247 2 INFO os_vif [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:6f:93,bridge_name='br-int',has_traffic_filtering=True,id=4942dddb-a16e-4721-b890-c560adea864a,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4942dddb-a1')#033[00m
Oct  2 09:02:01 np0005466031 podman[311207]: 2025-10-02 13:02:01.339084847 +0000 UTC m=+0.060268988 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 09:02:01 np0005466031 podman[311208]: 2025-10-02 13:02:01.365755045 +0000 UTC m=+0.085590407 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.377 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.378 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.378 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No VIF found with MAC fa:16:3e:d1:6f:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.378 2 INFO nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Using config drive#033[00m
Oct  2 09:02:01 np0005466031 nova_compute[235803]: 2025-10-02 13:02:01.417 2 DEBUG nova.storage.rbd_utils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image f32c2187-a56f-421e-8f7a-5ebf679654cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:01.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.033 2 INFO nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Creating config drive at /var/lib/nova/instances/f32c2187-a56f-421e-8f7a-5ebf679654cd/disk.config#033[00m
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.039 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f32c2187-a56f-421e-8f7a-5ebf679654cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbcuz5mvw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:02.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.180 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f32c2187-a56f-421e-8f7a-5ebf679654cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbcuz5mvw" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.209 2 DEBUG nova.storage.rbd_utils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image f32c2187-a56f-421e-8f7a-5ebf679654cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.214 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f32c2187-a56f-421e-8f7a-5ebf679654cd/disk.config f32c2187-a56f-421e-8f7a-5ebf679654cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.260 2 DEBUG nova.network.neutron [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Updated VIF entry in instance network info cache for port 4942dddb-a16e-4721-b890-c560adea864a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.261 2 DEBUG nova.network.neutron [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Updating instance_info_cache with network_info: [{"id": "4942dddb-a16e-4721-b890-c560adea864a", "address": "fa:16:3e:d1:6f:93", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4942dddb-a1", "ovs_interfaceid": "4942dddb-a16e-4721-b890-c560adea864a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.279 2 DEBUG oslo_concurrency.lockutils [req-5a37a9fd-a43a-47ce-abec-18d36ef45d60 req-e6470ee6-ed7d-4be5-93ed-e6d41eda8fa1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f32c2187-a56f-421e-8f7a-5ebf679654cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.379 2 DEBUG oslo_concurrency.processutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f32c2187-a56f-421e-8f7a-5ebf679654cd/disk.config f32c2187-a56f-421e-8f7a-5ebf679654cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.379 2 INFO nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Deleting local config drive /var/lib/nova/instances/f32c2187-a56f-421e-8f7a-5ebf679654cd/disk.config because it was imported into RBD.#033[00m
Oct  2 09:02:02 np0005466031 kernel: tap4942dddb-a1: entered promiscuous mode
Oct  2 09:02:02 np0005466031 NetworkManager[44907]: <info>  [1759410122.4331] manager: (tap4942dddb-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:02 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:02Z|00655|binding|INFO|Claiming lport 4942dddb-a16e-4721-b890-c560adea864a for this chassis.
Oct  2 09:02:02 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:02Z|00656|binding|INFO|4942dddb-a16e-4721-b890-c560adea864a: Claiming fa:16:3e:d1:6f:93 10.100.0.12
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.441 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:6f:93 10.100.0.12'], port_security=['fa:16:3e:d1:6f:93 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f32c2187-a56f-421e-8f7a-5ebf679654cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=4942dddb-a16e-4721-b890-c560adea864a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.442 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 4942dddb-a16e-4721-b890-c560adea864a in datapath 052f341a-0628-4183-a5e0-76312bc986c6 bound to our chassis#033[00m
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.443 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 052f341a-0628-4183-a5e0-76312bc986c6#033[00m
Oct  2 09:02:02 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:02Z|00657|binding|INFO|Setting lport 4942dddb-a16e-4721-b890-c560adea864a ovn-installed in OVS
Oct  2 09:02:02 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:02Z|00658|binding|INFO|Setting lport 4942dddb-a16e-4721-b890-c560adea864a up in Southbound
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:02 np0005466031 systemd-udevd[311322]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:02:02 np0005466031 systemd-machined[192227]: New machine qemu-77-instance-000000ae.
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.463 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd77611-e376-48a1-a5d2-bb387c97d33f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:02 np0005466031 NetworkManager[44907]: <info>  [1759410122.4780] device (tap4942dddb-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:02:02 np0005466031 NetworkManager[44907]: <info>  [1759410122.4789] device (tap4942dddb-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:02:02 np0005466031 systemd[1]: Started Virtual Machine qemu-77-instance-000000ae.
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.501 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f1d17f-9814-4122-86f5-6e4ce8fd563a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.504 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[84c22838-c10f-4cdf-aa8d-8bfa4b628f96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.534 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[024d2e10-6a24-48c7-a467-bf0a4898475a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.553 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7a9815-ebe3-4f52-96be-729eae71f1ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795698, 'reachable_time': 40428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311335, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.568 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[35ee7dd2-fc1f-4449-aa3d-9baeaa130049]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap052f341a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795709, 'tstamp': 795709}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311336, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap052f341a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795712, 'tstamp': 795712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311336, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.570 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:02 np0005466031 nova_compute[235803]: 2025-10-02 13:02:02.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.572 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052f341a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.573 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.573 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap052f341a-00, col_values=(('external_ids', {'iface-id': '61e15bc4-7cff-4f2c-a6c4-d987859313b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:02.573 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.246 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410123.2459066, f32c2187-a56f-421e-8f7a-5ebf679654cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.247 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] VM Started (Lifecycle Event)#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.280 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.282 2 DEBUG nova.compute.manager [req-38684418-59f2-44a6-b4f4-2f6c56fea786 req-72838e38-3599-4b69-8e74-a0a858daad17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Received event network-vif-plugged-4942dddb-a16e-4721-b890-c560adea864a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.283 2 DEBUG oslo_concurrency.lockutils [req-38684418-59f2-44a6-b4f4-2f6c56fea786 req-72838e38-3599-4b69-8e74-a0a858daad17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.283 2 DEBUG oslo_concurrency.lockutils [req-38684418-59f2-44a6-b4f4-2f6c56fea786 req-72838e38-3599-4b69-8e74-a0a858daad17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.283 2 DEBUG oslo_concurrency.lockutils [req-38684418-59f2-44a6-b4f4-2f6c56fea786 req-72838e38-3599-4b69-8e74-a0a858daad17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.283 2 DEBUG nova.compute.manager [req-38684418-59f2-44a6-b4f4-2f6c56fea786 req-72838e38-3599-4b69-8e74-a0a858daad17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Processing event network-vif-plugged-4942dddb-a16e-4721-b890-c560adea864a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.284 2 DEBUG nova.compute.manager [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.287 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.290 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.293 2 INFO nova.virt.libvirt.driver [-] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Instance spawned successfully.#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.293 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.314 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.314 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410123.2462077, f32c2187-a56f-421e-8f7a-5ebf679654cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.314 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.321 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.321 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.322 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.322 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.323 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.323 2 DEBUG nova.virt.libvirt.driver [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.353 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.357 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410123.2870657, f32c2187-a56f-421e-8f7a-5ebf679654cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.357 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.384 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.388 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.414 2 INFO nova.compute.manager [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Took 9.01 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.415 2 DEBUG nova.compute.manager [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.421 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.481 2 INFO nova.compute.manager [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Took 10.03 seconds to build instance.#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.538 2 DEBUG oslo_concurrency.lockutils [None req-0c28a424-2c8c-4988-8d97-d40c8b5b459c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.659 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:03.944 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:03.945 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:02:03 np0005466031 nova_compute[235803]: 2025-10-02 13:02:03.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:03.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:04.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:05 np0005466031 nova_compute[235803]: 2025-10-02 13:02:05.507 2 DEBUG nova.compute.manager [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Received event network-vif-plugged-4942dddb-a16e-4721-b890-c560adea864a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:05 np0005466031 nova_compute[235803]: 2025-10-02 13:02:05.508 2 DEBUG oslo_concurrency.lockutils [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:05 np0005466031 nova_compute[235803]: 2025-10-02 13:02:05.508 2 DEBUG oslo_concurrency.lockutils [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:05 np0005466031 nova_compute[235803]: 2025-10-02 13:02:05.508 2 DEBUG oslo_concurrency.lockutils [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:05 np0005466031 nova_compute[235803]: 2025-10-02 13:02:05.509 2 DEBUG nova.compute.manager [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] No waiting events found dispatching network-vif-plugged-4942dddb-a16e-4721-b890-c560adea864a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:05 np0005466031 nova_compute[235803]: 2025-10-02 13:02:05.509 2 WARNING nova.compute.manager [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Received unexpected event network-vif-plugged-4942dddb-a16e-4721-b890-c560adea864a for instance with vm_state active and task_state None.#033[00m
Oct  2 09:02:05 np0005466031 nova_compute[235803]: 2025-10-02 13:02:05.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:05.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:02:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:06.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:02:06 np0005466031 nova_compute[235803]: 2025-10-02 13:02:06.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:07 np0005466031 nova_compute[235803]: 2025-10-02 13:02:07.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:07.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:08.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.390 2 DEBUG oslo_concurrency.lockutils [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "f32c2187-a56f-421e-8f7a-5ebf679654cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.390 2 DEBUG oslo_concurrency.lockutils [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.390 2 DEBUG oslo_concurrency.lockutils [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.391 2 DEBUG oslo_concurrency.lockutils [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.392 2 DEBUG oslo_concurrency.lockutils [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.393 2 INFO nova.compute.manager [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Terminating instance#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.394 2 DEBUG nova.compute.manager [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:02:08 np0005466031 kernel: tap4942dddb-a1 (unregistering): left promiscuous mode
Oct  2 09:02:08 np0005466031 NetworkManager[44907]: <info>  [1759410128.4368] device (tap4942dddb-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:08 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:08Z|00659|binding|INFO|Releasing lport 4942dddb-a16e-4721-b890-c560adea864a from this chassis (sb_readonly=0)
Oct  2 09:02:08 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:08Z|00660|binding|INFO|Setting lport 4942dddb-a16e-4721-b890-c560adea864a down in Southbound
Oct  2 09:02:08 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:08Z|00661|binding|INFO|Removing iface tap4942dddb-a1 ovn-installed in OVS
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:08 np0005466031 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000ae.scope: Deactivated successfully.
Oct  2 09:02:08 np0005466031 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000ae.scope: Consumed 6.001s CPU time.
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.476 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:6f:93 10.100.0.12'], port_security=['fa:16:3e:d1:6f:93 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f32c2187-a56f-421e-8f7a-5ebf679654cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=4942dddb-a16e-4721-b890-c560adea864a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.477 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 4942dddb-a16e-4721-b890-c560adea864a in datapath 052f341a-0628-4183-a5e0-76312bc986c6 unbound from our chassis#033[00m
Oct  2 09:02:08 np0005466031 systemd-machined[192227]: Machine qemu-77-instance-000000ae terminated.
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.479 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 052f341a-0628-4183-a5e0-76312bc986c6#033[00m
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.495 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1abd0dc1-9e5a-470b-9b93-c4fd2252485c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:08 np0005466031 podman[311385]: 2025-10-02 13:02:08.519497947 +0000 UTC m=+0.060716490 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=iscsid)
Oct  2 09:02:08 np0005466031 podman[311382]: 2025-10-02 13:02:08.523584065 +0000 UTC m=+0.064640454 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.529 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[cc13a0c6-0a18-499e-8aba-9ec9e6a471fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.532 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b027a9c3-1483-494f-8ca4-6b47224cab32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.560 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[52a9ee18-b77a-4b9a-b2dd-6d1c49abb6ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.579 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ec50b264-abf9-49f7-aecd-d8541d7495ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795698, 'reachable_time': 40428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311434, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.595 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[51be38e9-563f-465b-a3be-2b4635cc0040]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap052f341a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795709, 'tstamp': 795709}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311435, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap052f341a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795712, 'tstamp': 795712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311435, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.596 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.605 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052f341a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.605 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.606 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap052f341a-00, col_values=(('external_ids', {'iface-id': '61e15bc4-7cff-4f2c-a6c4-d987859313b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:08.606 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.625 2 INFO nova.virt.libvirt.driver [-] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Instance destroyed successfully.#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.626 2 DEBUG nova.objects.instance [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'resources' on Instance uuid f32c2187-a56f-421e-8f7a-5ebf679654cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.645 2 DEBUG nova.virt.libvirt.vif [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:01:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1224939934',display_name='tempest-ServersTestJSON-server-1224939934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1224939934',id=174,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:02:03Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-u5htm23k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:02:03Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=f32c2187-a56f-421e-8f7a-5ebf679654cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4942dddb-a16e-4721-b890-c560adea864a", "address": "fa:16:3e:d1:6f:93", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4942dddb-a1", "ovs_interfaceid": "4942dddb-a16e-4721-b890-c560adea864a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.646 2 DEBUG nova.network.os_vif_util [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "4942dddb-a16e-4721-b890-c560adea864a", "address": "fa:16:3e:d1:6f:93", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4942dddb-a1", "ovs_interfaceid": "4942dddb-a16e-4721-b890-c560adea864a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.647 2 DEBUG nova.network.os_vif_util [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:6f:93,bridge_name='br-int',has_traffic_filtering=True,id=4942dddb-a16e-4721-b890-c560adea864a,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4942dddb-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.647 2 DEBUG os_vif [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:6f:93,bridge_name='br-int',has_traffic_filtering=True,id=4942dddb-a16e-4721-b890-c560adea864a,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4942dddb-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.649 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4942dddb-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:08 np0005466031 nova_compute[235803]: 2025-10-02 13:02:08.655 2 INFO os_vif [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:6f:93,bridge_name='br-int',has_traffic_filtering=True,id=4942dddb-a16e-4721-b890-c560adea864a,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4942dddb-a1')#033[00m
Oct  2 09:02:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:09 np0005466031 nova_compute[235803]: 2025-10-02 13:02:09.672 2 INFO nova.virt.libvirt.driver [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Deleting instance files /var/lib/nova/instances/f32c2187-a56f-421e-8f7a-5ebf679654cd_del#033[00m
Oct  2 09:02:09 np0005466031 nova_compute[235803]: 2025-10-02 13:02:09.673 2 INFO nova.virt.libvirt.driver [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Deletion of /var/lib/nova/instances/f32c2187-a56f-421e-8f7a-5ebf679654cd_del complete#033[00m
Oct  2 09:02:09 np0005466031 nova_compute[235803]: 2025-10-02 13:02:09.732 2 INFO nova.compute.manager [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Took 1.34 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:02:09 np0005466031 nova_compute[235803]: 2025-10-02 13:02:09.733 2 DEBUG oslo.service.loopingcall [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:02:09 np0005466031 nova_compute[235803]: 2025-10-02 13:02:09.734 2 DEBUG nova.compute.manager [-] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:02:09 np0005466031 nova_compute[235803]: 2025-10-02 13:02:09.734 2 DEBUG nova.network.neutron [-] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:02:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:09.946 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:09.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:10.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.330 2 DEBUG nova.compute.manager [req-026eb5d4-98cb-4458-bd69-ced06dfd63e8 req-42593301-86f2-40f4-b966-b36ec0eb1ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Received event network-vif-unplugged-4942dddb-a16e-4721-b890-c560adea864a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.330 2 DEBUG oslo_concurrency.lockutils [req-026eb5d4-98cb-4458-bd69-ced06dfd63e8 req-42593301-86f2-40f4-b966-b36ec0eb1ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.331 2 DEBUG oslo_concurrency.lockutils [req-026eb5d4-98cb-4458-bd69-ced06dfd63e8 req-42593301-86f2-40f4-b966-b36ec0eb1ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.331 2 DEBUG oslo_concurrency.lockutils [req-026eb5d4-98cb-4458-bd69-ced06dfd63e8 req-42593301-86f2-40f4-b966-b36ec0eb1ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.331 2 DEBUG nova.compute.manager [req-026eb5d4-98cb-4458-bd69-ced06dfd63e8 req-42593301-86f2-40f4-b966-b36ec0eb1ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] No waiting events found dispatching network-vif-unplugged-4942dddb-a16e-4721-b890-c560adea864a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.331 2 DEBUG nova.compute.manager [req-026eb5d4-98cb-4458-bd69-ced06dfd63e8 req-42593301-86f2-40f4-b966-b36ec0eb1ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Received event network-vif-unplugged-4942dddb-a16e-4721-b890-c560adea864a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.331 2 DEBUG nova.compute.manager [req-026eb5d4-98cb-4458-bd69-ced06dfd63e8 req-42593301-86f2-40f4-b966-b36ec0eb1ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Received event network-vif-plugged-4942dddb-a16e-4721-b890-c560adea864a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.331 2 DEBUG oslo_concurrency.lockutils [req-026eb5d4-98cb-4458-bd69-ced06dfd63e8 req-42593301-86f2-40f4-b966-b36ec0eb1ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.332 2 DEBUG oslo_concurrency.lockutils [req-026eb5d4-98cb-4458-bd69-ced06dfd63e8 req-42593301-86f2-40f4-b966-b36ec0eb1ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.332 2 DEBUG oslo_concurrency.lockutils [req-026eb5d4-98cb-4458-bd69-ced06dfd63e8 req-42593301-86f2-40f4-b966-b36ec0eb1ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.332 2 DEBUG nova.compute.manager [req-026eb5d4-98cb-4458-bd69-ced06dfd63e8 req-42593301-86f2-40f4-b966-b36ec0eb1ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] No waiting events found dispatching network-vif-plugged-4942dddb-a16e-4721-b890-c560adea864a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.332 2 WARNING nova.compute.manager [req-026eb5d4-98cb-4458-bd69-ced06dfd63e8 req-42593301-86f2-40f4-b966-b36ec0eb1ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Received unexpected event network-vif-plugged-4942dddb-a16e-4721-b890-c560adea864a for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.546 2 DEBUG nova.network.neutron [-] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.588 2 INFO nova.compute.manager [-] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Took 0.85 seconds to deallocate network for instance.#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.651 2 DEBUG oslo_concurrency.lockutils [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.652 2 DEBUG oslo_concurrency.lockutils [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:10 np0005466031 nova_compute[235803]: 2025-10-02 13:02:10.783 2 DEBUG oslo_concurrency.processutils [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:11 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3695471513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:11 np0005466031 nova_compute[235803]: 2025-10-02 13:02:11.247 2 DEBUG oslo_concurrency.processutils [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:11 np0005466031 nova_compute[235803]: 2025-10-02 13:02:11.253 2 DEBUG nova.compute.provider_tree [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:02:11 np0005466031 nova_compute[235803]: 2025-10-02 13:02:11.273 2 DEBUG nova.scheduler.client.report [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:02:11 np0005466031 nova_compute[235803]: 2025-10-02 13:02:11.308 2 DEBUG oslo_concurrency.lockutils [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:11 np0005466031 nova_compute[235803]: 2025-10-02 13:02:11.341 2 INFO nova.scheduler.client.report [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Deleted allocations for instance f32c2187-a56f-421e-8f7a-5ebf679654cd#033[00m
Oct  2 09:02:11 np0005466031 nova_compute[235803]: 2025-10-02 13:02:11.420 2 DEBUG oslo_concurrency.lockutils [None req-69e04bba-c344-4ef6-ae80-a6e10d0f5433 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "f32c2187-a56f-421e-8f7a-5ebf679654cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:11.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:12.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.343 2 DEBUG oslo_concurrency.lockutils [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "4d1b443f-821c-4181-bbff-248387e3cdec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.344 2 DEBUG oslo_concurrency.lockutils [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.344 2 DEBUG oslo_concurrency.lockutils [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.344 2 DEBUG oslo_concurrency.lockutils [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.344 2 DEBUG oslo_concurrency.lockutils [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.345 2 INFO nova.compute.manager [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Terminating instance#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.346 2 DEBUG nova.compute.manager [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:02:12 np0005466031 kernel: tap68623d1a-08 (unregistering): left promiscuous mode
Oct  2 09:02:12 np0005466031 NetworkManager[44907]: <info>  [1759410132.4600] device (tap68623d1a-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.466 2 DEBUG nova.compute.manager [req-2b350cdf-e0f5-4504-94b8-4b75d9c5f03a req-ca8b8c65-dd76-4010-8805-42936c68b591 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Received event network-vif-deleted-4942dddb-a16e-4721-b890-c560adea864a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:12Z|00662|binding|INFO|Releasing lport 68623d1a-0888-456f-80bc-cddf35b56984 from this chassis (sb_readonly=0)
Oct  2 09:02:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:12Z|00663|binding|INFO|Setting lport 68623d1a-0888-456f-80bc-cddf35b56984 down in Southbound
Oct  2 09:02:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:12Z|00664|binding|INFO|Removing iface tap68623d1a-08 ovn-installed in OVS
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.476 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:1f:d4 10.100.0.10'], port_security=['fa:16:3e:b5:1f:d4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4d1b443f-821c-4181-bbff-248387e3cdec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=68623d1a-0888-456f-80bc-cddf35b56984) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.477 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 68623d1a-0888-456f-80bc-cddf35b56984 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 unbound from our chassis#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.478 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 052f341a-0628-4183-a5e0-76312bc986c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.479 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4e8e5b-1dd6-4f04-9173-cffebe009a51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.480 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 namespace which is not needed anymore#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000ac.scope: Deactivated successfully.
Oct  2 09:02:12 np0005466031 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000ac.scope: Consumed 14.677s CPU time.
Oct  2 09:02:12 np0005466031 systemd-machined[192227]: Machine qemu-76-instance-000000ac terminated.
Oct  2 09:02:12 np0005466031 kernel: tap68623d1a-08: entered promiscuous mode
Oct  2 09:02:12 np0005466031 systemd-udevd[311547]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:02:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:12Z|00665|binding|INFO|Claiming lport 68623d1a-0888-456f-80bc-cddf35b56984 for this chassis.
Oct  2 09:02:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:12Z|00666|binding|INFO|68623d1a-0888-456f-80bc-cddf35b56984: Claiming fa:16:3e:b5:1f:d4 10.100.0.10
Oct  2 09:02:12 np0005466031 NetworkManager[44907]: <info>  [1759410132.5699] manager: (tap68623d1a-08): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 kernel: tap68623d1a-08 (unregistering): left promiscuous mode
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.576 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:1f:d4 10.100.0.10'], port_security=['fa:16:3e:b5:1f:d4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4d1b443f-821c-4181-bbff-248387e3cdec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=68623d1a-0888-456f-80bc-cddf35b56984) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:12Z|00667|binding|INFO|Setting lport 68623d1a-0888-456f-80bc-cddf35b56984 ovn-installed in OVS
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:12Z|00668|binding|INFO|Setting lport 68623d1a-0888-456f-80bc-cddf35b56984 up in Southbound
Oct  2 09:02:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:12Z|00669|binding|INFO|Releasing lport 68623d1a-0888-456f-80bc-cddf35b56984 from this chassis (sb_readonly=1)
Oct  2 09:02:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:12Z|00670|if_status|INFO|Dropped 1 log messages in last 1464 seconds (most recently, 1464 seconds ago) due to excessive rate
Oct  2 09:02:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:12Z|00671|if_status|INFO|Not setting lport 68623d1a-0888-456f-80bc-cddf35b56984 down as sb is readonly
Oct  2 09:02:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:12Z|00672|binding|INFO|Removing iface tap68623d1a-08 ovn-installed in OVS
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.602 2 INFO nova.virt.libvirt.driver [-] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Instance destroyed successfully.#033[00m
Oct  2 09:02:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:12Z|00673|binding|INFO|Releasing lport 68623d1a-0888-456f-80bc-cddf35b56984 from this chassis (sb_readonly=0)
Oct  2 09:02:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:12Z|00674|binding|INFO|Setting lport 68623d1a-0888-456f-80bc-cddf35b56984 down in Southbound
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.603 2 DEBUG nova.objects.instance [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'resources' on Instance uuid 4d1b443f-821c-4181-bbff-248387e3cdec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.612 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:1f:d4 10.100.0.10'], port_security=['fa:16:3e:b5:1f:d4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4d1b443f-821c-4181-bbff-248387e3cdec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=68623d1a-0888-456f-80bc-cddf35b56984) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.638 2 DEBUG nova.virt.libvirt.vif [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1224939934',display_name='tempest-ServersTestJSON-server-1224939934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1224939934',id=172,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:01:47Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-uywtjk9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:01:47Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=4d1b443f-821c-4181-bbff-248387e3cdec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68623d1a-0888-456f-80bc-cddf35b56984", "address": "fa:16:3e:b5:1f:d4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68623d1a-08", "ovs_interfaceid": "68623d1a-0888-456f-80bc-cddf35b56984", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.639 2 DEBUG nova.network.os_vif_util [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "68623d1a-0888-456f-80bc-cddf35b56984", "address": "fa:16:3e:b5:1f:d4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68623d1a-08", "ovs_interfaceid": "68623d1a-0888-456f-80bc-cddf35b56984", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.639 2 DEBUG nova.network.os_vif_util [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:1f:d4,bridge_name='br-int',has_traffic_filtering=True,id=68623d1a-0888-456f-80bc-cddf35b56984,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68623d1a-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.640 2 DEBUG os_vif [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:1f:d4,bridge_name='br-int',has_traffic_filtering=True,id=68623d1a-0888-456f-80bc-cddf35b56984,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68623d1a-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:02:12 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[310879]: [NOTICE]   (310883) : haproxy version is 2.8.14-c23fe91
Oct  2 09:02:12 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[310879]: [NOTICE]   (310883) : path to executable is /usr/sbin/haproxy
Oct  2 09:02:12 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[310879]: [WARNING]  (310883) : Exiting Master process...
Oct  2 09:02:12 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[310879]: [WARNING]  (310883) : Exiting Master process...
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68623d1a-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:12 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[310879]: [ALERT]    (310883) : Current worker (310885) exited with code 143 (Terminated)
Oct  2 09:02:12 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[310879]: [WARNING]  (310883) : All workers exited. Exiting... (0)
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 systemd[1]: libpod-4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3.scope: Deactivated successfully.
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.648 2 INFO os_vif [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:1f:d4,bridge_name='br-int',has_traffic_filtering=True,id=68623d1a-0888-456f-80bc-cddf35b56984,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68623d1a-08')#033[00m
Oct  2 09:02:12 np0005466031 podman[311568]: 2025-10-02 13:02:12.652641101 +0000 UTC m=+0.072065877 container died 4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 09:02:12 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3-userdata-shm.mount: Deactivated successfully.
Oct  2 09:02:12 np0005466031 systemd[1]: var-lib-containers-storage-overlay-d90e8c412b2a8be5a31e710d385b440c74eaa543d3024768fd57586e3f94bb05-merged.mount: Deactivated successfully.
Oct  2 09:02:12 np0005466031 podman[311568]: 2025-10-02 13:02:12.753681462 +0000 UTC m=+0.173106238 container cleanup 4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:02:12 np0005466031 systemd[1]: libpod-conmon-4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3.scope: Deactivated successfully.
Oct  2 09:02:12 np0005466031 podman[311617]: 2025-10-02 13:02:12.848587757 +0000 UTC m=+0.074093356 container remove 4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.855 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[14e690ce-1690-4f8e-9031-687d456de1b1]: (4, ('Thu Oct  2 01:02:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 (4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3)\n4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3\nThu Oct  2 01:02:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 (4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3)\n4205e80567d1c5f6c1ce13c44f26031dbec777843c670b1b55d5a96501e940d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.857 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9dab1694-1d1b-4215-92e9-935429649c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.858 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 kernel: tap052f341a-00: left promiscuous mode
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.865 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6dff04f7-4a31-444a-a4f0-9511e98ca48a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:12 np0005466031 nova_compute[235803]: 2025-10-02 13:02:12.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.894 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5a558d-fd9e-48fd-ad7f-8bcaa372dc9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.895 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7a48d399-7a4c-45d2-9951-c0e54608e714]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.911 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7235ee71-24fb-47ef-8373-5e6e7693ad1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795691, 'reachable_time': 25471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311631, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:12 np0005466031 systemd[1]: run-netns-ovnmeta\x2d052f341a\x2d0628\x2d4183\x2da5e0\x2d76312bc986c6.mount: Deactivated successfully.
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.917 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.917 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7b41b3-b723-4030-a194-980ffd505a68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.918 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 68623d1a-0888-456f-80bc-cddf35b56984 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 unbound from our chassis#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.920 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 052f341a-0628-4183-a5e0-76312bc986c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.921 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d4916fd1-477f-4d55-a296-9e804fdbb99d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.921 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 68623d1a-0888-456f-80bc-cddf35b56984 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 unbound from our chassis#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.924 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 052f341a-0628-4183-a5e0-76312bc986c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:02:12 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:12.925 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c19f0a7e-2799-46ab-a302-9cbe515e6418]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:13.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:14.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:14 np0005466031 nova_compute[235803]: 2025-10-02 13:02:14.296 2 INFO nova.virt.libvirt.driver [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Deleting instance files /var/lib/nova/instances/4d1b443f-821c-4181-bbff-248387e3cdec_del#033[00m
Oct  2 09:02:14 np0005466031 nova_compute[235803]: 2025-10-02 13:02:14.297 2 INFO nova.virt.libvirt.driver [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Deletion of /var/lib/nova/instances/4d1b443f-821c-4181-bbff-248387e3cdec_del complete#033[00m
Oct  2 09:02:14 np0005466031 nova_compute[235803]: 2025-10-02 13:02:14.343 2 INFO nova.compute.manager [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Took 2.00 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:02:14 np0005466031 nova_compute[235803]: 2025-10-02 13:02:14.344 2 DEBUG oslo.service.loopingcall [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:02:14 np0005466031 nova_compute[235803]: 2025-10-02 13:02:14.344 2 DEBUG nova.compute.manager [-] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:02:14 np0005466031 nova_compute[235803]: 2025-10-02 13:02:14.344 2 DEBUG nova.network.neutron [-] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:02:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:15 np0005466031 nova_compute[235803]: 2025-10-02 13:02:15.210 2 DEBUG nova.compute.manager [req-4b5099a5-9f94-41f8-888c-434e270140e2 req-1f2d8915-277e-46af-98d7-e796da1adf95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Received event network-vif-unplugged-68623d1a-0888-456f-80bc-cddf35b56984 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:15 np0005466031 nova_compute[235803]: 2025-10-02 13:02:15.210 2 DEBUG oslo_concurrency.lockutils [req-4b5099a5-9f94-41f8-888c-434e270140e2 req-1f2d8915-277e-46af-98d7-e796da1adf95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:15 np0005466031 nova_compute[235803]: 2025-10-02 13:02:15.210 2 DEBUG oslo_concurrency.lockutils [req-4b5099a5-9f94-41f8-888c-434e270140e2 req-1f2d8915-277e-46af-98d7-e796da1adf95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:15 np0005466031 nova_compute[235803]: 2025-10-02 13:02:15.211 2 DEBUG oslo_concurrency.lockutils [req-4b5099a5-9f94-41f8-888c-434e270140e2 req-1f2d8915-277e-46af-98d7-e796da1adf95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:15 np0005466031 nova_compute[235803]: 2025-10-02 13:02:15.211 2 DEBUG nova.compute.manager [req-4b5099a5-9f94-41f8-888c-434e270140e2 req-1f2d8915-277e-46af-98d7-e796da1adf95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] No waiting events found dispatching network-vif-unplugged-68623d1a-0888-456f-80bc-cddf35b56984 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:15 np0005466031 nova_compute[235803]: 2025-10-02 13:02:15.211 2 DEBUG nova.compute.manager [req-4b5099a5-9f94-41f8-888c-434e270140e2 req-1f2d8915-277e-46af-98d7-e796da1adf95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Received event network-vif-unplugged-68623d1a-0888-456f-80bc-cddf35b56984 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:02:15 np0005466031 nova_compute[235803]: 2025-10-02 13:02:15.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:02:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:15.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:02:16 np0005466031 nova_compute[235803]: 2025-10-02 13:02:16.003 2 DEBUG nova.network.neutron [-] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:16 np0005466031 nova_compute[235803]: 2025-10-02 13:02:16.056 2 INFO nova.compute.manager [-] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Took 1.71 seconds to deallocate network for instance.#033[00m
Oct  2 09:02:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:16.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:16 np0005466031 nova_compute[235803]: 2025-10-02 13:02:16.110 2 DEBUG oslo_concurrency.lockutils [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:16 np0005466031 nova_compute[235803]: 2025-10-02 13:02:16.111 2 DEBUG oslo_concurrency.lockutils [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:16 np0005466031 nova_compute[235803]: 2025-10-02 13:02:16.194 2 DEBUG oslo_concurrency.processutils [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1321131216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:16 np0005466031 nova_compute[235803]: 2025-10-02 13:02:16.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:16 np0005466031 nova_compute[235803]: 2025-10-02 13:02:16.653 2 DEBUG oslo_concurrency.processutils [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:16 np0005466031 nova_compute[235803]: 2025-10-02 13:02:16.659 2 DEBUG nova.compute.provider_tree [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:02:16 np0005466031 nova_compute[235803]: 2025-10-02 13:02:16.678 2 DEBUG nova.scheduler.client.report [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:02:16 np0005466031 nova_compute[235803]: 2025-10-02 13:02:16.713 2 DEBUG oslo_concurrency.lockutils [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:16 np0005466031 nova_compute[235803]: 2025-10-02 13:02:16.744 2 INFO nova.scheduler.client.report [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Deleted allocations for instance 4d1b443f-821c-4181-bbff-248387e3cdec#033[00m
Oct  2 09:02:16 np0005466031 nova_compute[235803]: 2025-10-02 13:02:16.902 2 DEBUG oslo_concurrency.lockutils [None req-59705f59-0e24-4f78-b288-fd59447a0612 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.460 2 DEBUG nova.compute.manager [req-1f4d91cc-dbf6-4243-9e43-da6fd5f8f725 req-b6883001-4418-477b-87d4-6c38ef6435f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Received event network-vif-plugged-68623d1a-0888-456f-80bc-cddf35b56984 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.461 2 DEBUG oslo_concurrency.lockutils [req-1f4d91cc-dbf6-4243-9e43-da6fd5f8f725 req-b6883001-4418-477b-87d4-6c38ef6435f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.462 2 DEBUG oslo_concurrency.lockutils [req-1f4d91cc-dbf6-4243-9e43-da6fd5f8f725 req-b6883001-4418-477b-87d4-6c38ef6435f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.462 2 DEBUG oslo_concurrency.lockutils [req-1f4d91cc-dbf6-4243-9e43-da6fd5f8f725 req-b6883001-4418-477b-87d4-6c38ef6435f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d1b443f-821c-4181-bbff-248387e3cdec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.462 2 DEBUG nova.compute.manager [req-1f4d91cc-dbf6-4243-9e43-da6fd5f8f725 req-b6883001-4418-477b-87d4-6c38ef6435f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] No waiting events found dispatching network-vif-plugged-68623d1a-0888-456f-80bc-cddf35b56984 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.462 2 WARNING nova.compute.manager [req-1f4d91cc-dbf6-4243-9e43-da6fd5f8f725 req-b6883001-4418-477b-87d4-6c38ef6435f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Received unexpected event network-vif-plugged-68623d1a-0888-456f-80bc-cddf35b56984 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.462 2 DEBUG nova.compute.manager [req-1f4d91cc-dbf6-4243-9e43-da6fd5f8f725 req-b6883001-4418-477b-87d4-6c38ef6435f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Received event network-vif-deleted-68623d1a-0888-456f-80bc-cddf35b56984 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.653 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.654 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.676 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.677 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.677 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.677 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:02:17 np0005466031 nova_compute[235803]: 2025-10-02 13:02:17.678 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 09:02:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:17.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 09:02:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:02:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:18.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:02:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3731738114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:18 np0005466031 nova_compute[235803]: 2025-10-02 13:02:18.163 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:18 np0005466031 nova_compute[235803]: 2025-10-02 13:02:18.372 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:02:18 np0005466031 nova_compute[235803]: 2025-10-02 13:02:18.373 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4184MB free_disk=20.883583068847656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:02:18 np0005466031 nova_compute[235803]: 2025-10-02 13:02:18.374 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:18 np0005466031 nova_compute[235803]: 2025-10-02 13:02:18.374 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:18 np0005466031 nova_compute[235803]: 2025-10-02 13:02:18.423 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:02:18 np0005466031 nova_compute[235803]: 2025-10-02 13:02:18.423 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:02:18 np0005466031 nova_compute[235803]: 2025-10-02 13:02:18.438 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/601393838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:18 np0005466031 nova_compute[235803]: 2025-10-02 13:02:18.914 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:18 np0005466031 nova_compute[235803]: 2025-10-02 13:02:18.919 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:02:18 np0005466031 nova_compute[235803]: 2025-10-02 13:02:18.948 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:02:19 np0005466031 nova_compute[235803]: 2025-10-02 13:02:19.003 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:02:19 np0005466031 nova_compute[235803]: 2025-10-02 13:02:19.004 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:19 np0005466031 nova_compute[235803]: 2025-10-02 13:02:19.986 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:19.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:02:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:20.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:02:20 np0005466031 nova_compute[235803]: 2025-10-02 13:02:20.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:21 np0005466031 nova_compute[235803]: 2025-10-02 13:02:21.186 2 DEBUG nova.compute.manager [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Oct  2 09:02:21 np0005466031 nova_compute[235803]: 2025-10-02 13:02:21.268 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:21 np0005466031 nova_compute[235803]: 2025-10-02 13:02:21.269 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:21 np0005466031 nova_compute[235803]: 2025-10-02 13:02:21.287 2 DEBUG nova.objects.instance [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_requests' on Instance uuid b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:21 np0005466031 nova_compute[235803]: 2025-10-02 13:02:21.301 2 DEBUG nova.virt.hardware [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:02:21 np0005466031 nova_compute[235803]: 2025-10-02 13:02:21.302 2 INFO nova.compute.claims [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:02:21 np0005466031 nova_compute[235803]: 2025-10-02 13:02:21.302 2 DEBUG nova.objects.instance [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:21 np0005466031 nova_compute[235803]: 2025-10-02 13:02:21.315 2 DEBUG nova.objects.instance [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:21 np0005466031 nova_compute[235803]: 2025-10-02 13:02:21.367 2 INFO nova.compute.resource_tracker [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updating resource usage from migration 67833b05-bd17-48b1-974a-39a83762f931#033[00m
Oct  2 09:02:21 np0005466031 nova_compute[235803]: 2025-10-02 13:02:21.367 2 DEBUG nova.compute.resource_tracker [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Starting to track incoming migration 67833b05-bd17-48b1-974a-39a83762f931 with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct  2 09:02:21 np0005466031 nova_compute[235803]: 2025-10-02 13:02:21.429 2 DEBUG oslo_concurrency.processutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:21 np0005466031 nova_compute[235803]: 2025-10-02 13:02:21.990 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:21 np0005466031 nova_compute[235803]: 2025-10-02 13:02:21.990 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:21.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2643813130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.008 2 DEBUG nova.compute.manager [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.022 2 DEBUG oslo_concurrency.processutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.027 2 DEBUG nova.compute.provider_tree [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.050 2 DEBUG nova.scheduler.client.report [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.085 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.086 2 INFO nova.compute.manager [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Migrating#033[00m
Oct  2 09:02:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:02:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:22.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.117 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.117 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.123 2 DEBUG nova.virt.hardware [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.123 2 INFO nova.compute.claims [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.260 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2704903679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.748 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.754 2 DEBUG nova.compute.provider_tree [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.782 2 DEBUG nova.scheduler.client.report [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.847 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.848 2 DEBUG nova.compute.manager [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.907 2 DEBUG nova.compute.manager [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.907 2 DEBUG nova.network.neutron [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.941 2 INFO nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:02:22 np0005466031 nova_compute[235803]: 2025-10-02 13:02:22.961 2 DEBUG nova.compute.manager [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.052 2 DEBUG nova.compute.manager [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.053 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.054 2 INFO nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Creating image(s)#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.082 2 DEBUG nova.storage.rbd_utils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.114 2 DEBUG nova.storage.rbd_utils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.145 2 DEBUG nova.storage.rbd_utils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.148 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.219 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.220 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.220 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.221 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.251 2 DEBUG nova.storage.rbd_utils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.254 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.624 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410128.6236122, f32c2187-a56f-421e-8f7a-5ebf679654cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.624 2 INFO nova.compute.manager [-] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.642 2 DEBUG nova.compute.manager [None req-9b81034e-ff02-4858-a8fc-f55a62ceafbc - - - - - -] [instance: f32c2187-a56f-421e-8f7a-5ebf679654cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:23 np0005466031 nova_compute[235803]: 2025-10-02 13:02:23.684 2 DEBUG nova.policy [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b04159d5bffe4259876ce57aec09716e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:02:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:24.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:24.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:24 np0005466031 nova_compute[235803]: 2025-10-02 13:02:24.261 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:24 np0005466031 nova_compute[235803]: 2025-10-02 13:02:24.368 2 DEBUG nova.storage.rbd_utils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] resizing rbd image 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:02:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:24 np0005466031 nova_compute[235803]: 2025-10-02 13:02:24.579 2 DEBUG nova.objects.instance [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'migration_context' on Instance uuid 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:24 np0005466031 nova_compute[235803]: 2025-10-02 13:02:24.594 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:02:24 np0005466031 nova_compute[235803]: 2025-10-02 13:02:24.594 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Ensure instance console log exists: /var/lib/nova/instances/1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:02:24 np0005466031 nova_compute[235803]: 2025-10-02 13:02:24.595 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:24 np0005466031 nova_compute[235803]: 2025-10-02 13:02:24.595 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:24 np0005466031 nova_compute[235803]: 2025-10-02 13:02:24.596 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:24 np0005466031 nova_compute[235803]: 2025-10-02 13:02:24.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:25 np0005466031 nova_compute[235803]: 2025-10-02 13:02:25.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:25 np0005466031 nova_compute[235803]: 2025-10-02 13:02:25.653 2 DEBUG nova.network.neutron [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Successfully created port: 8cd36551-5d1a-4860-94aa-a914395f6a92 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:02:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:25.871 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:25.871 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:25.872 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:26.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:26 np0005466031 systemd[1]: Created slice User Slice of UID 42436.
Oct  2 09:02:26 np0005466031 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct  2 09:02:26 np0005466031 systemd-logind[786]: New session 65 of user nova.
Oct  2 09:02:26 np0005466031 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct  2 09:02:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:02:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:26.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:02:26 np0005466031 systemd[1]: Starting User Manager for UID 42436...
Oct  2 09:02:26 np0005466031 systemd[311922]: Queued start job for default target Main User Target.
Oct  2 09:02:26 np0005466031 systemd[311922]: Created slice User Application Slice.
Oct  2 09:02:26 np0005466031 systemd[311922]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 09:02:26 np0005466031 systemd[311922]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 09:02:26 np0005466031 systemd[311922]: Reached target Paths.
Oct  2 09:02:26 np0005466031 systemd[311922]: Reached target Timers.
Oct  2 09:02:26 np0005466031 systemd[311922]: Starting D-Bus User Message Bus Socket...
Oct  2 09:02:26 np0005466031 systemd[311922]: Starting Create User's Volatile Files and Directories...
Oct  2 09:02:26 np0005466031 systemd[311922]: Finished Create User's Volatile Files and Directories.
Oct  2 09:02:26 np0005466031 systemd[311922]: Listening on D-Bus User Message Bus Socket.
Oct  2 09:02:26 np0005466031 systemd[311922]: Reached target Sockets.
Oct  2 09:02:26 np0005466031 systemd[311922]: Reached target Basic System.
Oct  2 09:02:26 np0005466031 systemd[311922]: Reached target Main User Target.
Oct  2 09:02:26 np0005466031 systemd[311922]: Startup finished in 163ms.
Oct  2 09:02:26 np0005466031 systemd[1]: Started User Manager for UID 42436.
Oct  2 09:02:26 np0005466031 systemd[1]: Started Session 65 of User nova.
Oct  2 09:02:26 np0005466031 systemd[1]: session-65.scope: Deactivated successfully.
Oct  2 09:02:26 np0005466031 systemd-logind[786]: Session 65 logged out. Waiting for processes to exit.
Oct  2 09:02:26 np0005466031 systemd-logind[786]: Removed session 65.
Oct  2 09:02:26 np0005466031 systemd-logind[786]: New session 67 of user nova.
Oct  2 09:02:26 np0005466031 systemd[1]: Started Session 67 of User nova.
Oct  2 09:02:26 np0005466031 systemd[1]: session-67.scope: Deactivated successfully.
Oct  2 09:02:26 np0005466031 systemd-logind[786]: Session 67 logged out. Waiting for processes to exit.
Oct  2 09:02:26 np0005466031 systemd-logind[786]: Removed session 67.
Oct  2 09:02:27 np0005466031 nova_compute[235803]: 2025-10-02 13:02:27.003 2 DEBUG nova.network.neutron [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Successfully updated port: 8cd36551-5d1a-4860-94aa-a914395f6a92 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:02:27 np0005466031 nova_compute[235803]: 2025-10-02 13:02:27.020 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "refresh_cache-1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:02:27 np0005466031 nova_compute[235803]: 2025-10-02 13:02:27.020 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquired lock "refresh_cache-1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:02:27 np0005466031 nova_compute[235803]: 2025-10-02 13:02:27.020 2 DEBUG nova.network.neutron [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:02:27 np0005466031 nova_compute[235803]: 2025-10-02 13:02:27.141 2 DEBUG nova.compute.manager [req-53e52dc4-5bca-4a07-bfcb-7822401b3719 req-254bf02a-3917-4791-af72-29b8d95a2bf3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Received event network-changed-8cd36551-5d1a-4860-94aa-a914395f6a92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:27 np0005466031 nova_compute[235803]: 2025-10-02 13:02:27.141 2 DEBUG nova.compute.manager [req-53e52dc4-5bca-4a07-bfcb-7822401b3719 req-254bf02a-3917-4791-af72-29b8d95a2bf3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Refreshing instance network info cache due to event network-changed-8cd36551-5d1a-4860-94aa-a914395f6a92. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:02:27 np0005466031 nova_compute[235803]: 2025-10-02 13:02:27.141 2 DEBUG oslo_concurrency.lockutils [req-53e52dc4-5bca-4a07-bfcb-7822401b3719 req-254bf02a-3917-4791-af72-29b8d95a2bf3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:02:27 np0005466031 nova_compute[235803]: 2025-10-02 13:02:27.598 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410132.5957851, 4d1b443f-821c-4181-bbff-248387e3cdec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:27 np0005466031 nova_compute[235803]: 2025-10-02 13:02:27.599 2 INFO nova.compute.manager [-] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:02:27 np0005466031 nova_compute[235803]: 2025-10-02 13:02:27.642 2 DEBUG nova.compute.manager [None req-701eb3fe-d06e-41a7-81e6-ff630aec1ba5 - - - - - -] [instance: 4d1b443f-821c-4181-bbff-248387e3cdec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:27 np0005466031 nova_compute[235803]: 2025-10-02 13:02:27.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:27 np0005466031 nova_compute[235803]: 2025-10-02 13:02:27.728 2 DEBUG nova.network.neutron [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:02:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:28.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:28.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:29 np0005466031 nova_compute[235803]: 2025-10-02 13:02:29.939 2 DEBUG nova.network.neutron [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Updating instance_info_cache with network_info: [{"id": "8cd36551-5d1a-4860-94aa-a914395f6a92", "address": "fa:16:3e:70:23:46", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd36551-5d", "ovs_interfaceid": "8cd36551-5d1a-4860-94aa-a914395f6a92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:29 np0005466031 nova_compute[235803]: 2025-10-02 13:02:29.981 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Releasing lock "refresh_cache-1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:02:29 np0005466031 nova_compute[235803]: 2025-10-02 13:02:29.982 2 DEBUG nova.compute.manager [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Instance network_info: |[{"id": "8cd36551-5d1a-4860-94aa-a914395f6a92", "address": "fa:16:3e:70:23:46", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd36551-5d", "ovs_interfaceid": "8cd36551-5d1a-4860-94aa-a914395f6a92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:02:29 np0005466031 nova_compute[235803]: 2025-10-02 13:02:29.982 2 DEBUG oslo_concurrency.lockutils [req-53e52dc4-5bca-4a07-bfcb-7822401b3719 req-254bf02a-3917-4791-af72-29b8d95a2bf3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:02:29 np0005466031 nova_compute[235803]: 2025-10-02 13:02:29.982 2 DEBUG nova.network.neutron [req-53e52dc4-5bca-4a07-bfcb-7822401b3719 req-254bf02a-3917-4791-af72-29b8d95a2bf3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Refreshing network info cache for port 8cd36551-5d1a-4860-94aa-a914395f6a92 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:02:29 np0005466031 nova_compute[235803]: 2025-10-02 13:02:29.984 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Start _get_guest_xml network_info=[{"id": "8cd36551-5d1a-4860-94aa-a914395f6a92", "address": "fa:16:3e:70:23:46", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd36551-5d", "ovs_interfaceid": "8cd36551-5d1a-4860-94aa-a914395f6a92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:02:29 np0005466031 nova_compute[235803]: 2025-10-02 13:02:29.990 2 WARNING nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:02:29 np0005466031 nova_compute[235803]: 2025-10-02 13:02:29.996 2 DEBUG nova.virt.libvirt.host [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:02:29 np0005466031 nova_compute[235803]: 2025-10-02 13:02:29.997 2 DEBUG nova.virt.libvirt.host [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:29.999 2 DEBUG nova.virt.libvirt.host [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.000 2 DEBUG nova.virt.libvirt.host [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.001 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.002 2 DEBUG nova.virt.hardware [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.002 2 DEBUG nova.virt.hardware [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.002 2 DEBUG nova.virt.hardware [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.003 2 DEBUG nova.virt.hardware [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.003 2 DEBUG nova.virt.hardware [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.003 2 DEBUG nova.virt.hardware [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.004 2 DEBUG nova.virt.hardware [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.004 2 DEBUG nova.virt.hardware [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.004 2 DEBUG nova.virt.hardware [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.004 2 DEBUG nova.virt.hardware [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.005 2 DEBUG nova.virt.hardware [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.008 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:30.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:30.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:02:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1114902513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.490 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.515 2 DEBUG nova.storage.rbd_utils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.519 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.880 2 DEBUG nova.compute.manager [req-8224ff38-10d4-4d92-831e-bc93afb33e12 req-83aa9217-b279-44a5-beda-e0d8fa994394 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-unplugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.881 2 DEBUG oslo_concurrency.lockutils [req-8224ff38-10d4-4d92-831e-bc93afb33e12 req-83aa9217-b279-44a5-beda-e0d8fa994394 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.881 2 DEBUG oslo_concurrency.lockutils [req-8224ff38-10d4-4d92-831e-bc93afb33e12 req-83aa9217-b279-44a5-beda-e0d8fa994394 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.881 2 DEBUG oslo_concurrency.lockutils [req-8224ff38-10d4-4d92-831e-bc93afb33e12 req-83aa9217-b279-44a5-beda-e0d8fa994394 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.881 2 DEBUG nova.compute.manager [req-8224ff38-10d4-4d92-831e-bc93afb33e12 req-83aa9217-b279-44a5-beda-e0d8fa994394 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] No waiting events found dispatching network-vif-unplugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.882 2 WARNING nova.compute.manager [req-8224ff38-10d4-4d92-831e-bc93afb33e12 req-83aa9217-b279-44a5-beda-e0d8fa994394 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received unexpected event network-vif-unplugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 09:02:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:02:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/237972929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.961 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.963 2 DEBUG nova.virt.libvirt.vif [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-778388894',display_name='tempest-ServersTestJSON-server-778388894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-778388894',id=175,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-4gj2qh38',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=Tag
List,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:22Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8cd36551-5d1a-4860-94aa-a914395f6a92", "address": "fa:16:3e:70:23:46", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd36551-5d", "ovs_interfaceid": "8cd36551-5d1a-4860-94aa-a914395f6a92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.963 2 DEBUG nova.network.os_vif_util [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "8cd36551-5d1a-4860-94aa-a914395f6a92", "address": "fa:16:3e:70:23:46", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd36551-5d", "ovs_interfaceid": "8cd36551-5d1a-4860-94aa-a914395f6a92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.964 2 DEBUG nova.network.os_vif_util [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:23:46,bridge_name='br-int',has_traffic_filtering=True,id=8cd36551-5d1a-4860-94aa-a914395f6a92,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd36551-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.965 2 DEBUG nova.objects.instance [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.984 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  <uuid>1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485</uuid>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  <name>instance-000000af</name>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServersTestJSON-server-778388894</nova:name>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:02:29</nova:creationTime>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <nova:user uuid="b04159d5bffe4259876ce57aec09716e">tempest-ServersTestJSON-146860306-project-member</nova:user>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <nova:project uuid="a6be0e77fb5b4355b4f2276c9e57d2bd">tempest-ServersTestJSON-146860306</nova:project>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <nova:port uuid="8cd36551-5d1a-4860-94aa-a914395f6a92">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <entry name="serial">1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485</entry>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <entry name="uuid">1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485</entry>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk.config">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:70:23:46"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <target dev="tap8cd36551-5d"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485/console.log" append="off"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:02:30 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:02:30 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:02:30 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:02:30 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.986 2 DEBUG nova.compute.manager [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Preparing to wait for external event network-vif-plugged-8cd36551-5d1a-4860-94aa-a914395f6a92 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.986 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.986 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.986 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.987 2 DEBUG nova.virt.libvirt.vif [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-778388894',display_name='tempest-ServersTestJSON-server-778388894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-778388894',id=175,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-4gj2qh38',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'
},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:22Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8cd36551-5d1a-4860-94aa-a914395f6a92", "address": "fa:16:3e:70:23:46", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd36551-5d", "ovs_interfaceid": "8cd36551-5d1a-4860-94aa-a914395f6a92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.987 2 DEBUG nova.network.os_vif_util [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "8cd36551-5d1a-4860-94aa-a914395f6a92", "address": "fa:16:3e:70:23:46", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd36551-5d", "ovs_interfaceid": "8cd36551-5d1a-4860-94aa-a914395f6a92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.988 2 DEBUG nova.network.os_vif_util [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:23:46,bridge_name='br-int',has_traffic_filtering=True,id=8cd36551-5d1a-4860-94aa-a914395f6a92,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd36551-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.988 2 DEBUG os_vif [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:23:46,bridge_name='br-int',has_traffic_filtering=True,id=8cd36551-5d1a-4860-94aa-a914395f6a92,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd36551-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.993 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8cd36551-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.993 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8cd36551-5d, col_values=(('external_ids', {'iface-id': '8cd36551-5d1a-4860-94aa-a914395f6a92', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:23:46', 'vm-uuid': '1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:30 np0005466031 nova_compute[235803]: 2025-10-02 13:02:30.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:30 np0005466031 NetworkManager[44907]: <info>  [1759410150.9987] manager: (tap8cd36551-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Oct  2 09:02:31 np0005466031 nova_compute[235803]: 2025-10-02 13:02:31.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:31 np0005466031 nova_compute[235803]: 2025-10-02 13:02:31.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:31 np0005466031 nova_compute[235803]: 2025-10-02 13:02:31.006 2 INFO os_vif [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:23:46,bridge_name='br-int',has_traffic_filtering=True,id=8cd36551-5d1a-4860-94aa-a914395f6a92,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd36551-5d')#033[00m
Oct  2 09:02:31 np0005466031 nova_compute[235803]: 2025-10-02 13:02:31.059 2 INFO nova.network.neutron [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updating port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 09:02:31 np0005466031 nova_compute[235803]: 2025-10-02 13:02:31.067 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:02:31 np0005466031 nova_compute[235803]: 2025-10-02 13:02:31.068 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:02:31 np0005466031 nova_compute[235803]: 2025-10-02 13:02:31.068 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No VIF found with MAC fa:16:3e:70:23:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:02:31 np0005466031 nova_compute[235803]: 2025-10-02 13:02:31.069 2 INFO nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Using config drive#033[00m
Oct  2 09:02:31 np0005466031 nova_compute[235803]: 2025-10-02 13:02:31.099 2 DEBUG nova.storage.rbd_utils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:31 np0005466031 podman[312079]: 2025-10-02 13:02:31.630560978 +0000 UTC m=+0.053685658 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 09:02:31 np0005466031 nova_compute[235803]: 2025-10-02 13:02:31.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:31 np0005466031 podman[312080]: 2025-10-02 13:02:31.67543593 +0000 UTC m=+0.098622852 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:02:31 np0005466031 nova_compute[235803]: 2025-10-02 13:02:31.931 2 INFO nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Creating config drive at /var/lib/nova/instances/1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485/disk.config#033[00m
Oct  2 09:02:31 np0005466031 nova_compute[235803]: 2025-10-02 13:02:31.936 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp24he5q1q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:32.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:32 np0005466031 nova_compute[235803]: 2025-10-02 13:02:32.071 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp24he5q1q" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:32 np0005466031 nova_compute[235803]: 2025-10-02 13:02:32.097 2 DEBUG nova.storage.rbd_utils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:32 np0005466031 nova_compute[235803]: 2025-10-02 13:02:32.101 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485/disk.config 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:32.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.043 2 DEBUG nova.compute.manager [req-775ebb35-9303-4838-ae69-3452a1b5fd11 req-dde5f75d-b5f0-46e0-8cd4-b75ea0dd1f52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.044 2 DEBUG oslo_concurrency.lockutils [req-775ebb35-9303-4838-ae69-3452a1b5fd11 req-dde5f75d-b5f0-46e0-8cd4-b75ea0dd1f52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.044 2 DEBUG oslo_concurrency.lockutils [req-775ebb35-9303-4838-ae69-3452a1b5fd11 req-dde5f75d-b5f0-46e0-8cd4-b75ea0dd1f52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.045 2 DEBUG oslo_concurrency.lockutils [req-775ebb35-9303-4838-ae69-3452a1b5fd11 req-dde5f75d-b5f0-46e0-8cd4-b75ea0dd1f52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.045 2 DEBUG nova.compute.manager [req-775ebb35-9303-4838-ae69-3452a1b5fd11 req-dde5f75d-b5f0-46e0-8cd4-b75ea0dd1f52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] No waiting events found dispatching network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.046 2 WARNING nova.compute.manager [req-775ebb35-9303-4838-ae69-3452a1b5fd11 req-dde5f75d-b5f0-46e0-8cd4-b75ea0dd1f52 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received unexpected event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.047 2 DEBUG oslo_concurrency.processutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485/disk.config 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.946s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.047 2 INFO nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Deleting local config drive /var/lib/nova/instances/1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485/disk.config because it was imported into RBD.#033[00m
Oct  2 09:02:33 np0005466031 kernel: tap8cd36551-5d: entered promiscuous mode
Oct  2 09:02:33 np0005466031 NetworkManager[44907]: <info>  [1759410153.1223] manager: (tap8cd36551-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/302)
Oct  2 09:02:33 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:33Z|00675|binding|INFO|Claiming lport 8cd36551-5d1a-4860-94aa-a914395f6a92 for this chassis.
Oct  2 09:02:33 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:33Z|00676|binding|INFO|8cd36551-5d1a-4860-94aa-a914395f6a92: Claiming fa:16:3e:70:23:46 10.100.0.10
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.141 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:23:46 10.100.0.10'], port_security=['fa:16:3e:70:23:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=8cd36551-5d1a-4860-94aa-a914395f6a92) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:33 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:33Z|00677|binding|INFO|Setting lport 8cd36551-5d1a-4860-94aa-a914395f6a92 ovn-installed in OVS
Oct  2 09:02:33 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:33Z|00678|binding|INFO|Setting lport 8cd36551-5d1a-4860-94aa-a914395f6a92 up in Southbound
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.145 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 8cd36551-5d1a-4860-94aa-a914395f6a92 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 bound to our chassis#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.149 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 052f341a-0628-4183-a5e0-76312bc986c6#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.163 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a84b42-e927-4bdb-a9a5-1ddd4798b710]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.165 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap052f341a-01 in ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:02:33 np0005466031 systemd-udevd[312178]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.167 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap052f341a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.168 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4d28f6ac-6c93-4850-8030-0550ab2865d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.169 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e820a4e2-0f3d-42bb-a48a-a6c7bf429311]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 systemd-machined[192227]: New machine qemu-78-instance-000000af.
Oct  2 09:02:33 np0005466031 NetworkManager[44907]: <info>  [1759410153.1879] device (tap8cd36551-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:02:33 np0005466031 NetworkManager[44907]: <info>  [1759410153.1886] device (tap8cd36551-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.188 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[aac6ed6f-4947-42b5-9f15-94551ac4bb31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 systemd[1]: Started Virtual Machine qemu-78-instance-000000af.
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.217 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[56420baa-3aa7-4d9b-a4c6-0fe3542e6644]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.253 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac3d63c-29cf-47a3-960d-804780cd8e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 NetworkManager[44907]: <info>  [1759410153.2602] manager: (tap052f341a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/303)
Oct  2 09:02:33 np0005466031 systemd-udevd[312182]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.259 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe66ca4-24d6-4d31-9a9e-158e75e4bbcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.291 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.292 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.292 2 DEBUG nova.network.neutron [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.294 2 DEBUG nova.network.neutron [req-53e52dc4-5bca-4a07-bfcb-7822401b3719 req-254bf02a-3917-4791-af72-29b8d95a2bf3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Updated VIF entry in instance network info cache for port 8cd36551-5d1a-4860-94aa-a914395f6a92. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.294 2 DEBUG nova.network.neutron [req-53e52dc4-5bca-4a07-bfcb-7822401b3719 req-254bf02a-3917-4791-af72-29b8d95a2bf3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Updating instance_info_cache with network_info: [{"id": "8cd36551-5d1a-4860-94aa-a914395f6a92", "address": "fa:16:3e:70:23:46", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd36551-5d", "ovs_interfaceid": "8cd36551-5d1a-4860-94aa-a914395f6a92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.299 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf00138-0c1b-427c-8f12-91b69813e764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.303 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b910c695-5c03-47fc-9fa5-54e2cb98baf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 NetworkManager[44907]: <info>  [1759410153.3277] device (tap052f341a-00): carrier: link connected
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.331 2 DEBUG oslo_concurrency.lockutils [req-53e52dc4-5bca-4a07-bfcb-7822401b3719 req-254bf02a-3917-4791-af72-29b8d95a2bf3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.333 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b405d136-bb4d-46e3-a73a-840eef556887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.354 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7ebf50-8750-4c62-bbc5-9532db6f01f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800891, 'reachable_time': 16726, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312211, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.369 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9b255d-0dca-48a8-9447-c8c9c56207bf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:a55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 800891, 'tstamp': 800891}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312212, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.387 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e99890bf-c291-4f8b-b1df-5af3ce5e2a69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800891, 'reachable_time': 16726, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312213, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.416 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f09de306-8942-4513-8683-f2e32125ea1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.476 2 DEBUG nova.compute.manager [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-changed-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.476 2 DEBUG nova.compute.manager [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Refreshing instance network info cache due to event network-changed-43e0d421-2dcd-4a63-a1ab-dc9711a2b840. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.476 2 DEBUG oslo_concurrency.lockutils [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.480 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[37dd609b-e46f-4a09-9144-69510e00fcad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.481 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.481 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.481 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052f341a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:33 np0005466031 kernel: tap052f341a-00: entered promiscuous mode
Oct  2 09:02:33 np0005466031 NetworkManager[44907]: <info>  [1759410153.4846] manager: (tap052f341a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.491 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap052f341a-00, col_values=(('external_ids', {'iface-id': '61e15bc4-7cff-4f2c-a6c4-d987859313b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:33 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:33Z|00679|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.495 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.507 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[507f2fcd-e697-4daa-bde4-07823f07187a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.508 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-052f341a-0628-4183-a5e0-76312bc986c6
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 052f341a-0628-4183-a5e0-76312bc986c6
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:02:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:33.509 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'env', 'PROCESS_TAG=haproxy-052f341a-0628-4183-a5e0-76312bc986c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/052f341a-0628-4183-a5e0-76312bc986c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:02:33 np0005466031 nova_compute[235803]: 2025-10-02 13:02:33.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:33 np0005466031 podman[312241]: 2025-10-02 13:02:33.88391901 +0000 UTC m=+0.028524373 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:02:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:02:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:34.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:02:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:34.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:34 np0005466031 podman[312241]: 2025-10-02 13:02:34.229983411 +0000 UTC m=+0.374588754 container create dfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 09:02:34 np0005466031 systemd[1]: Started libpod-conmon-dfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5.scope.
Oct  2 09:02:34 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:02:34 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393f3bab87689333b61394b8beaa378c6304cacfb08b7828f424a4042ff06227/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:02:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:34 np0005466031 podman[312241]: 2025-10-02 13:02:34.616912219 +0000 UTC m=+0.761517652 container init dfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 09:02:34 np0005466031 podman[312241]: 2025-10-02 13:02:34.630098459 +0000 UTC m=+0.774703812 container start dfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 09:02:34 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[312257]: [NOTICE]   (312261) : New worker (312263) forked
Oct  2 09:02:34 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[312257]: [NOTICE]   (312261) : Loading success.
Oct  2 09:02:34 np0005466031 nova_compute[235803]: 2025-10-02 13:02:34.796 2 DEBUG nova.network.neutron [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updating instance_info_cache with network_info: [{"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:34 np0005466031 nova_compute[235803]: 2025-10-02 13:02:34.816 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:02:34 np0005466031 nova_compute[235803]: 2025-10-02 13:02:34.823 2 DEBUG oslo_concurrency.lockutils [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:02:34 np0005466031 nova_compute[235803]: 2025-10-02 13:02:34.823 2 DEBUG nova.network.neutron [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Refreshing network info cache for port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:02:34 np0005466031 nova_compute[235803]: 2025-10-02 13:02:34.948 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct  2 09:02:34 np0005466031 nova_compute[235803]: 2025-10-02 13:02:34.950 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 09:02:34 np0005466031 nova_compute[235803]: 2025-10-02 13:02:34.950 2 INFO nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Creating image(s)#033[00m
Oct  2 09:02:34 np0005466031 nova_compute[235803]: 2025-10-02 13:02:34.994 2 DEBUG nova.storage.rbd_utils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] creating snapshot(nova-resize) on rbd image(b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.567 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410155.566527, 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.567 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] VM Started (Lifecycle Event)#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.608 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.611 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410155.566748, 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.612 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.633 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.637 2 DEBUG nova.compute.manager [req-f0e477a8-ecbb-4f5c-9603-f82238531de1 req-6a4c927b-5aab-49d5-84fe-921c3bbf388d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Received event network-vif-plugged-8cd36551-5d1a-4860-94aa-a914395f6a92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.637 2 DEBUG oslo_concurrency.lockutils [req-f0e477a8-ecbb-4f5c-9603-f82238531de1 req-6a4c927b-5aab-49d5-84fe-921c3bbf388d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.637 2 DEBUG oslo_concurrency.lockutils [req-f0e477a8-ecbb-4f5c-9603-f82238531de1 req-6a4c927b-5aab-49d5-84fe-921c3bbf388d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.637 2 DEBUG oslo_concurrency.lockutils [req-f0e477a8-ecbb-4f5c-9603-f82238531de1 req-6a4c927b-5aab-49d5-84fe-921c3bbf388d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.638 2 DEBUG nova.compute.manager [req-f0e477a8-ecbb-4f5c-9603-f82238531de1 req-6a4c927b-5aab-49d5-84fe-921c3bbf388d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Processing event network-vif-plugged-8cd36551-5d1a-4860-94aa-a914395f6a92 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.638 2 DEBUG nova.compute.manager [req-f0e477a8-ecbb-4f5c-9603-f82238531de1 req-6a4c927b-5aab-49d5-84fe-921c3bbf388d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Received event network-vif-plugged-8cd36551-5d1a-4860-94aa-a914395f6a92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.638 2 DEBUG oslo_concurrency.lockutils [req-f0e477a8-ecbb-4f5c-9603-f82238531de1 req-6a4c927b-5aab-49d5-84fe-921c3bbf388d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.638 2 DEBUG oslo_concurrency.lockutils [req-f0e477a8-ecbb-4f5c-9603-f82238531de1 req-6a4c927b-5aab-49d5-84fe-921c3bbf388d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.638 2 DEBUG oslo_concurrency.lockutils [req-f0e477a8-ecbb-4f5c-9603-f82238531de1 req-6a4c927b-5aab-49d5-84fe-921c3bbf388d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.638 2 DEBUG nova.compute.manager [req-f0e477a8-ecbb-4f5c-9603-f82238531de1 req-6a4c927b-5aab-49d5-84fe-921c3bbf388d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] No waiting events found dispatching network-vif-plugged-8cd36551-5d1a-4860-94aa-a914395f6a92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.639 2 WARNING nova.compute.manager [req-f0e477a8-ecbb-4f5c-9603-f82238531de1 req-6a4c927b-5aab-49d5-84fe-921c3bbf388d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Received unexpected event network-vif-plugged-8cd36551-5d1a-4860-94aa-a914395f6a92 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.640 2 DEBUG nova.compute.manager [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.644 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.645 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410155.64278, 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.646 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.649 2 INFO nova.virt.libvirt.driver [-] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Instance spawned successfully.#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.649 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.672 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.677 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.677 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.677 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.678 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.678 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.678 2 DEBUG nova.virt.libvirt.driver [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.681 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.710 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.761 2 INFO nova.compute.manager [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Took 12.71 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.761 2 DEBUG nova.compute.manager [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.932 2 INFO nova.compute.manager [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Took 13.84 seconds to build instance.#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.976 2 DEBUG oslo_concurrency.lockutils [None req-b4470985-6034-4327-b69b-ba73955255fd b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:35 np0005466031 nova_compute[235803]: 2025-10-02 13:02:35.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:36.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:02:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:36.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:02:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e351 e351: 3 total, 3 up, 3 in
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.583 2 DEBUG nova.objects.instance [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.793 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.794 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Ensure instance console log exists: /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.794 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:36 np0005466031 systemd[1]: Stopping User Manager for UID 42436...
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.796 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.796 2 DEBUG oslo_concurrency.lockutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:36 np0005466031 systemd[311922]: Activating special unit Exit the Session...
Oct  2 09:02:36 np0005466031 systemd[311922]: Stopped target Main User Target.
Oct  2 09:02:36 np0005466031 systemd[311922]: Stopped target Basic System.
Oct  2 09:02:36 np0005466031 systemd[311922]: Stopped target Paths.
Oct  2 09:02:36 np0005466031 systemd[311922]: Stopped target Sockets.
Oct  2 09:02:36 np0005466031 systemd[311922]: Stopped target Timers.
Oct  2 09:02:36 np0005466031 systemd[311922]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  2 09:02:36 np0005466031 systemd[311922]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 09:02:36 np0005466031 systemd[311922]: Closed D-Bus User Message Bus Socket.
Oct  2 09:02:36 np0005466031 systemd[311922]: Stopped Create User's Volatile Files and Directories.
Oct  2 09:02:36 np0005466031 systemd[311922]: Removed slice User Application Slice.
Oct  2 09:02:36 np0005466031 systemd[311922]: Reached target Shutdown.
Oct  2 09:02:36 np0005466031 systemd[311922]: Finished Exit the Session.
Oct  2 09:02:36 np0005466031 systemd[311922]: Reached target Exit the Session.
Oct  2 09:02:36 np0005466031 systemd[1]: user@42436.service: Deactivated successfully.
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.807 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Start _get_guest_xml network_info=[{"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1867044655", "vif_mac": "fa:16:3e:4a:f7:63"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:02:36 np0005466031 systemd[1]: Stopped User Manager for UID 42436.
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.812 2 WARNING nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.817 2 DEBUG nova.virt.libvirt.host [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.817 2 DEBUG nova.virt.libvirt.host [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:02:36 np0005466031 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.829 2 DEBUG nova.virt.libvirt.host [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.830 2 DEBUG nova.virt.libvirt.host [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.831 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.831 2 DEBUG nova.virt.hardware [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.832 2 DEBUG nova.virt.hardware [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.832 2 DEBUG nova.virt.hardware [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.832 2 DEBUG nova.virt.hardware [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.833 2 DEBUG nova.virt.hardware [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.833 2 DEBUG nova.virt.hardware [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.833 2 DEBUG nova.virt.hardware [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.833 2 DEBUG nova.virt.hardware [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.833 2 DEBUG nova.virt.hardware [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.834 2 DEBUG nova.virt.hardware [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.834 2 DEBUG nova.virt.hardware [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.834 2 DEBUG nova.objects.instance [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:36 np0005466031 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct  2 09:02:36 np0005466031 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct  2 09:02:36 np0005466031 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct  2 09:02:36 np0005466031 systemd[1]: Removed slice User Slice of UID 42436.
Oct  2 09:02:36 np0005466031 nova_compute[235803]: 2025-10-02 13:02:36.857 2 DEBUG oslo_concurrency.processutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.035 2 DEBUG nova.network.neutron [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updated VIF entry in instance network info cache for port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.036 2 DEBUG nova.network.neutron [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updating instance_info_cache with network_info: [{"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.051 2 DEBUG oslo_concurrency.lockutils [req-f02e99a6-c034-4c14-b59c-625c97a30b38 req-2dc1948f-ef93-4715-9091-ccb01b2066bf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:02:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:02:37 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3209806150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.326 2 DEBUG oslo_concurrency.processutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.368 2 DEBUG oslo_concurrency.processutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:02:37 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1729713660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.841 2 DEBUG oslo_concurrency.processutils [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.844 2 DEBUG nova.virt.libvirt.vif [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:01:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1345789876',display_name='tempest-TestNetworkAdvancedServerOps-server-1345789876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1345789876',id=173,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOEIMyLEEiTsPEaNADNpznD0SvPywb5Pg8hG/EPPWqO2JIb485VcJfrXGgJByt8PJyHfyaT1SSoE+QlqZ2pUFHk8hDhg8WQsOqARgR1ox1fbDjGtF3fgbhMsuoES+3OcIQ==',key_name='tempest-TestNetworkAdvancedServerOps-295053675',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:01:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-xifn5c06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:30Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1867044655", "vif_mac": "fa:16:3e:4a:f7:63"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.844 2 DEBUG nova.network.os_vif_util [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1867044655", "vif_mac": "fa:16:3e:4a:f7:63"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.845 2 DEBUG nova.network.os_vif_util [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.848 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  <uuid>b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07</uuid>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  <name>instance-000000ad</name>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  <memory>196608</memory>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1345789876</nova:name>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:02:36</nova:creationTime>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.micro">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <nova:memory>192</nova:memory>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <nova:port uuid="43e0d421-2dcd-4a63-a1ab-dc9711a2b840">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <entry name="serial">b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07</entry>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <entry name="uuid">b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07</entry>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_disk.config">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:4a:f7:63"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <target dev="tap43e0d421-2d"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07/console.log" append="off"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:02:37 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:02:37 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:02:37 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:02:37 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.849 2 DEBUG nova.virt.libvirt.vif [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:01:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1345789876',display_name='tempest-TestNetworkAdvancedServerOps-server-1345789876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1345789876',id=173,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOEIMyLEEiTsPEaNADNpznD0SvPywb5Pg8hG/EPPWqO2JIb485VcJfrXGgJByt8PJyHfyaT1SSoE+QlqZ2pUFHk8hDhg8WQsOqARgR1ox1fbDjGtF3fgbhMsuoES+3OcIQ==',key_name='tempest-TestNetworkAdvancedServerOps-295053675',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:01:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-xifn5c06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:30Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1867044655", "vif_mac": "fa:16:3e:4a:f7:63"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.849 2 DEBUG nova.network.os_vif_util [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1867044655", "vif_mac": "fa:16:3e:4a:f7:63"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.850 2 DEBUG nova.network.os_vif_util [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.850 2 DEBUG os_vif [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.851 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.854 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43e0d421-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.855 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43e0d421-2d, col_values=(('external_ids', {'iface-id': '43e0d421-2dcd-4a63-a1ab-dc9711a2b840', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:f7:63', 'vm-uuid': 'b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:37 np0005466031 NetworkManager[44907]: <info>  [1759410157.8643] manager: (tap43e0d421-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:37 np0005466031 nova_compute[235803]: 2025-10-02 13:02:37.877 2 INFO os_vif [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d')#033[00m
Oct  2 09:02:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:38.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.063 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.064 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.064 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:4a:f7:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.065 2 INFO nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Using config drive#033[00m
Oct  2 09:02:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:38.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:38 np0005466031 kernel: tap43e0d421-2d: entered promiscuous mode
Oct  2 09:02:38 np0005466031 NetworkManager[44907]: <info>  [1759410158.1790] manager: (tap43e0d421-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/306)
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:38 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:38Z|00680|binding|INFO|Claiming lport 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for this chassis.
Oct  2 09:02:38 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:38Z|00681|binding|INFO|43e0d421-2dcd-4a63-a1ab-dc9711a2b840: Claiming fa:16:3e:4a:f7:63 10.100.0.8
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:38 np0005466031 NetworkManager[44907]: <info>  [1759410158.2016] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Oct  2 09:02:38 np0005466031 NetworkManager[44907]: <info>  [1759410158.2024] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.206 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:f7:63 10.100.0.8'], port_security=['fa:16:3e:4a:f7:63 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4c2d0411-240e-42d5-b104-fafcf4d7fcf7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17b4abc9-fef6-4b35-b16d-0b00cb9b9f26, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=43e0d421-2dcd-4a63-a1ab-dc9711a2b840) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.209 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 in datapath e00eecd6-70d4-4b18-95b4-609dbcd626b1 bound to our chassis#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.211 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e00eecd6-70d4-4b18-95b4-609dbcd626b1#033[00m
Oct  2 09:02:38 np0005466031 systemd-udevd[312599]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.225 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0835557b-3d20-4e9b-b65b-5d5ac71c3e14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.227 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape00eecd6-71 in ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.228 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape00eecd6-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.228 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6a525c21-6554-4584-bb02-d9ff628efe3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.229 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8d9e44-dd92-4fbf-b564-cbc52dde7f6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 NetworkManager[44907]: <info>  [1759410158.2393] device (tap43e0d421-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:02:38 np0005466031 NetworkManager[44907]: <info>  [1759410158.2415] device (tap43e0d421-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.241 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[cb285006-8405-468b-a281-0da5d6dd0707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 systemd-machined[192227]: New machine qemu-79-instance-000000ad.
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.257 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bc096b5b-0e86-4b9b-af9d-ed1a7a0d255b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.286 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[6408580e-ae50-4f28-9abb-fbbe7a00bf4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.292 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1207cbcd-8935-40a8-934a-c09796f829fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 systemd[1]: Started Virtual Machine qemu-79-instance-000000ad.
Oct  2 09:02:38 np0005466031 NetworkManager[44907]: <info>  [1759410158.2932] manager: (tape00eecd6-70): new Veth device (/org/freedesktop/NetworkManager/Devices/309)
Oct  2 09:02:38 np0005466031 systemd-udevd[312603]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.327 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[07d335bd-8e8e-4c9c-8071-457c5c2670a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.331 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[1bdd2191-c265-4280-ae6f-eca8968e8d81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 NetworkManager[44907]: <info>  [1759410158.3588] device (tape00eecd6-70): carrier: link connected
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.364 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcf7c9b-65b0-47bb-8ff6-8ae12d02db1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.383 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[192f8b7a-7708-4b9e-aa90-04b63a1c55fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape00eecd6-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:bc:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801394, 'reachable_time': 30280, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312636, 'error': None, 'target': 'ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.401 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3bbb4e5c-88ac-434e-bd08-98f26da01eb3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:bc13'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801394, 'tstamp': 801394}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312638, 'error': None, 'target': 'ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.416 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b67f141a-fc88-4206-a1d8-a92c7eecebaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape00eecd6-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:bc:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801394, 'reachable_time': 30280, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312639, 'error': None, 'target': 'ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.456 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0028ac-bf15-4278-8426-d45f78a462b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:38Z|00682|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:38 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:38Z|00683|binding|INFO|Setting lport 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 ovn-installed in OVS
Oct  2 09:02:38 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:38Z|00684|binding|INFO|Setting lport 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 up in Southbound
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.519 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ece614-3b5a-4c02-8304-025fe35b7457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.521 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape00eecd6-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.521 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.522 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape00eecd6-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:38 np0005466031 NetworkManager[44907]: <info>  [1759410158.5248] manager: (tape00eecd6-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/310)
Oct  2 09:02:38 np0005466031 kernel: tape00eecd6-70: entered promiscuous mode
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.529 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape00eecd6-70, col_values=(('external_ids', {'iface-id': 'b3c16797-3484-4c54-823f-36b3f66b294c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:38 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:38Z|00685|binding|INFO|Releasing lport b3c16797-3484-4c54-823f-36b3f66b294c from this chassis (sb_readonly=0)
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.535 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e00eecd6-70d4-4b18-95b4-609dbcd626b1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e00eecd6-70d4-4b18-95b4-609dbcd626b1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.536 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7d7e15b4-dae5-41ad-ab4d-1a20ceb94305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.537 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-e00eecd6-70d4-4b18-95b4-609dbcd626b1
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/e00eecd6-70d4-4b18-95b4-609dbcd626b1.pid.haproxy
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID e00eecd6-70d4-4b18-95b4-609dbcd626b1
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:02:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:38.537 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'env', 'PROCESS_TAG=haproxy-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e00eecd6-70d4-4b18-95b4-609dbcd626b1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:02:38 np0005466031 nova_compute[235803]: 2025-10-02 13:02:38.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:38 np0005466031 podman[312659]: 2025-10-02 13:02:38.655251601 +0000 UTC m=+0.073374275 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 09:02:38 np0005466031 podman[312660]: 2025-10-02 13:02:38.658374311 +0000 UTC m=+0.080212942 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:02:38 np0005466031 podman[312744]: 2025-10-02 13:02:38.894862974 +0000 UTC m=+0.022081647 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:02:39 np0005466031 podman[312744]: 2025-10-02 13:02:39.055495653 +0000 UTC m=+0.182714296 container create dd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.097 2 DEBUG nova.compute.manager [req-113de4e1-7de2-403d-b8c8-11942bbec8f4 req-3c390f22-03d0-46a8-a837-28b57a72dfaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.098 2 DEBUG oslo_concurrency.lockutils [req-113de4e1-7de2-403d-b8c8-11942bbec8f4 req-3c390f22-03d0-46a8-a837-28b57a72dfaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.099 2 DEBUG oslo_concurrency.lockutils [req-113de4e1-7de2-403d-b8c8-11942bbec8f4 req-3c390f22-03d0-46a8-a837-28b57a72dfaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.099 2 DEBUG oslo_concurrency.lockutils [req-113de4e1-7de2-403d-b8c8-11942bbec8f4 req-3c390f22-03d0-46a8-a837-28b57a72dfaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.099 2 DEBUG nova.compute.manager [req-113de4e1-7de2-403d-b8c8-11942bbec8f4 req-3c390f22-03d0-46a8-a837-28b57a72dfaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] No waiting events found dispatching network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.099 2 WARNING nova.compute.manager [req-113de4e1-7de2-403d-b8c8-11942bbec8f4 req-3c390f22-03d0-46a8-a837-28b57a72dfaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received unexpected event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for instance with vm_state active and task_state resize_finish.#033[00m
Oct  2 09:02:39 np0005466031 systemd[1]: Started libpod-conmon-dd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654.scope.
Oct  2 09:02:39 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:02:39 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8889ebcf3c18d20254ca4562af350e7478cdfacf259d487819e86c8570b4be/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:02:39 np0005466031 podman[312744]: 2025-10-02 13:02:39.376156811 +0000 UTC m=+0.503375474 container init dd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:02:39 np0005466031 podman[312744]: 2025-10-02 13:02:39.382962308 +0000 UTC m=+0.510180951 container start dd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 09:02:39 np0005466031 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[312776]: [NOTICE]   (312781) : New worker (312783) forked
Oct  2 09:02:39 np0005466031 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[312776]: [NOTICE]   (312781) : Loading success.
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.466 2 DEBUG nova.compute.manager [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.467 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410159.4659264, b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.467 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.472 2 INFO nova.virt.libvirt.driver [-] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Instance running successfully.#033[00m
Oct  2 09:02:39 np0005466031 virtqemud[235323]: argument unsupported: QEMU guest agent is not configured
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.476 2 DEBUG nova.virt.libvirt.guest [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.477 2 DEBUG nova.virt.libvirt.driver [None req-2de53461-223b-4468-965a-595cf5eca3c8 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.511 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.518 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:02:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.571 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.572 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410159.46601, b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.572 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] VM Started (Lifecycle Event)#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.615 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:39 np0005466031 nova_compute[235803]: 2025-10-02 13:02:39.620 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:02:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:40.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:40.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:40 np0005466031 nova_compute[235803]: 2025-10-02 13:02:40.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:02:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.112 2 DEBUG oslo_concurrency.lockutils [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.113 2 DEBUG oslo_concurrency.lockutils [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.114 2 DEBUG oslo_concurrency.lockutils [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.114 2 DEBUG oslo_concurrency.lockutils [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.115 2 DEBUG oslo_concurrency.lockutils [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.117 2 INFO nova.compute.manager [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Terminating instance#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.119 2 DEBUG nova.compute.manager [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.240 2 DEBUG nova.compute.manager [req-1cce34dd-764f-4469-8191-d20af761a675 req-f9dd2a37-0dd4-4a1f-8c4e-b250a604fc84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.242 2 DEBUG oslo_concurrency.lockutils [req-1cce34dd-764f-4469-8191-d20af761a675 req-f9dd2a37-0dd4-4a1f-8c4e-b250a604fc84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.243 2 DEBUG oslo_concurrency.lockutils [req-1cce34dd-764f-4469-8191-d20af761a675 req-f9dd2a37-0dd4-4a1f-8c4e-b250a604fc84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.243 2 DEBUG oslo_concurrency.lockutils [req-1cce34dd-764f-4469-8191-d20af761a675 req-f9dd2a37-0dd4-4a1f-8c4e-b250a604fc84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.244 2 DEBUG nova.compute.manager [req-1cce34dd-764f-4469-8191-d20af761a675 req-f9dd2a37-0dd4-4a1f-8c4e-b250a604fc84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] No waiting events found dispatching network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.244 2 WARNING nova.compute.manager [req-1cce34dd-764f-4469-8191-d20af761a675 req-f9dd2a37-0dd4-4a1f-8c4e-b250a604fc84 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received unexpected event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for instance with vm_state resized and task_state None.#033[00m
Oct  2 09:02:41 np0005466031 kernel: tap8cd36551-5d (unregistering): left promiscuous mode
Oct  2 09:02:41 np0005466031 NetworkManager[44907]: <info>  [1759410161.2707] device (tap8cd36551-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:41Z|00686|binding|INFO|Releasing lport 8cd36551-5d1a-4860-94aa-a914395f6a92 from this chassis (sb_readonly=0)
Oct  2 09:02:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:41Z|00687|binding|INFO|Setting lport 8cd36551-5d1a-4860-94aa-a914395f6a92 down in Southbound
Oct  2 09:02:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:41Z|00688|binding|INFO|Removing iface tap8cd36551-5d ovn-installed in OVS
Oct  2 09:02:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:41.290 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:23:46 10.100.0.10'], port_security=['fa:16:3e:70:23:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=8cd36551-5d1a-4860-94aa-a914395f6a92) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:41.296 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 8cd36551-5d1a-4860-94aa-a914395f6a92 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 unbound from our chassis#033[00m
Oct  2 09:02:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:41.297 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 052f341a-0628-4183-a5e0-76312bc986c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:02:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:41.299 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[601c3339-d630-484f-b3f1-5c6d1d867dca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:41.300 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 namespace which is not needed anymore#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:41 np0005466031 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000af.scope: Deactivated successfully.
Oct  2 09:02:41 np0005466031 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000af.scope: Consumed 7.726s CPU time.
Oct  2 09:02:41 np0005466031 systemd-machined[192227]: Machine qemu-78-instance-000000af terminated.
Oct  2 09:02:41 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[312257]: [NOTICE]   (312261) : haproxy version is 2.8.14-c23fe91
Oct  2 09:02:41 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[312257]: [NOTICE]   (312261) : path to executable is /usr/sbin/haproxy
Oct  2 09:02:41 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[312257]: [WARNING]  (312261) : Exiting Master process...
Oct  2 09:02:41 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[312257]: [WARNING]  (312261) : Exiting Master process...
Oct  2 09:02:41 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[312257]: [ALERT]    (312261) : Current worker (312263) exited with code 143 (Terminated)
Oct  2 09:02:41 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[312257]: [WARNING]  (312261) : All workers exited. Exiting... (0)
Oct  2 09:02:41 np0005466031 systemd[1]: libpod-dfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5.scope: Deactivated successfully.
Oct  2 09:02:41 np0005466031 podman[312817]: 2025-10-02 13:02:41.493512306 +0000 UTC m=+0.103356499 container died dfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.553 2 INFO nova.virt.libvirt.driver [-] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Instance destroyed successfully.#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.554 2 DEBUG nova.objects.instance [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'resources' on Instance uuid 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.574 2 DEBUG nova.virt.libvirt.vif [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-778388894',display_name='tempest-ServersTestJSON-server-778388894',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-778388894',id=175,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:02:35Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-4gj2qh38',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:02:39Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8cd36551-5d1a-4860-94aa-a914395f6a92", "address": "fa:16:3e:70:23:46", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd36551-5d", "ovs_interfaceid": "8cd36551-5d1a-4860-94aa-a914395f6a92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.576 2 DEBUG nova.network.os_vif_util [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "8cd36551-5d1a-4860-94aa-a914395f6a92", "address": "fa:16:3e:70:23:46", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd36551-5d", "ovs_interfaceid": "8cd36551-5d1a-4860-94aa-a914395f6a92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.577 2 DEBUG nova.network.os_vif_util [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:23:46,bridge_name='br-int',has_traffic_filtering=True,id=8cd36551-5d1a-4860-94aa-a914395f6a92,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd36551-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.578 2 DEBUG os_vif [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:23:46,bridge_name='br-int',has_traffic_filtering=True,id=8cd36551-5d1a-4860-94aa-a914395f6a92,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd36551-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cd36551-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:41 np0005466031 nova_compute[235803]: 2025-10-02 13:02:41.587 2 INFO os_vif [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:23:46,bridge_name='br-int',has_traffic_filtering=True,id=8cd36551-5d1a-4860-94aa-a914395f6a92,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd36551-5d')#033[00m
Oct  2 09:02:41 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5-userdata-shm.mount: Deactivated successfully.
Oct  2 09:02:41 np0005466031 systemd[1]: var-lib-containers-storage-overlay-393f3bab87689333b61394b8beaa378c6304cacfb08b7828f424a4042ff06227-merged.mount: Deactivated successfully.
Oct  2 09:02:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:02:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:42.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:02:42 np0005466031 podman[312817]: 2025-10-02 13:02:42.082341421 +0000 UTC m=+0.692185584 container cleanup dfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:02:42 np0005466031 systemd[1]: libpod-conmon-dfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5.scope: Deactivated successfully.
Oct  2 09:02:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:42.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.414 2 DEBUG nova.compute.manager [req-2462545a-0de0-4983-b6b9-dbff7a89f121 req-d288da67-2b82-489f-ae46-a52204fb9ab0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Received event network-vif-unplugged-8cd36551-5d1a-4860-94aa-a914395f6a92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.415 2 DEBUG oslo_concurrency.lockutils [req-2462545a-0de0-4983-b6b9-dbff7a89f121 req-d288da67-2b82-489f-ae46-a52204fb9ab0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.416 2 DEBUG oslo_concurrency.lockutils [req-2462545a-0de0-4983-b6b9-dbff7a89f121 req-d288da67-2b82-489f-ae46-a52204fb9ab0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.416 2 DEBUG oslo_concurrency.lockutils [req-2462545a-0de0-4983-b6b9-dbff7a89f121 req-d288da67-2b82-489f-ae46-a52204fb9ab0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.416 2 DEBUG nova.compute.manager [req-2462545a-0de0-4983-b6b9-dbff7a89f121 req-d288da67-2b82-489f-ae46-a52204fb9ab0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] No waiting events found dispatching network-vif-unplugged-8cd36551-5d1a-4860-94aa-a914395f6a92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.417 2 DEBUG nova.compute.manager [req-2462545a-0de0-4983-b6b9-dbff7a89f121 req-d288da67-2b82-489f-ae46-a52204fb9ab0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Received event network-vif-unplugged-8cd36551-5d1a-4860-94aa-a914395f6a92 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:02:42 np0005466031 podman[312877]: 2025-10-02 13:02:42.444586408 +0000 UTC m=+0.336870767 container remove dfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:02:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:42.451 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[de03142a-3925-4533-a561-e5fa33214c69]: (4, ('Thu Oct  2 01:02:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 (dfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5)\ndfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5\nThu Oct  2 01:02:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 (dfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5)\ndfae8ec4d116204956f1a323fac0524562f43d28172601fe115fac5a7ee77ee5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:42.453 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[303abef9-9331-42db-b755-bf76d1fb45e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:42.454 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:42 np0005466031 kernel: tap052f341a-00: left promiscuous mode
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:42.474 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[844dc1fe-0b1a-4777-8084-7de5f43b5c40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:42.510 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a04eb86a-ac04-43e4-acd5-2abfe020912e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:42.511 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[281f4f0b-1fdc-433e-b315-94ed123c83c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:42.526 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[26726540-1c29-4b03-8b94-eb34c777788b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800883, 'reachable_time': 32179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312893, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:42.528 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:02:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:42.528 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[fc712d2b-6242-4bae-8369-6c1bfce18e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:42 np0005466031 systemd[1]: run-netns-ovnmeta\x2d052f341a\x2d0628\x2d4183\x2da5e0\x2d76312bc986c6.mount: Deactivated successfully.
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.909 2 INFO nova.virt.libvirt.driver [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Deleting instance files /var/lib/nova/instances/1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_del#033[00m
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.910 2 INFO nova.virt.libvirt.driver [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Deletion of /var/lib/nova/instances/1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485_del complete#033[00m
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.968 2 INFO nova.compute.manager [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Took 1.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.969 2 DEBUG oslo.service.loopingcall [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.969 2 DEBUG nova.compute.manager [-] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:02:42 np0005466031 nova_compute[235803]: 2025-10-02 13:02:42.970 2 DEBUG nova.network.neutron [-] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:02:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:44.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:44.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.249724) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164249794, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1686, "num_deletes": 252, "total_data_size": 3664554, "memory_usage": 3727568, "flush_reason": "Manual Compaction"}
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.307 2 DEBUG nova.network.neutron [-] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.331 2 INFO nova.compute.manager [-] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Took 1.36 seconds to deallocate network for instance.#033[00m
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.379 2 DEBUG oslo_concurrency.lockutils [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.379 2 DEBUG oslo_concurrency.lockutils [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.401 2 DEBUG nova.compute.manager [req-95f7caf5-eb66-40be-aced-e07377c38ee1 req-2a705f5d-c983-4da6-a14b-249efc8d4501 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Received event network-vif-deleted-8cd36551-5d1a-4860-94aa-a914395f6a92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164444126, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 2407025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63404, "largest_seqno": 65085, "table_properties": {"data_size": 2400105, "index_size": 3926, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14657, "raw_average_key_size": 19, "raw_value_size": 2385867, "raw_average_value_size": 3114, "num_data_blocks": 172, "num_entries": 766, "num_filter_entries": 766, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410029, "oldest_key_time": 1759410029, "file_creation_time": 1759410164, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 194429 microseconds, and 11334 cpu microseconds.
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.463 2 DEBUG oslo_concurrency.processutils [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.546 2 DEBUG nova.compute.manager [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Received event network-vif-plugged-8cd36551-5d1a-4860-94aa-a914395f6a92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.547 2 DEBUG oslo_concurrency.lockutils [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.547 2 DEBUG oslo_concurrency.lockutils [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.548 2 DEBUG oslo_concurrency.lockutils [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.548 2 DEBUG nova.compute.manager [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] No waiting events found dispatching network-vif-plugged-8cd36551-5d1a-4860-94aa-a914395f6a92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.548 2 WARNING nova.compute.manager [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Received unexpected event network-vif-plugged-8cd36551-5d1a-4860-94aa-a914395f6a92 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.444158) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 2407025 bytes OK
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.444176) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.555766) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.555844) EVENT_LOG_v1 {"time_micros": 1759410164555829, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.555886) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 3656784, prev total WAL file size 3703296, number of live WAL files 2.
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.557647) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323532' seq:72057594037927935, type:22 .. '6B7600353035' seq:0, type:0; will stop at (end)
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(2350KB)], [126(10057KB)]
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164558266, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 12705604, "oldest_snapshot_seqno": -1}
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 8891 keys, 11598998 bytes, temperature: kUnknown
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164708800, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 11598998, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11541854, "index_size": 33799, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22277, "raw_key_size": 232013, "raw_average_key_size": 26, "raw_value_size": 11386121, "raw_average_value_size": 1280, "num_data_blocks": 1299, "num_entries": 8891, "num_filter_entries": 8891, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759410164, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.709101) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 11598998 bytes
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.722215) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.4 rd, 77.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.8 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(10.1) write-amplify(4.8) OK, records in: 9414, records dropped: 523 output_compression: NoCompression
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.722257) EVENT_LOG_v1 {"time_micros": 1759410164722242, "job": 80, "event": "compaction_finished", "compaction_time_micros": 150490, "compaction_time_cpu_micros": 29224, "output_level": 6, "num_output_files": 1, "total_output_size": 11598998, "num_input_records": 9414, "num_output_records": 8891, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164722922, "job": 80, "event": "table_file_deletion", "file_number": 128}
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164724750, "job": 80, "event": "table_file_deletion", "file_number": 126}
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.557499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.724815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.724820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.724821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.724822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:44.724824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:44 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3140300285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.959 2 DEBUG oslo_concurrency.processutils [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.964 2 DEBUG nova.compute.provider_tree [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:02:44 np0005466031 nova_compute[235803]: 2025-10-02 13:02:44.996 2 DEBUG nova.scheduler.client.report [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:02:45 np0005466031 nova_compute[235803]: 2025-10-02 13:02:45.028 2 DEBUG oslo_concurrency.lockutils [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:02:45 np0005466031 nova_compute[235803]: 2025-10-02 13:02:45.063 2 INFO nova.scheduler.client.report [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Deleted allocations for instance 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485
Oct  2 09:02:45 np0005466031 nova_compute[235803]: 2025-10-02 13:02:45.190 2 DEBUG oslo_concurrency.lockutils [None req-3661bef6-05bb-4776-8ef2-6b7773e26d1f b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:02:45 np0005466031 nova_compute[235803]: 2025-10-02 13:02:45.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e352 e352: 3 total, 3 up, 3 in
Oct  2 09:02:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:46.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:46.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:46 np0005466031 nova_compute[235803]: 2025-10-02 13:02:46.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:47 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:47 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:02:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:48.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:02:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:02:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:48.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:02:48 np0005466031 nova_compute[235803]: 2025-10-02 13:02:48.408 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "4961c831-88fa-4f95-ada9-0cc475306291" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:02:48 np0005466031 nova_compute[235803]: 2025-10-02 13:02:48.409 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:02:48 np0005466031 nova_compute[235803]: 2025-10-02 13:02:48.430 2 DEBUG nova.compute.manager [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:02:48 np0005466031 nova_compute[235803]: 2025-10-02 13:02:48.499 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:02:48 np0005466031 nova_compute[235803]: 2025-10-02 13:02:48.499 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:02:48 np0005466031 nova_compute[235803]: 2025-10-02 13:02:48.504 2 DEBUG nova.virt.hardware [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:02:48 np0005466031 nova_compute[235803]: 2025-10-02 13:02:48.504 2 INFO nova.compute.claims [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Claim successful on node compute-2.ctlplane.example.com
Oct  2 09:02:48 np0005466031 nova_compute[235803]: 2025-10-02 13:02:48.615 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:02:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:49 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1635756819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.070 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.076 2 DEBUG nova.compute.provider_tree [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.102 2 DEBUG nova.scheduler.client.report [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.126 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.127 2 DEBUG nova.compute.manager [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.187 2 DEBUG nova.compute.manager [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.187 2 DEBUG nova.network.neutron [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.212 2 INFO nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.230 2 DEBUG nova.compute.manager [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.325 2 DEBUG nova.compute.manager [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.328 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.328 2 INFO nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Creating image(s)
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.466 2 DEBUG nova.storage.rbd_utils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4961c831-88fa-4f95-ada9-0cc475306291_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.490 2 DEBUG nova.storage.rbd_utils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4961c831-88fa-4f95-ada9-0cc475306291_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.513 2 DEBUG nova.storage.rbd_utils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4961c831-88fa-4f95-ada9-0cc475306291_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.517 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.556 2 DEBUG nova.policy [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b04159d5bffe4259876ce57aec09716e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:02:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.594 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.595 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.595 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.596 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.626 2 DEBUG nova.storage.rbd_utils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4961c831-88fa-4f95-ada9-0cc475306291_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:02:49 np0005466031 nova_compute[235803]: 2025-10-02 13:02:49.630 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4961c831-88fa-4f95-ada9-0cc475306291_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:02:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:02:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:50.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:02:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:50.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:50 np0005466031 nova_compute[235803]: 2025-10-02 13:02:50.368 2 DEBUG nova.network.neutron [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Successfully created port: ac143b17-d8a4-4339-86d5-dd95e00aaf7c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 09:02:50 np0005466031 nova_compute[235803]: 2025-10-02 13:02:50.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.253 2 DEBUG nova.network.neutron [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Successfully updated port: ac143b17-d8a4-4339-86d5-dd95e00aaf7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.282 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "refresh_cache-4961c831-88fa-4f95-ada9-0cc475306291" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.282 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquired lock "refresh_cache-4961c831-88fa-4f95-ada9-0cc475306291" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.283 2 DEBUG nova.network.neutron [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.439 2 DEBUG nova.network.neutron [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.472 2 DEBUG nova.compute.manager [req-4558608e-24d9-4871-a3f2-76896284e625 req-c34ec929-1f85-43b0-b8db-69e6b5855395 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Received event network-changed-ac143b17-d8a4-4339-86d5-dd95e00aaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.473 2 DEBUG nova.compute.manager [req-4558608e-24d9-4871-a3f2-76896284e625 req-c34ec929-1f85-43b0-b8db-69e6b5855395 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Refreshing instance network info cache due to event network-changed-ac143b17-d8a4-4339-86d5-dd95e00aaf7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.473 2 DEBUG oslo_concurrency.lockutils [req-4558608e-24d9-4871-a3f2-76896284e625 req-c34ec929-1f85-43b0-b8db-69e6b5855395 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4961c831-88fa-4f95-ada9-0cc475306291" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.514 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4961c831-88fa-4f95-ada9-0cc475306291_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.884s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.597 2 DEBUG nova.storage.rbd_utils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] resizing rbd image 4961c831-88fa-4f95-ada9-0cc475306291_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.766 2 DEBUG nova.objects.instance [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'migration_context' on Instance uuid 4961c831-88fa-4f95-ada9-0cc475306291 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.788 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.788 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Ensure instance console log exists: /var/lib/nova/instances/4961c831-88fa-4f95-ada9-0cc475306291/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.789 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.789 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:02:51 np0005466031 nova_compute[235803]: 2025-10-02 13:02:51.790 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:02:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:52.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:52.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.429 2 DEBUG nova.network.neutron [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Updating instance_info_cache with network_info: [{"id": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "address": "fa:16:3e:e7:49:4d", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac143b17-d8", "ovs_interfaceid": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:02:52 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:52Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:f7:63 10.100.0.8
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.455 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Releasing lock "refresh_cache-4961c831-88fa-4f95-ada9-0cc475306291" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.455 2 DEBUG nova.compute.manager [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Instance network_info: |[{"id": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "address": "fa:16:3e:e7:49:4d", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac143b17-d8", "ovs_interfaceid": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.455 2 DEBUG oslo_concurrency.lockutils [req-4558608e-24d9-4871-a3f2-76896284e625 req-c34ec929-1f85-43b0-b8db-69e6b5855395 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4961c831-88fa-4f95-ada9-0cc475306291" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.456 2 DEBUG nova.network.neutron [req-4558608e-24d9-4871-a3f2-76896284e625 req-c34ec929-1f85-43b0-b8db-69e6b5855395 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Refreshing network info cache for port ac143b17-d8a4-4339-86d5-dd95e00aaf7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.458 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Start _get_guest_xml network_info=[{"id": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "address": "fa:16:3e:e7:49:4d", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac143b17-d8", "ovs_interfaceid": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.462 2 WARNING nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.466 2 DEBUG nova.virt.libvirt.host [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.467 2 DEBUG nova.virt.libvirt.host [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.470 2 DEBUG nova.virt.libvirt.host [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.471 2 DEBUG nova.virt.libvirt.host [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.472 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.472 2 DEBUG nova.virt.hardware [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.472 2 DEBUG nova.virt.hardware [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.472 2 DEBUG nova.virt.hardware [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.473 2 DEBUG nova.virt.hardware [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.473 2 DEBUG nova.virt.hardware [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.473 2 DEBUG nova.virt.hardware [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.473 2 DEBUG nova.virt.hardware [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.473 2 DEBUG nova.virt.hardware [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.474 2 DEBUG nova.virt.hardware [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.474 2 DEBUG nova.virt.hardware [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.474 2 DEBUG nova.virt.hardware [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.477 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:02:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3837070089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.903 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.937 2 DEBUG nova.storage.rbd_utils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4961c831-88fa-4f95-ada9-0cc475306291_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:52 np0005466031 nova_compute[235803]: 2025-10-02 13:02:52.941 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:02:53 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/771358143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.393 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.395 2 DEBUG nova.virt.libvirt.vif [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-263857536',display_name='tempest-ServersTestJSON-server-263857536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-263857536',id=178,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-063de3i4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=Tag
List,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:49Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=4961c831-88fa-4f95-ada9-0cc475306291,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "address": "fa:16:3e:e7:49:4d", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac143b17-d8", "ovs_interfaceid": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.396 2 DEBUG nova.network.os_vif_util [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "address": "fa:16:3e:e7:49:4d", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac143b17-d8", "ovs_interfaceid": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.396 2 DEBUG nova.network.os_vif_util [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:49:4d,bridge_name='br-int',has_traffic_filtering=True,id=ac143b17-d8a4-4339-86d5-dd95e00aaf7c,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac143b17-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.398 2 DEBUG nova.objects.instance [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 4961c831-88fa-4f95-ada9-0cc475306291 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.414 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  <uuid>4961c831-88fa-4f95-ada9-0cc475306291</uuid>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  <name>instance-000000b2</name>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <nova:name>tempest-ServersTestJSON-server-263857536</nova:name>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:02:52</nova:creationTime>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <nova:user uuid="b04159d5bffe4259876ce57aec09716e">tempest-ServersTestJSON-146860306-project-member</nova:user>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <nova:project uuid="a6be0e77fb5b4355b4f2276c9e57d2bd">tempest-ServersTestJSON-146860306</nova:project>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <nova:port uuid="ac143b17-d8a4-4339-86d5-dd95e00aaf7c">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <entry name="serial">4961c831-88fa-4f95-ada9-0cc475306291</entry>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <entry name="uuid">4961c831-88fa-4f95-ada9-0cc475306291</entry>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/4961c831-88fa-4f95-ada9-0cc475306291_disk">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/4961c831-88fa-4f95-ada9-0cc475306291_disk.config">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:e7:49:4d"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <target dev="tapac143b17-d8"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/4961c831-88fa-4f95-ada9-0cc475306291/console.log" append="off"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:02:53 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:02:53 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:02:53 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:02:53 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.414 2 DEBUG nova.compute.manager [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Preparing to wait for external event network-vif-plugged-ac143b17-d8a4-4339-86d5-dd95e00aaf7c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.415 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "4961c831-88fa-4f95-ada9-0cc475306291-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.415 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.415 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.416 2 DEBUG nova.virt.libvirt.vif [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-263857536',display_name='tempest-ServersTestJSON-server-263857536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-263857536',id=178,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-063de3i4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'
},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:49Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=4961c831-88fa-4f95-ada9-0cc475306291,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "address": "fa:16:3e:e7:49:4d", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac143b17-d8", "ovs_interfaceid": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.416 2 DEBUG nova.network.os_vif_util [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "address": "fa:16:3e:e7:49:4d", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac143b17-d8", "ovs_interfaceid": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.417 2 DEBUG nova.network.os_vif_util [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:49:4d,bridge_name='br-int',has_traffic_filtering=True,id=ac143b17-d8a4-4339-86d5-dd95e00aaf7c,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac143b17-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.417 2 DEBUG os_vif [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:49:4d,bridge_name='br-int',has_traffic_filtering=True,id=ac143b17-d8a4-4339-86d5-dd95e00aaf7c,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac143b17-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.421 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac143b17-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.421 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac143b17-d8, col_values=(('external_ids', {'iface-id': 'ac143b17-d8a4-4339-86d5-dd95e00aaf7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:49:4d', 'vm-uuid': '4961c831-88fa-4f95-ada9-0cc475306291'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:53 np0005466031 NetworkManager[44907]: <info>  [1759410173.4239] manager: (tapac143b17-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.430 2 INFO os_vif [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:49:4d,bridge_name='br-int',has_traffic_filtering=True,id=ac143b17-d8a4-4339-86d5-dd95e00aaf7c,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac143b17-d8')#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.542 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.542 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.542 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No VIF found with MAC fa:16:3e:e7:49:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.543 2 INFO nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Using config drive#033[00m
Oct  2 09:02:53 np0005466031 nova_compute[235803]: 2025-10-02 13:02:53.565 2 DEBUG nova.storage.rbd_utils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4961c831-88fa-4f95-ada9-0cc475306291_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:54.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 e353: 3 total, 3 up, 3 in
Oct  2 09:02:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:02:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:54.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:02:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:54 np0005466031 nova_compute[235803]: 2025-10-02 13:02:54.687 2 INFO nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Creating config drive at /var/lib/nova/instances/4961c831-88fa-4f95-ada9-0cc475306291/disk.config#033[00m
Oct  2 09:02:54 np0005466031 nova_compute[235803]: 2025-10-02 13:02:54.692 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4961c831-88fa-4f95-ada9-0cc475306291/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2_qv164v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:54 np0005466031 nova_compute[235803]: 2025-10-02 13:02:54.828 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4961c831-88fa-4f95-ada9-0cc475306291/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2_qv164v" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:54 np0005466031 nova_compute[235803]: 2025-10-02 13:02:54.856 2 DEBUG nova.storage.rbd_utils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 4961c831-88fa-4f95-ada9-0cc475306291_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:54 np0005466031 nova_compute[235803]: 2025-10-02 13:02:54.860 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4961c831-88fa-4f95-ada9-0cc475306291/disk.config 4961c831-88fa-4f95-ada9-0cc475306291_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:55 np0005466031 nova_compute[235803]: 2025-10-02 13:02:55.460 2 DEBUG oslo_concurrency.processutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4961c831-88fa-4f95-ada9-0cc475306291/disk.config 4961c831-88fa-4f95-ada9-0cc475306291_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:55 np0005466031 nova_compute[235803]: 2025-10-02 13:02:55.461 2 INFO nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Deleting local config drive /var/lib/nova/instances/4961c831-88fa-4f95-ada9-0cc475306291/disk.config because it was imported into RBD.#033[00m
Oct  2 09:02:55 np0005466031 NetworkManager[44907]: <info>  [1759410175.5059] manager: (tapac143b17-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/312)
Oct  2 09:02:55 np0005466031 kernel: tapac143b17-d8: entered promiscuous mode
Oct  2 09:02:55 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:55Z|00689|binding|INFO|Claiming lport ac143b17-d8a4-4339-86d5-dd95e00aaf7c for this chassis.
Oct  2 09:02:55 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:55Z|00690|binding|INFO|ac143b17-d8a4-4339-86d5-dd95e00aaf7c: Claiming fa:16:3e:e7:49:4d 10.100.0.10
Oct  2 09:02:55 np0005466031 nova_compute[235803]: 2025-10-02 13:02:55.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.517 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:49:4d 10.100.0.10'], port_security=['fa:16:3e:e7:49:4d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4961c831-88fa-4f95-ada9-0cc475306291', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=ac143b17-d8a4-4339-86d5-dd95e00aaf7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.518 141898 INFO neutron.agent.ovn.metadata.agent [-] Port ac143b17-d8a4-4339-86d5-dd95e00aaf7c in datapath 052f341a-0628-4183-a5e0-76312bc986c6 bound to our chassis#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.519 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 052f341a-0628-4183-a5e0-76312bc986c6#033[00m
Oct  2 09:02:55 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:55Z|00691|binding|INFO|Setting lport ac143b17-d8a4-4339-86d5-dd95e00aaf7c ovn-installed in OVS
Oct  2 09:02:55 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:55Z|00692|binding|INFO|Setting lport ac143b17-d8a4-4339-86d5-dd95e00aaf7c up in Southbound
Oct  2 09:02:55 np0005466031 nova_compute[235803]: 2025-10-02 13:02:55.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:55 np0005466031 nova_compute[235803]: 2025-10-02 13:02:55.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.532 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c8edeaea-c3b9-42a8-a527-a4ce594f34f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.534 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap052f341a-01 in ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.536 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap052f341a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.536 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[28d45f71-50df-4468-aaaa-a191d9190d99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.537 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[898feb06-1736-478e-8215-905aed15befb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 systemd-udevd[313347]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.548 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[9739c6b1-35fc-4c5c-a2dc-28947bf03efe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 systemd-machined[192227]: New machine qemu-80-instance-000000b2.
Oct  2 09:02:55 np0005466031 NetworkManager[44907]: <info>  [1759410175.5563] device (tapac143b17-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:02:55 np0005466031 NetworkManager[44907]: <info>  [1759410175.5574] device (tapac143b17-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.560 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc114e7-6e8e-465b-85bc-46fe990b2497]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 systemd[1]: Started Virtual Machine qemu-80-instance-000000b2.
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.586 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[fea8a1a0-6dd4-4a7b-ab67-dc40dcccc47b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.590 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7e60ab74-724e-461a-b421-087ed0eeb9c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 NetworkManager[44907]: <info>  [1759410175.5911] manager: (tap052f341a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/313)
Oct  2 09:02:55 np0005466031 systemd-udevd[313351]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:02:55 np0005466031 nova_compute[235803]: 2025-10-02 13:02:55.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.623 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c732aa4e-8ae6-4058-bde6-9002e1735499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.627 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[963613a7-f8ee-432a-a22a-5da7c56261d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 NetworkManager[44907]: <info>  [1759410175.6509] device (tap052f341a-00): carrier: link connected
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.656 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c09ced22-ddf1-48c0-b878-aa6401141af3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.674 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5ce79a-1f30-42dd-abe7-4c2426e3ae64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803124, 'reachable_time': 22008, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313379, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.688 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[13011a8b-3704-457c-8cd2-565a8ce336bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:a55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803124, 'tstamp': 803124}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313380, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.705 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa76d90-c75a-4d68-a906-e216e3d8a909]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803124, 'reachable_time': 22008, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313381, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.739 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d61fcd0c-aa2e-4f42-9048-b34c58d3eaef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.795 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cee749ac-b432-4fa1-8e02-bc68ae0bad6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.797 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.797 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.798 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052f341a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:55 np0005466031 NetworkManager[44907]: <info>  [1759410175.8005] manager: (tap052f341a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Oct  2 09:02:55 np0005466031 nova_compute[235803]: 2025-10-02 13:02:55.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:55 np0005466031 kernel: tap052f341a-00: entered promiscuous mode
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.805 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap052f341a-00, col_values=(('external_ids', {'iface-id': '61e15bc4-7cff-4f2c-a6c4-d987859313b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.808 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:02:55 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:55Z|00693|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:02:55 np0005466031 nova_compute[235803]: 2025-10-02 13:02:55.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.809 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ec46d2ec-558f-4201-b0a3-65cc65c95a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.810 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-052f341a-0628-4183-a5e0-76312bc986c6
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 052f341a-0628-4183-a5e0-76312bc986c6
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:02:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:55.810 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'env', 'PROCESS_TAG=haproxy-052f341a-0628-4183-a5e0-76312bc986c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/052f341a-0628-4183-a5e0-76312bc986c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:02:55 np0005466031 nova_compute[235803]: 2025-10-02 13:02:55.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:56.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:56.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:56 np0005466031 podman[313456]: 2025-10-02 13:02:56.181005938 +0000 UTC m=+0.046796319 container create cdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:02:56 np0005466031 systemd[1]: Started libpod-conmon-cdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de.scope.
Oct  2 09:02:56 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:02:56 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfff8af61fccffa78ec24d630b29aed421aae91b5d45e6b8d0b7d9d5321cca13/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:02:56 np0005466031 podman[313456]: 2025-10-02 13:02:56.154828254 +0000 UTC m=+0.020618655 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:02:56 np0005466031 podman[313456]: 2025-10-02 13:02:56.261517228 +0000 UTC m=+0.127307619 container init cdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 09:02:56 np0005466031 podman[313456]: 2025-10-02 13:02:56.266493641 +0000 UTC m=+0.132284012 container start cdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:02:56 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[313471]: [NOTICE]   (313475) : New worker (313477) forked
Oct  2 09:02:56 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[313471]: [NOTICE]   (313475) : Loading success.
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.457 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410176.456657, 4961c831-88fa-4f95-ada9-0cc475306291 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.457 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] VM Started (Lifecycle Event)#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.476 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.480 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410176.4595852, 4961c831-88fa-4f95-ada9-0cc475306291 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.480 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.503 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.506 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.523 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.551 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410161.5498042, 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.551 2 INFO nova.compute.manager [-] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.571 2 DEBUG nova.compute.manager [None req-4c01b5ee-f932-4559-a1e7-1934a105b21f - - - - - -] [instance: 1cf04db5-1f3a-4fdf-b2b5-dac7dcd36485] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.725 2 DEBUG nova.network.neutron [req-4558608e-24d9-4871-a3f2-76896284e625 req-c34ec929-1f85-43b0-b8db-69e6b5855395 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Updated VIF entry in instance network info cache for port ac143b17-d8a4-4339-86d5-dd95e00aaf7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.726 2 DEBUG nova.network.neutron [req-4558608e-24d9-4871-a3f2-76896284e625 req-c34ec929-1f85-43b0-b8db-69e6b5855395 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Updating instance_info_cache with network_info: [{"id": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "address": "fa:16:3e:e7:49:4d", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac143b17-d8", "ovs_interfaceid": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.745 2 DEBUG oslo_concurrency.lockutils [req-4558608e-24d9-4871-a3f2-76896284e625 req-c34ec929-1f85-43b0-b8db-69e6b5855395 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4961c831-88fa-4f95-ada9-0cc475306291" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.969 2 DEBUG nova.compute.manager [req-2448dd5f-0193-4af4-830f-a4f96e872a7c req-ae6a2d5f-7873-4892-b089-fa8a8577ad63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Received event network-vif-plugged-ac143b17-d8a4-4339-86d5-dd95e00aaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.969 2 DEBUG oslo_concurrency.lockutils [req-2448dd5f-0193-4af4-830f-a4f96e872a7c req-ae6a2d5f-7873-4892-b089-fa8a8577ad63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4961c831-88fa-4f95-ada9-0cc475306291-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.969 2 DEBUG oslo_concurrency.lockutils [req-2448dd5f-0193-4af4-830f-a4f96e872a7c req-ae6a2d5f-7873-4892-b089-fa8a8577ad63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.969 2 DEBUG oslo_concurrency.lockutils [req-2448dd5f-0193-4af4-830f-a4f96e872a7c req-ae6a2d5f-7873-4892-b089-fa8a8577ad63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.970 2 DEBUG nova.compute.manager [req-2448dd5f-0193-4af4-830f-a4f96e872a7c req-ae6a2d5f-7873-4892-b089-fa8a8577ad63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Processing event network-vif-plugged-ac143b17-d8a4-4339-86d5-dd95e00aaf7c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.970 2 DEBUG nova.compute.manager [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.974 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410176.9729843, 4961c831-88fa-4f95-ada9-0cc475306291 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.974 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.976 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.979 2 INFO nova.virt.libvirt.driver [-] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Instance spawned successfully.#033[00m
Oct  2 09:02:56 np0005466031 nova_compute[235803]: 2025-10-02 13:02:56.979 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:02:57 np0005466031 nova_compute[235803]: 2025-10-02 13:02:57.023 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:57 np0005466031 nova_compute[235803]: 2025-10-02 13:02:57.028 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:02:57 np0005466031 nova_compute[235803]: 2025-10-02 13:02:57.031 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:57 np0005466031 nova_compute[235803]: 2025-10-02 13:02:57.031 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:57 np0005466031 nova_compute[235803]: 2025-10-02 13:02:57.031 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:57 np0005466031 nova_compute[235803]: 2025-10-02 13:02:57.032 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:57 np0005466031 nova_compute[235803]: 2025-10-02 13:02:57.032 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:57 np0005466031 nova_compute[235803]: 2025-10-02 13:02:57.033 2 DEBUG nova.virt.libvirt.driver [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:57 np0005466031 nova_compute[235803]: 2025-10-02 13:02:57.067 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:02:57 np0005466031 nova_compute[235803]: 2025-10-02 13:02:57.098 2 INFO nova.compute.manager [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Took 7.77 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:02:57 np0005466031 nova_compute[235803]: 2025-10-02 13:02:57.099 2 DEBUG nova.compute.manager [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:57 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Oct  2 09:02:57 np0005466031 nova_compute[235803]: 2025-10-02 13:02:57.165 2 INFO nova.compute.manager [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Took 8.68 seconds to build instance.#033[00m
Oct  2 09:02:57 np0005466031 nova_compute[235803]: 2025-10-02 13:02:57.180 2 DEBUG oslo_concurrency.lockutils [None req-45d5153a-d5a8-49b2-bfc5-ec65192e8357 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:58.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:02:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:58.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:58 np0005466031 nova_compute[235803]: 2025-10-02 13:02:58.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:58 np0005466031 nova_compute[235803]: 2025-10-02 13:02:58.465 2 INFO nova.compute.manager [None req-1f83c8dd-f3ba-4e59-ba45-cc4f543b033b 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Get console output#033[00m
Oct  2 09:02:58 np0005466031 nova_compute[235803]: 2025-10-02 13:02:58.469 18228 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.046660) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179046715, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 470, "num_deletes": 257, "total_data_size": 550824, "memory_usage": 561048, "flush_reason": "Manual Compaction"}
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179050700, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 363267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65090, "largest_seqno": 65555, "table_properties": {"data_size": 360627, "index_size": 675, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6460, "raw_average_key_size": 18, "raw_value_size": 355171, "raw_average_value_size": 1029, "num_data_blocks": 29, "num_entries": 345, "num_filter_entries": 345, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410164, "oldest_key_time": 1759410164, "file_creation_time": 1759410179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 4074 microseconds, and 1760 cpu microseconds.
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.050740) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 363267 bytes OK
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.050757) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.051817) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.051829) EVENT_LOG_v1 {"time_micros": 1759410179051825, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.051843) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 547913, prev total WAL file size 547913, number of live WAL files 2.
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.052350) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323635' seq:72057594037927935, type:22 .. '6C6F676D0032353137' seq:0, type:0; will stop at (end)
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(354KB)], [129(11MB)]
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179052395, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 11962265, "oldest_snapshot_seqno": -1}
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.115 2 DEBUG nova.compute.manager [req-64617f80-0b86-4b39-bb5c-583971bed7f9 req-0fc1bc5e-814e-49c2-bfc3-c44cdee33aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Received event network-vif-plugged-ac143b17-d8a4-4339-86d5-dd95e00aaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.116 2 DEBUG oslo_concurrency.lockutils [req-64617f80-0b86-4b39-bb5c-583971bed7f9 req-0fc1bc5e-814e-49c2-bfc3-c44cdee33aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4961c831-88fa-4f95-ada9-0cc475306291-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.116 2 DEBUG oslo_concurrency.lockutils [req-64617f80-0b86-4b39-bb5c-583971bed7f9 req-0fc1bc5e-814e-49c2-bfc3-c44cdee33aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.117 2 DEBUG oslo_concurrency.lockutils [req-64617f80-0b86-4b39-bb5c-583971bed7f9 req-0fc1bc5e-814e-49c2-bfc3-c44cdee33aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.117 2 DEBUG nova.compute.manager [req-64617f80-0b86-4b39-bb5c-583971bed7f9 req-0fc1bc5e-814e-49c2-bfc3-c44cdee33aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] No waiting events found dispatching network-vif-plugged-ac143b17-d8a4-4339-86d5-dd95e00aaf7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.117 2 WARNING nova.compute.manager [req-64617f80-0b86-4b39-bb5c-583971bed7f9 req-0fc1bc5e-814e-49c2-bfc3-c44cdee33aef 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Received unexpected event network-vif-plugged-ac143b17-d8a4-4339-86d5-dd95e00aaf7c for instance with vm_state active and task_state None.#033[00m
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 8706 keys, 11812812 bytes, temperature: kUnknown
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179147453, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 11812812, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11756139, "index_size": 33776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21829, "raw_key_size": 229143, "raw_average_key_size": 26, "raw_value_size": 11602904, "raw_average_value_size": 1332, "num_data_blocks": 1295, "num_entries": 8706, "num_filter_entries": 8706, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759410179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.147751) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11812812 bytes
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.148847) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.7 rd, 124.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(65.4) write-amplify(32.5) OK, records in: 9236, records dropped: 530 output_compression: NoCompression
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.148866) EVENT_LOG_v1 {"time_micros": 1759410179148857, "job": 82, "event": "compaction_finished", "compaction_time_micros": 95129, "compaction_time_cpu_micros": 26801, "output_level": 6, "num_output_files": 1, "total_output_size": 11812812, "num_input_records": 9236, "num_output_records": 8706, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179149041, "job": 82, "event": "table_file_deletion", "file_number": 131}
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179151169, "job": 82, "event": "table_file_deletion", "file_number": 129}
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.052253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.151271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.151276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.151278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.151280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:02:59.151281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.526 2 DEBUG oslo_concurrency.lockutils [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.527 2 DEBUG oslo_concurrency.lockutils [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.527 2 DEBUG oslo_concurrency.lockutils [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.527 2 DEBUG oslo_concurrency.lockutils [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.528 2 DEBUG oslo_concurrency.lockutils [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.529 2 INFO nova.compute.manager [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Terminating instance#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.530 2 DEBUG nova.compute.manager [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:02:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:59 np0005466031 kernel: tap43e0d421-2d (unregistering): left promiscuous mode
Oct  2 09:02:59 np0005466031 NetworkManager[44907]: <info>  [1759410179.5875] device (tap43e0d421-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:02:59 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:59Z|00694|binding|INFO|Releasing lport 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 from this chassis (sb_readonly=0)
Oct  2 09:02:59 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:59Z|00695|binding|INFO|Setting lport 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 down in Southbound
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:59 np0005466031 ovn_controller[132413]: 2025-10-02T13:02:59Z|00696|binding|INFO|Removing iface tap43e0d421-2d ovn-installed in OVS
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:59.610 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:f7:63 10.100.0.8'], port_security=['fa:16:3e:4a:f7:63 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '8', 'neutron:security_group_ids': '4c2d0411-240e-42d5-b104-fafcf4d7fcf7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17b4abc9-fef6-4b35-b16d-0b00cb9b9f26, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=43e0d421-2dcd-4a63-a1ab-dc9711a2b840) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:59.612 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 in datapath e00eecd6-70d4-4b18-95b4-609dbcd626b1 unbound from our chassis#033[00m
Oct  2 09:02:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:59.614 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e00eecd6-70d4-4b18-95b4-609dbcd626b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:02:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:59.614 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba120bb-47cd-46f5-b317-51cda4cce615]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:02:59.615 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1 namespace which is not needed anymore#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:59 np0005466031 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000ad.scope: Deactivated successfully.
Oct  2 09:02:59 np0005466031 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000ad.scope: Consumed 13.549s CPU time.
Oct  2 09:02:59 np0005466031 systemd-machined[192227]: Machine qemu-79-instance-000000ad terminated.
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:59 np0005466031 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[312776]: [NOTICE]   (312781) : haproxy version is 2.8.14-c23fe91
Oct  2 09:02:59 np0005466031 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[312776]: [NOTICE]   (312781) : path to executable is /usr/sbin/haproxy
Oct  2 09:02:59 np0005466031 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[312776]: [WARNING]  (312781) : Exiting Master process...
Oct  2 09:02:59 np0005466031 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[312776]: [WARNING]  (312781) : Exiting Master process...
Oct  2 09:02:59 np0005466031 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[312776]: [ALERT]    (312781) : Current worker (312783) exited with code 143 (Terminated)
Oct  2 09:02:59 np0005466031 neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1[312776]: [WARNING]  (312781) : All workers exited. Exiting... (0)
Oct  2 09:02:59 np0005466031 systemd[1]: libpod-dd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654.scope: Deactivated successfully.
Oct  2 09:02:59 np0005466031 podman[313511]: 2025-10-02 13:02:59.774025019 +0000 UTC m=+0.063692116 container died dd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.781 2 INFO nova.virt.libvirt.driver [-] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Instance destroyed successfully.#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.784 2 DEBUG nova.objects.instance [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.809 2 DEBUG nova.virt.libvirt.vif [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:01:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1345789876',display_name='tempest-TestNetworkAdvancedServerOps-server-1345789876',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1345789876',id=173,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOEIMyLEEiTsPEaNADNpznD0SvPywb5Pg8hG/EPPWqO2JIb485VcJfrXGgJByt8PJyHfyaT1SSoE+QlqZ2pUFHk8hDhg8WQsOqARgR1ox1fbDjGtF3fgbhMsuoES+3OcIQ==',key_name='tempest-TestNetworkAdvancedServerOps-295053675',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:02:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-xifn5c06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:02:47Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.810 2 DEBUG nova.network.os_vif_util [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "address": "fa:16:3e:4a:f7:63", "network": {"id": "e00eecd6-70d4-4b18-95b4-609dbcd626b1", "bridge": "br-int", "label": "tempest-network-smoke--1867044655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e0d421-2d", "ovs_interfaceid": "43e0d421-2dcd-4a63-a1ab-dc9711a2b840", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.810 2 DEBUG nova.network.os_vif_util [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.811 2 DEBUG os_vif [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.812 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43e0d421-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 09:02:59 np0005466031 nova_compute[235803]: 2025-10-02 13:02:59.820 2 INFO os_vif [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:f7:63,bridge_name='br-int',has_traffic_filtering=True,id=43e0d421-2dcd-4a63-a1ab-dc9711a2b840,network=Network(e00eecd6-70d4-4b18-95b4-609dbcd626b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e0d421-2d')
Oct  2 09:02:59 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654-userdata-shm.mount: Deactivated successfully.
Oct  2 09:02:59 np0005466031 systemd[1]: var-lib-containers-storage-overlay-fd8889ebcf3c18d20254ca4562af350e7478cdfacf259d487819e86c8570b4be-merged.mount: Deactivated successfully.
Oct  2 09:02:59 np0005466031 podman[313511]: 2025-10-02 13:02:59.947987321 +0000 UTC m=+0.237654408 container cleanup dd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:02:59 np0005466031 systemd[1]: libpod-conmon-dd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654.scope: Deactivated successfully.
Oct  2 09:03:00 np0005466031 podman[313567]: 2025-10-02 13:03:00.012031666 +0000 UTC m=+0.042744052 container remove dd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 09:03:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:00.018 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b84053e7-ff03-4e82-93b3-724b862c7184]: (4, ('Thu Oct  2 01:02:59 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1 (dd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654)\ndd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654\nThu Oct  2 01:02:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1 (dd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654)\ndd4fec5a673a530a568b733a3eeb12c247df5479908e9157b5149478c47a2654\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:03:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:00.020 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[17ef0437-3536-4397-af4e-fb28e6e62a7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:03:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:00.021 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape00eecd6-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:03:00 np0005466031 kernel: tape00eecd6-70: left promiscuous mode
Oct  2 09:03:00 np0005466031 nova_compute[235803]: 2025-10-02 13:03:00.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:00 np0005466031 nova_compute[235803]: 2025-10-02 13:03:00.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:00.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:00.045 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[dc000b54-c0fe-4972-b0b3-adef7ec042c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:03:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:00.066 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[77de284b-96ee-40d9-863a-d5d8fa30cd60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:03:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:00.068 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b319df40-9853-4f92-a881-fb9ffa6bfe9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:03:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:00.085 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fea538e1-39d8-4fbf-9a4b-638594b502e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801387, 'reachable_time': 18947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313582, 'error': None, 'target': 'ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:03:00 np0005466031 systemd[1]: run-netns-ovnmeta\x2de00eecd6\x2d70d4\x2d4b18\x2d95b4\x2d609dbcd626b1.mount: Deactivated successfully.
Oct  2 09:03:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:00.087 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e00eecd6-70d4-4b18-95b4-609dbcd626b1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  2 09:03:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:00.087 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[243d2968-2d8f-4115-ab59-112f16fc7755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:03:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:00.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:00 np0005466031 nova_compute[235803]: 2025-10-02 13:03:00.291 2 INFO nova.virt.libvirt.driver [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Deleting instance files /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_del
Oct  2 09:03:00 np0005466031 nova_compute[235803]: 2025-10-02 13:03:00.292 2 INFO nova.virt.libvirt.driver [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Deletion of /var/lib/nova/instances/b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07_del complete
Oct  2 09:03:00 np0005466031 nova_compute[235803]: 2025-10-02 13:03:00.342 2 INFO nova.compute.manager [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct  2 09:03:00 np0005466031 nova_compute[235803]: 2025-10-02 13:03:00.343 2 DEBUG oslo.service.loopingcall [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 09:03:00 np0005466031 nova_compute[235803]: 2025-10-02 13:03:00.343 2 DEBUG nova.compute.manager [-] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 09:03:00 np0005466031 nova_compute[235803]: 2025-10-02 13:03:00.343 2 DEBUG nova.network.neutron [-] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 09:03:00 np0005466031 nova_compute[235803]: 2025-10-02 13:03:00.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.086 2 DEBUG oslo_concurrency.lockutils [None req-604532c4-705b-4695-b6ba-bb054b9cdd0c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "4961c831-88fa-4f95-ada9-0cc475306291" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.087 2 DEBUG oslo_concurrency.lockutils [None req-604532c4-705b-4695-b6ba-bb054b9cdd0c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.087 2 DEBUG nova.compute.manager [None req-604532c4-705b-4695-b6ba-bb054b9cdd0c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.090 2 DEBUG nova.compute.manager [None req-604532c4-705b-4695-b6ba-bb054b9cdd0c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.091 2 DEBUG nova.objects.instance [None req-604532c4-705b-4695-b6ba-bb054b9cdd0c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'flavor' on Instance uuid 4961c831-88fa-4f95-ada9-0cc475306291 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.116 2 DEBUG nova.virt.libvirt.driver [None req-604532c4-705b-4695-b6ba-bb054b9cdd0c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.256 2 DEBUG nova.compute.manager [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-changed-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.256 2 DEBUG nova.compute.manager [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Refreshing instance network info cache due to event network-changed-43e0d421-2dcd-4a63-a1ab-dc9711a2b840. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.256 2 DEBUG oslo_concurrency.lockutils [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.256 2 DEBUG oslo_concurrency.lockutils [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.257 2 DEBUG nova.network.neutron [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Refreshing network info cache for port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.697 2 INFO nova.network.neutron [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Port 43e0d421-2dcd-4a63-a1ab-dc9711a2b840 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.697 2 DEBUG nova.network.neutron [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.716 2 DEBUG nova.network.neutron [-] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.755 2 INFO nova.compute.manager [-] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Took 1.41 seconds to deallocate network for instance.
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.759 2 DEBUG oslo_concurrency.lockutils [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.760 2 DEBUG nova.compute.manager [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-unplugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.760 2 DEBUG oslo_concurrency.lockutils [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.760 2 DEBUG oslo_concurrency.lockutils [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.760 2 DEBUG oslo_concurrency.lockutils [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.761 2 DEBUG nova.compute.manager [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] No waiting events found dispatching network-vif-unplugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.761 2 DEBUG nova.compute.manager [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-unplugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.761 2 DEBUG nova.compute.manager [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.761 2 DEBUG oslo_concurrency.lockutils [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.762 2 DEBUG oslo_concurrency.lockutils [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.762 2 DEBUG oslo_concurrency.lockutils [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.762 2 DEBUG nova.compute.manager [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] No waiting events found dispatching network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.762 2 WARNING nova.compute.manager [req-3dbe905d-b2f9-4859-be57-a1b5fc4e6d37 req-ae404bae-ac5d-44e8-bd2a-eb55de0e0a82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received unexpected event network-vif-plugged-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 for instance with vm_state active and task_state deleting.
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.802 2 DEBUG nova.compute.manager [req-f130021b-4e23-482b-8fc3-3f06559c675f req-cc7889bb-ccda-4586-bd10-e3a111d7501e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Received event network-vif-deleted-43e0d421-2dcd-4a63-a1ab-dc9711a2b840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.824 2 DEBUG oslo_concurrency.lockutils [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.824 2 DEBUG oslo_concurrency.lockutils [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.829 2 DEBUG oslo_concurrency.lockutils [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.887 2 INFO nova.scheduler.client.report [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Deleted allocations for instance b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07
Oct  2 09:03:01 np0005466031 nova_compute[235803]: 2025-10-02 13:03:01.967 2 DEBUG oslo_concurrency.lockutils [None req-25c012b4-cc27-4d92-b252-16c51a52e5a1 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:02.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:03:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:02.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:03:02 np0005466031 podman[313585]: 2025-10-02 13:03:02.628524652 +0000 UTC m=+0.051686580 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:03:02 np0005466031 podman[313586]: 2025-10-02 13:03:02.65242107 +0000 UTC m=+0.076146085 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Oct  2 09:03:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:04.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:03:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:04.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:03:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:04 np0005466031 nova_compute[235803]: 2025-10-02 13:03:04.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:03:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/636709142' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:03:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:03:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/636709142' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:03:05 np0005466031 nova_compute[235803]: 2025-10-02 13:03:05.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:05 np0005466031 nova_compute[235803]: 2025-10-02 13:03:05.650 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:06.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:06.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:07 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:07Z|00697|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:03:07 np0005466031 nova_compute[235803]: 2025-10-02 13:03:07.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:07 np0005466031 nova_compute[235803]: 2025-10-02 13:03:07.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:07 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:07Z|00698|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:03:07 np0005466031 nova_compute[235803]: 2025-10-02 13:03:07.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:08.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:08.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:09 np0005466031 podman[313634]: 2025-10-02 13:03:09.634478505 +0000 UTC m=+0.053567154 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:03:09 np0005466031 podman[313633]: 2025-10-02 13:03:09.635073922 +0000 UTC m=+0.056881570 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:03:09 np0005466031 nova_compute[235803]: 2025-10-02 13:03:09.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:10.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:03:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:10.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:03:10 np0005466031 nova_compute[235803]: 2025-10-02 13:03:10.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:11 np0005466031 nova_compute[235803]: 2025-10-02 13:03:11.158 2 DEBUG nova.virt.libvirt.driver [None req-604532c4-705b-4695-b6ba-bb054b9cdd0c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 09:03:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:12.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:12.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:14.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:14 np0005466031 kernel: tapac143b17-d8 (unregistering): left promiscuous mode
Oct  2 09:03:14 np0005466031 NetworkManager[44907]: <info>  [1759410194.1184] device (tapac143b17-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:03:14 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:14Z|00699|binding|INFO|Releasing lport ac143b17-d8a4-4339-86d5-dd95e00aaf7c from this chassis (sb_readonly=0)
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:14 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:14Z|00700|binding|INFO|Setting lport ac143b17-d8a4-4339-86d5-dd95e00aaf7c down in Southbound
Oct  2 09:03:14 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:14Z|00701|binding|INFO|Removing iface tapac143b17-d8 ovn-installed in OVS
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.137 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:49:4d 10.100.0.10'], port_security=['fa:16:3e:e7:49:4d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4961c831-88fa-4f95-ada9-0cc475306291', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=ac143b17-d8a4-4339-86d5-dd95e00aaf7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.138 141898 INFO neutron.agent.ovn.metadata.agent [-] Port ac143b17-d8a4-4339-86d5-dd95e00aaf7c in datapath 052f341a-0628-4183-a5e0-76312bc986c6 unbound from our chassis#033[00m
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.140 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 052f341a-0628-4183-a5e0-76312bc986c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.144 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4f116d4f-4cee-4409-bdc4-18b681fc8aeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.145 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 namespace which is not needed anymore#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:03:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:14.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:03:14 np0005466031 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000b2.scope: Deactivated successfully.
Oct  2 09:03:14 np0005466031 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000b2.scope: Consumed 13.299s CPU time.
Oct  2 09:03:14 np0005466031 systemd-machined[192227]: Machine qemu-80-instance-000000b2 terminated.
Oct  2 09:03:14 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[313471]: [NOTICE]   (313475) : haproxy version is 2.8.14-c23fe91
Oct  2 09:03:14 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[313471]: [NOTICE]   (313475) : path to executable is /usr/sbin/haproxy
Oct  2 09:03:14 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[313471]: [WARNING]  (313475) : Exiting Master process...
Oct  2 09:03:14 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[313471]: [ALERT]    (313475) : Current worker (313477) exited with code 143 (Terminated)
Oct  2 09:03:14 np0005466031 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[313471]: [WARNING]  (313475) : All workers exited. Exiting... (0)
Oct  2 09:03:14 np0005466031 systemd[1]: libpod-cdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de.scope: Deactivated successfully.
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.378 2 DEBUG nova.compute.manager [req-6bc30fcc-aa54-47b4-8795-e3fa157b7332 req-4623c324-9a96-4651-8632-602defcfa32a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Received event network-vif-unplugged-ac143b17-d8a4-4339-86d5-dd95e00aaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.379 2 DEBUG oslo_concurrency.lockutils [req-6bc30fcc-aa54-47b4-8795-e3fa157b7332 req-4623c324-9a96-4651-8632-602defcfa32a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4961c831-88fa-4f95-ada9-0cc475306291-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.379 2 DEBUG oslo_concurrency.lockutils [req-6bc30fcc-aa54-47b4-8795-e3fa157b7332 req-4623c324-9a96-4651-8632-602defcfa32a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.379 2 DEBUG oslo_concurrency.lockutils [req-6bc30fcc-aa54-47b4-8795-e3fa157b7332 req-4623c324-9a96-4651-8632-602defcfa32a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.379 2 DEBUG nova.compute.manager [req-6bc30fcc-aa54-47b4-8795-e3fa157b7332 req-4623c324-9a96-4651-8632-602defcfa32a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] No waiting events found dispatching network-vif-unplugged-ac143b17-d8a4-4339-86d5-dd95e00aaf7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.379 2 WARNING nova.compute.manager [req-6bc30fcc-aa54-47b4-8795-e3fa157b7332 req-4623c324-9a96-4651-8632-602defcfa32a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Received unexpected event network-vif-unplugged-ac143b17-d8a4-4339-86d5-dd95e00aaf7c for instance with vm_state active and task_state powering-off.#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.381 2 INFO nova.virt.libvirt.driver [None req-604532c4-705b-4695-b6ba-bb054b9cdd0c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 09:03:14 np0005466031 podman[313747]: 2025-10-02 13:03:14.382113133 +0000 UTC m=+0.147815690 container died cdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.390 2 INFO nova.virt.libvirt.driver [-] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Instance destroyed successfully.#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.390 2 DEBUG nova.objects.instance [None req-604532c4-705b-4695-b6ba-bb054b9cdd0c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'numa_topology' on Instance uuid 4961c831-88fa-4f95-ada9-0cc475306291 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.408 2 DEBUG nova.compute.manager [None req-604532c4-705b-4695-b6ba-bb054b9cdd0c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.481 2 DEBUG oslo_concurrency.lockutils [None req-604532c4-705b-4695-b6ba-bb054b9cdd0c b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:14 np0005466031 systemd[1]: var-lib-containers-storage-overlay-dfff8af61fccffa78ec24d630b29aed421aae91b5d45e6b8d0b7d9d5321cca13-merged.mount: Deactivated successfully.
Oct  2 09:03:14 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de-userdata-shm.mount: Deactivated successfully.
Oct  2 09:03:14 np0005466031 podman[313747]: 2025-10-02 13:03:14.710515765 +0000 UTC m=+0.476218322 container cleanup cdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:03:14 np0005466031 systemd[1]: libpod-conmon-cdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de.scope: Deactivated successfully.
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.771 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410179.771167, b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.772 2 INFO nova.compute.manager [-] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:03:14 np0005466031 podman[313789]: 2025-10-02 13:03:14.774764646 +0000 UTC m=+0.038953693 container remove cdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.780 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fe84e2ad-0853-4cd3-b350-1adbaf5cecda]: (4, ('Thu Oct  2 01:03:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 (cdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de)\ncdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de\nThu Oct  2 01:03:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 (cdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de)\ncdc7d0e8340e3914471536047b181c4d5c5ed8c7a4ce2ae485aadceafd9af8de\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.782 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c141e4-8008-4eed-8b23-d9b23305cc8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.782 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:14 np0005466031 kernel: tap052f341a-00: left promiscuous mode
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.792 2 DEBUG nova.compute.manager [None req-3154e7fe-3780-467e-a8c4-b02f1f8bfb84 - - - - - -] [instance: b9c7ee34-5f00-4ab4-99bd-7bcf92b1ea07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.805 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[06445574-0b0e-4d6d-afb8-1f4b1e316ba4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:14 np0005466031 nova_compute[235803]: 2025-10-02 13:03:14.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.839 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d1aecdd0-99ed-4f3f-8e2e-34b20c13a26e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.840 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5dfc2fdd-bd78-4a8c-adb7-3039a791ec83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.854 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5fc8e7-cf49-428a-8a0f-5c4a7d377082]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803117, 'reachable_time': 17218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313807, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.857 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:03:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:14.857 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5dc176-c22c-45dd-b91d-ca0aee8cebe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:14 np0005466031 systemd[1]: run-netns-ovnmeta\x2d052f341a\x2d0628\x2d4183\x2da5e0\x2d76312bc986c6.mount: Deactivated successfully.
Oct  2 09:03:15 np0005466031 nova_compute[235803]: 2025-10-02 13:03:15.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:03:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:16.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:03:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:03:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:16.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:03:16 np0005466031 nova_compute[235803]: 2025-10-02 13:03:16.460 2 DEBUG nova.compute.manager [req-4fd79dc9-bed0-46d8-ae2f-3730f6b0aeee req-8a25d9ec-67e3-4363-a5a9-19b830e5833a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Received event network-vif-plugged-ac143b17-d8a4-4339-86d5-dd95e00aaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:16 np0005466031 nova_compute[235803]: 2025-10-02 13:03:16.461 2 DEBUG oslo_concurrency.lockutils [req-4fd79dc9-bed0-46d8-ae2f-3730f6b0aeee req-8a25d9ec-67e3-4363-a5a9-19b830e5833a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4961c831-88fa-4f95-ada9-0cc475306291-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:16 np0005466031 nova_compute[235803]: 2025-10-02 13:03:16.461 2 DEBUG oslo_concurrency.lockutils [req-4fd79dc9-bed0-46d8-ae2f-3730f6b0aeee req-8a25d9ec-67e3-4363-a5a9-19b830e5833a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:16 np0005466031 nova_compute[235803]: 2025-10-02 13:03:16.461 2 DEBUG oslo_concurrency.lockutils [req-4fd79dc9-bed0-46d8-ae2f-3730f6b0aeee req-8a25d9ec-67e3-4363-a5a9-19b830e5833a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:16 np0005466031 nova_compute[235803]: 2025-10-02 13:03:16.462 2 DEBUG nova.compute.manager [req-4fd79dc9-bed0-46d8-ae2f-3730f6b0aeee req-8a25d9ec-67e3-4363-a5a9-19b830e5833a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] No waiting events found dispatching network-vif-plugged-ac143b17-d8a4-4339-86d5-dd95e00aaf7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:16 np0005466031 nova_compute[235803]: 2025-10-02 13:03:16.462 2 WARNING nova.compute.manager [req-4fd79dc9-bed0-46d8-ae2f-3730f6b0aeee req-8a25d9ec-67e3-4363-a5a9-19b830e5833a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Received unexpected event network-vif-plugged-ac143b17-d8a4-4339-86d5-dd95e00aaf7c for instance with vm_state stopped and task_state None.#033[00m
Oct  2 09:03:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:03:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2226714011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:03:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:17.189 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:17.190 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.554 2 DEBUG oslo_concurrency.lockutils [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "4961c831-88fa-4f95-ada9-0cc475306291" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.554 2 DEBUG oslo_concurrency.lockutils [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.554 2 DEBUG oslo_concurrency.lockutils [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "4961c831-88fa-4f95-ada9-0cc475306291-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.555 2 DEBUG oslo_concurrency.lockutils [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.555 2 DEBUG oslo_concurrency.lockutils [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.556 2 INFO nova.compute.manager [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Terminating instance#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.557 2 DEBUG nova.compute.manager [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.561 2 INFO nova.virt.libvirt.driver [-] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Instance destroyed successfully.#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.562 2 DEBUG nova.objects.instance [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'resources' on Instance uuid 4961c831-88fa-4f95-ada9-0cc475306291 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.575 2 DEBUG nova.virt.libvirt.vif [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:02:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-263857536',display_name='tempest-Íñstáñcé-2119398679',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-263857536',id=178,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:02:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-063de3i4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_p
roject_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:03:16Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=4961c831-88fa-4f95-ada9-0cc475306291,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "address": "fa:16:3e:e7:49:4d", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac143b17-d8", "ovs_interfaceid": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.576 2 DEBUG nova.network.os_vif_util [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "address": "fa:16:3e:e7:49:4d", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac143b17-d8", "ovs_interfaceid": "ac143b17-d8a4-4339-86d5-dd95e00aaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.576 2 DEBUG nova.network.os_vif_util [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:49:4d,bridge_name='br-int',has_traffic_filtering=True,id=ac143b17-d8a4-4339-86d5-dd95e00aaf7c,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac143b17-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.577 2 DEBUG os_vif [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:49:4d,bridge_name='br-int',has_traffic_filtering=True,id=ac143b17-d8a4-4339-86d5-dd95e00aaf7c,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac143b17-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.578 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac143b17-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.583 2 INFO os_vif [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:49:4d,bridge_name='br-int',has_traffic_filtering=True,id=ac143b17-d8a4-4339-86d5-dd95e00aaf7c,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac143b17-d8')#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.663 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 09:03:17 np0005466031 nova_compute[235803]: 2025-10-02 13:03:17.664 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:03:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:18.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:18.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:18 np0005466031 nova_compute[235803]: 2025-10-02 13:03:18.298 2 INFO nova.virt.libvirt.driver [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Deleting instance files /var/lib/nova/instances/4961c831-88fa-4f95-ada9-0cc475306291_del#033[00m
Oct  2 09:03:18 np0005466031 nova_compute[235803]: 2025-10-02 13:03:18.299 2 INFO nova.virt.libvirt.driver [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Deletion of /var/lib/nova/instances/4961c831-88fa-4f95-ada9-0cc475306291_del complete#033[00m
Oct  2 09:03:18 np0005466031 nova_compute[235803]: 2025-10-02 13:03:18.356 2 INFO nova.compute.manager [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:03:18 np0005466031 nova_compute[235803]: 2025-10-02 13:03:18.356 2 DEBUG oslo.service.loopingcall [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:03:18 np0005466031 nova_compute[235803]: 2025-10-02 13:03:18.357 2 DEBUG nova.compute.manager [-] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:03:18 np0005466031 nova_compute[235803]: 2025-10-02 13:03:18.357 2 DEBUG nova.network.neutron [-] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:03:18 np0005466031 nova_compute[235803]: 2025-10-02 13:03:18.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:18 np0005466031 nova_compute[235803]: 2025-10-02 13:03:18.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:18 np0005466031 nova_compute[235803]: 2025-10-02 13:03:18.664 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:18 np0005466031 nova_compute[235803]: 2025-10-02 13:03:18.664 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:18 np0005466031 nova_compute[235803]: 2025-10-02 13:03:18.665 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:18 np0005466031 nova_compute[235803]: 2025-10-02 13:03:18.665 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:03:18 np0005466031 nova_compute[235803]: 2025-10-02 13:03:18.665 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:03:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2650273310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:03:19 np0005466031 nova_compute[235803]: 2025-10-02 13:03:19.104 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:19 np0005466031 nova_compute[235803]: 2025-10-02 13:03:19.318 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:03:19 np0005466031 nova_compute[235803]: 2025-10-02 13:03:19.319 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4205MB free_disk=20.785274505615234GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:03:19 np0005466031 nova_compute[235803]: 2025-10-02 13:03:19.320 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:19 np0005466031 nova_compute[235803]: 2025-10-02 13:03:19.320 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:19 np0005466031 nova_compute[235803]: 2025-10-02 13:03:19.513 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 4961c831-88fa-4f95-ada9-0cc475306291 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 09:03:19 np0005466031 nova_compute[235803]: 2025-10-02 13:03:19.514 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 09:03:19 np0005466031 nova_compute[235803]: 2025-10-02 13:03:19.514 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 09:03:19 np0005466031 nova_compute[235803]: 2025-10-02 13:03:19.544 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:03:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:03:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:20.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:03:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:03:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4166443559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.102 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.108 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.182 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:03:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:20.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.322 2 DEBUG nova.network.neutron [-] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.351 2 DEBUG nova.compute.manager [req-e53d2c43-1f32-4a10-89fa-4affa59222bc req-dc0039e5-046b-4fa1-a0f4-6b7cd069c533 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Received event network-vif-deleted-ac143b17-d8a4-4339-86d5-dd95e00aaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.351 2 INFO nova.compute.manager [req-e53d2c43-1f32-4a10-89fa-4affa59222bc req-dc0039e5-046b-4fa1-a0f4-6b7cd069c533 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Neutron deleted interface ac143b17-d8a4-4339-86d5-dd95e00aaf7c; detaching it from the instance and deleting it from the info cache
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.352 2 DEBUG nova.network.neutron [req-e53d2c43-1f32-4a10-89fa-4affa59222bc req-dc0039e5-046b-4fa1-a0f4-6b7cd069c533 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.419 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.421 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.517 2 INFO nova.compute.manager [-] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Took 2.16 seconds to deallocate network for instance.
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.526 2 DEBUG nova.compute.manager [req-e53d2c43-1f32-4a10-89fa-4affa59222bc req-dc0039e5-046b-4fa1-a0f4-6b7cd069c533 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Detach interface failed, port_id=ac143b17-d8a4-4339-86d5-dd95e00aaf7c, reason: Instance 4961c831-88fa-4f95-ada9-0cc475306291 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.706 2 DEBUG oslo_concurrency.lockutils [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.707 2 DEBUG oslo_concurrency.lockutils [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:20 np0005466031 nova_compute[235803]: 2025-10-02 13:03:20.783 2 DEBUG oslo_concurrency.processutils [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:03:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:03:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1206424062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:03:21 np0005466031 nova_compute[235803]: 2025-10-02 13:03:21.413 2 DEBUG oslo_concurrency.processutils [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:03:21 np0005466031 nova_compute[235803]: 2025-10-02 13:03:21.419 2 DEBUG nova.compute.provider_tree [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:03:21 np0005466031 nova_compute[235803]: 2025-10-02 13:03:21.437 2 DEBUG nova.scheduler.client.report [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:03:21 np0005466031 nova_compute[235803]: 2025-10-02 13:03:21.473 2 DEBUG oslo_concurrency.lockutils [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:21 np0005466031 nova_compute[235803]: 2025-10-02 13:03:21.517 2 INFO nova.scheduler.client.report [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Deleted allocations for instance 4961c831-88fa-4f95-ada9-0cc475306291
Oct  2 09:03:21 np0005466031 nova_compute[235803]: 2025-10-02 13:03:21.660 2 DEBUG oslo_concurrency.lockutils [None req-d102ae54-7267-48a5-94ea-61a33cac05e8 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "4961c831-88fa-4f95-ada9-0cc475306291" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:22.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.093 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "04c07020-71d5-4a5c-9f1b-cab14e08e014" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.094 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.113 2 DEBUG nova.compute.manager [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.185 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.186 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.191 2 DEBUG nova.virt.hardware [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.192 2 INFO nova.compute.claims [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Claim successful on node compute-2.ctlplane.example.com
Oct  2 09:03:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:22.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.424 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.457 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:03:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2882987751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.912 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.917 2 DEBUG nova.compute.provider_tree [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.937 2 DEBUG nova.scheduler.client.report [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.962 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:22 np0005466031 nova_compute[235803]: 2025-10-02 13:03:22.963 2 DEBUG nova.compute.manager [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.020 2 DEBUG nova.compute.manager [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.021 2 DEBUG nova.network.neutron [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.048 2 INFO nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.067 2 DEBUG nova.compute.manager [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.156 2 DEBUG nova.compute.manager [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.157 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.157 2 INFO nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Creating image(s)
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.340 2 DEBUG nova.storage.rbd_utils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 04c07020-71d5-4a5c-9f1b-cab14e08e014_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.366 2 DEBUG nova.storage.rbd_utils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 04c07020-71d5-4a5c-9f1b-cab14e08e014_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.390 2 DEBUG nova.storage.rbd_utils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 04c07020-71d5-4a5c-9f1b-cab14e08e014_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.394 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.428 2 DEBUG nova.policy [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.466 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.467 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.467 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.468 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.491 2 DEBUG nova.storage.rbd_utils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 04c07020-71d5-4a5c-9f1b-cab14e08e014_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:03:23 np0005466031 nova_compute[235803]: 2025-10-02 13:03:23.494 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 04c07020-71d5-4a5c-9f1b-cab14e08e014_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:03:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:24.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:03:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:03:24 np0005466031 nova_compute[235803]: 2025-10-02 13:03:24.500 2 DEBUG nova.network.neutron [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Successfully created port: 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 09:03:24 np0005466031 nova_compute[235803]: 2025-10-02 13:03:24.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:03:24 np0005466031 nova_compute[235803]: 2025-10-02 13:03:24.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 09:03:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.011 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 04c07020-71d5-4a5c-9f1b-cab14e08e014_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.079 2 DEBUG nova.storage.rbd_utils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] resizing rbd image 04c07020-71d5-4a5c-9f1b-cab14e08e014_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 09:03:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:25.192 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.250 2 DEBUG nova.network.neutron [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Successfully updated port: 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.266 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.267 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.267 2 DEBUG nova.network.neutron [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.393 2 DEBUG nova.network.neutron [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.401 2 DEBUG nova.objects.instance [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid 04c07020-71d5-4a5c-9f1b-cab14e08e014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.414 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.415 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Ensure instance console log exists: /var/lib/nova/instances/04c07020-71d5-4a5c-9f1b-cab14e08e014/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.416 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.416 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.416 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:25 np0005466031 nova_compute[235803]: 2025-10-02 13:03:25.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:25.871 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:25.872 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:25.872 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:26.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:03:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:26.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.220 2 DEBUG nova.network.neutron [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Updating instance_info_cache with network_info: [{"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.236 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.237 2 DEBUG nova.compute.manager [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Instance network_info: |[{"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.239 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Start _get_guest_xml network_info=[{"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.243 2 WARNING nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.248 2 DEBUG nova.virt.libvirt.host [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.248 2 DEBUG nova.virt.libvirt.host [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.251 2 DEBUG nova.virt.libvirt.host [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.252 2 DEBUG nova.virt.libvirt.host [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.252 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.253 2 DEBUG nova.virt.hardware [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.253 2 DEBUG nova.virt.hardware [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.253 2 DEBUG nova.virt.hardware [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.253 2 DEBUG nova.virt.hardware [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.253 2 DEBUG nova.virt.hardware [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.254 2 DEBUG nova.virt.hardware [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.254 2 DEBUG nova.virt.hardware [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.254 2 DEBUG nova.virt.hardware [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.254 2 DEBUG nova.virt.hardware [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.254 2 DEBUG nova.virt.hardware [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.255 2 DEBUG nova.virt.hardware [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.257 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.655 2 DEBUG nova.compute.manager [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received event network-changed-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.655 2 DEBUG nova.compute.manager [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Refreshing instance network info cache due to event network-changed-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.656 2 DEBUG oslo_concurrency.lockutils [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.657 2 DEBUG oslo_concurrency.lockutils [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.657 2 DEBUG nova.network.neutron [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Refreshing network info cache for port 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:03:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:03:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3195859859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.709 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.736 2 DEBUG nova.storage.rbd_utils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 04c07020-71d5-4a5c-9f1b-cab14e08e014_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:03:26 np0005466031 nova_compute[235803]: 2025-10-02 13:03:26.740 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:03:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4201269597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.161 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.163 2 DEBUG nova.virt.libvirt.vif [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1545583047',display_name='tempest-TestNetworkAdvancedServerOps-server-1545583047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1545583047',id=180,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNqGiqvBYYQgcx8Ikwu0eTZOIhMMwxE6EFDSk2YFEplSUAvLfaz/Uj0dUiewPJq/5ERInECgxhZXsqalUYL+uLq+TdPPo9K8BMtPccFdyECaMZFinm6Te4479GTN81vtA==',key_name='tempest-TestNetworkAdvancedServerOps-1958524571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-xtui4jjk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:03:23Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=04c07020-71d5-4a5c-9f1b-cab14e08e014,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.163 2 DEBUG nova.network.os_vif_util [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.164 2 DEBUG nova.network.os_vif_util [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.165 2 DEBUG nova.objects.instance [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 04c07020-71d5-4a5c-9f1b-cab14e08e014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.193 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  <uuid>04c07020-71d5-4a5c-9f1b-cab14e08e014</uuid>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  <name>instance-000000b4</name>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1545583047</nova:name>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:03:26</nova:creationTime>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <nova:port uuid="0b22044a-cf3b-4bbe-ab44-dbcdd5becd06">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <entry name="serial">04c07020-71d5-4a5c-9f1b-cab14e08e014</entry>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <entry name="uuid">04c07020-71d5-4a5c-9f1b-cab14e08e014</entry>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/04c07020-71d5-4a5c-9f1b-cab14e08e014_disk">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/04c07020-71d5-4a5c-9f1b-cab14e08e014_disk.config">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:df:fc:c7"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <target dev="tap0b22044a-cf"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/04c07020-71d5-4a5c-9f1b-cab14e08e014/console.log" append="off"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:03:27 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:03:27 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:03:27 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:03:27 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.194 2 DEBUG nova.compute.manager [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Preparing to wait for external event network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.195 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.195 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.195 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.195 2 DEBUG nova.virt.libvirt.vif [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1545583047',display_name='tempest-TestNetworkAdvancedServerOps-server-1545583047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1545583047',id=180,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNqGiqvBYYQgcx8Ikwu0eTZOIhMMwxE6EFDSk2YFEplSUAvLfaz/Uj0dUiewPJq/5ERInECgxhZXsqalUYL+uLq+TdPPo9K8BMtPccFdyECaMZFinm6Te4479GTN81vtA==',key_name='tempest-TestNetworkAdvancedServerOps-1958524571',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-xtui4jjk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:03:23Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=04c07020-71d5-4a5c-9f1b-cab14e08e014,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.196 2 DEBUG nova.network.os_vif_util [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.196 2 DEBUG nova.network.os_vif_util [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.196 2 DEBUG os_vif [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.198 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b22044a-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.201 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b22044a-cf, col_values=(('external_ids', {'iface-id': '0b22044a-cf3b-4bbe-ab44-dbcdd5becd06', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:fc:c7', 'vm-uuid': '04c07020-71d5-4a5c-9f1b-cab14e08e014'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:27 np0005466031 NetworkManager[44907]: <info>  [1759410207.2033] manager: (tap0b22044a-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.207 2 INFO os_vif [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf')#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.292 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.293 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.293 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:df:fc:c7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.294 2 INFO nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Using config drive#033[00m
Oct  2 09:03:27 np0005466031 nova_compute[235803]: 2025-10-02 13:03:27.316 2 DEBUG nova.storage.rbd_utils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 04c07020-71d5-4a5c-9f1b-cab14e08e014_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:03:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:28.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:28.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:28 np0005466031 nova_compute[235803]: 2025-10-02 13:03:28.387 2 INFO nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Creating config drive at /var/lib/nova/instances/04c07020-71d5-4a5c-9f1b-cab14e08e014/disk.config#033[00m
Oct  2 09:03:28 np0005466031 nova_compute[235803]: 2025-10-02 13:03:28.394 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/04c07020-71d5-4a5c-9f1b-cab14e08e014/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp02q_r33i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:28 np0005466031 nova_compute[235803]: 2025-10-02 13:03:28.547 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/04c07020-71d5-4a5c-9f1b-cab14e08e014/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp02q_r33i" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:28 np0005466031 nova_compute[235803]: 2025-10-02 13:03:28.585 2 DEBUG nova.storage.rbd_utils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image 04c07020-71d5-4a5c-9f1b-cab14e08e014_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:03:28 np0005466031 nova_compute[235803]: 2025-10-02 13:03:28.590 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/04c07020-71d5-4a5c-9f1b-cab14e08e014/disk.config 04c07020-71d5-4a5c-9f1b-cab14e08e014_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:28 np0005466031 nova_compute[235803]: 2025-10-02 13:03:28.752 2 DEBUG oslo_concurrency.processutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/04c07020-71d5-4a5c-9f1b-cab14e08e014/disk.config 04c07020-71d5-4a5c-9f1b-cab14e08e014_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:28 np0005466031 nova_compute[235803]: 2025-10-02 13:03:28.753 2 INFO nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Deleting local config drive /var/lib/nova/instances/04c07020-71d5-4a5c-9f1b-cab14e08e014/disk.config because it was imported into RBD.#033[00m
Oct  2 09:03:28 np0005466031 kernel: tap0b22044a-cf: entered promiscuous mode
Oct  2 09:03:28 np0005466031 NetworkManager[44907]: <info>  [1759410208.8013] manager: (tap0b22044a-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/316)
Oct  2 09:03:28 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:28Z|00702|binding|INFO|Claiming lport 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 for this chassis.
Oct  2 09:03:28 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:28Z|00703|binding|INFO|0b22044a-cf3b-4bbe-ab44-dbcdd5becd06: Claiming fa:16:3e:df:fc:c7 10.100.0.3
Oct  2 09:03:28 np0005466031 nova_compute[235803]: 2025-10-02 13:03:28.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.813 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:fc:c7 10.100.0.3'], port_security=['fa:16:3e:df:fc:c7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '04c07020-71d5-4a5c-9f1b-cab14e08e014', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1088a2ea-14f6-41e6-bbf1-6c7509c324e2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dada802-689c-4406-8944-09b70eba9ae8, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.814 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 in datapath bec9e2c4-93a6-4b64-8993-f7e5c684995a bound to our chassis#033[00m
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.816 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bec9e2c4-93a6-4b64-8993-f7e5c684995a#033[00m
Oct  2 09:03:28 np0005466031 systemd-udevd[314224]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.829 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[202bd7ac-594d-495b-9a1a-30b7c52a2e52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.829 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbec9e2c4-91 in ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.831 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbec9e2c4-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.831 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd02f1c-17a6-4ba8-9bce-7c2c7b0af589]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.832 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ab0041-bd5b-4648-9210-dad70c71d385]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:28 np0005466031 NetworkManager[44907]: <info>  [1759410208.8381] device (tap0b22044a-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:03:28 np0005466031 systemd-machined[192227]: New machine qemu-81-instance-000000b4.
Oct  2 09:03:28 np0005466031 NetworkManager[44907]: <info>  [1759410208.8388] device (tap0b22044a-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.843 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5e7511-4b10-47be-a8a2-2ca09c3279d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:28 np0005466031 systemd[1]: Started Virtual Machine qemu-81-instance-000000b4.
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.866 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[06a5db4f-d19c-4872-9316-cf5bcc78ead8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:28 np0005466031 nova_compute[235803]: 2025-10-02 13:03:28.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:28 np0005466031 nova_compute[235803]: 2025-10-02 13:03:28.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:28 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:28Z|00704|binding|INFO|Setting lport 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 ovn-installed in OVS
Oct  2 09:03:28 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:28Z|00705|binding|INFO|Setting lport 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 up in Southbound
Oct  2 09:03:28 np0005466031 nova_compute[235803]: 2025-10-02 13:03:28.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.894 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[878520a8-e49b-4b99-990c-afd99d2cb99c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:28 np0005466031 NetworkManager[44907]: <info>  [1759410208.9012] manager: (tapbec9e2c4-90): new Veth device (/org/freedesktop/NetworkManager/Devices/317)
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.901 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[16b1589a-7fb2-4895-bcbf-b567d01bab84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.932 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c3efda46-8850-499b-8095-15c7addae77f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.935 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a89e60-a7a8-4fb2-a219-a2b26442fa67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:28 np0005466031 NetworkManager[44907]: <info>  [1759410208.9540] device (tapbec9e2c4-90): carrier: link connected
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.958 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[6c0ceb58-c004-4c44-872e-ba7c9d6ab94c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.972 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ead5ef-8abf-40dc-9e25-f467b5044b22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbec9e2c4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:0e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806454, 'reachable_time': 41474, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314257, 'error': None, 'target': 'ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:28.989 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8a551fc4-c5a7-47cc-8b28-00a641a5c6f4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:e5a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 806454, 'tstamp': 806454}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314258, 'error': None, 'target': 'ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:29.005 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[34e51faf-8ad3-4b49-abec-f1ec54d1039f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbec9e2c4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:0e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806454, 'reachable_time': 41474, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314259, 'error': None, 'target': 'ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:29.039 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a33fcc5a-d93a-481c-9414-7c31ad29283b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:29.097 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[46df2bef-6bb8-45fc-8b5e-695dcd2ff44b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:29.098 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbec9e2c4-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:29.098 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:29.099 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbec9e2c4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:29 np0005466031 NetworkManager[44907]: <info>  [1759410209.1012] manager: (tapbec9e2c4-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Oct  2 09:03:29 np0005466031 kernel: tapbec9e2c4-90: entered promiscuous mode
Oct  2 09:03:29 np0005466031 nova_compute[235803]: 2025-10-02 13:03:29.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:29.103 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbec9e2c4-90, col_values=(('external_ids', {'iface-id': '6e089b66-193b-421b-8274-489bc49dd155'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:29 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:29Z|00706|binding|INFO|Releasing lport 6e089b66-193b-421b-8274-489bc49dd155 from this chassis (sb_readonly=0)
Oct  2 09:03:29 np0005466031 nova_compute[235803]: 2025-10-02 13:03:29.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:29.123 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bec9e2c4-93a6-4b64-8993-f7e5c684995a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bec9e2c4-93a6-4b64-8993-f7e5c684995a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:29.124 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b750b7b0-f8a7-477e-abb8-d06b8e8f04dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:29.126 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-bec9e2c4-93a6-4b64-8993-f7e5c684995a
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/bec9e2c4-93a6-4b64-8993-f7e5c684995a.pid.haproxy
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID bec9e2c4-93a6-4b64-8993-f7e5c684995a
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:03:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:29.128 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'env', 'PROCESS_TAG=haproxy-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bec9e2c4-93a6-4b64-8993-f7e5c684995a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:03:29 np0005466031 nova_compute[235803]: 2025-10-02 13:03:29.381 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410194.3759787, 4961c831-88fa-4f95-ada9-0cc475306291 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:03:29 np0005466031 nova_compute[235803]: 2025-10-02 13:03:29.381 2 INFO nova.compute.manager [-] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:03:29 np0005466031 nova_compute[235803]: 2025-10-02 13:03:29.404 2 DEBUG nova.compute.manager [None req-a3cf970c-a598-46ff-acfa-ed1448a8de82 - - - - - -] [instance: 4961c831-88fa-4f95-ada9-0cc475306291] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:29 np0005466031 nova_compute[235803]: 2025-10-02 13:03:29.456 2 DEBUG nova.network.neutron [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Updated VIF entry in instance network info cache for port 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:03:29 np0005466031 nova_compute[235803]: 2025-10-02 13:03:29.457 2 DEBUG nova.network.neutron [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Updating instance_info_cache with network_info: [{"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:29 np0005466031 nova_compute[235803]: 2025-10-02 13:03:29.481 2 DEBUG nova.compute.manager [req-bf0d7219-9f1e-458d-a9ca-b32043b738d7 req-f059cf7d-dce4-45cd-9632-647849c394d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received event network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:29 np0005466031 nova_compute[235803]: 2025-10-02 13:03:29.482 2 DEBUG oslo_concurrency.lockutils [req-bf0d7219-9f1e-458d-a9ca-b32043b738d7 req-f059cf7d-dce4-45cd-9632-647849c394d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:29 np0005466031 nova_compute[235803]: 2025-10-02 13:03:29.482 2 DEBUG oslo_concurrency.lockutils [req-bf0d7219-9f1e-458d-a9ca-b32043b738d7 req-f059cf7d-dce4-45cd-9632-647849c394d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:29 np0005466031 nova_compute[235803]: 2025-10-02 13:03:29.482 2 DEBUG oslo_concurrency.lockutils [req-bf0d7219-9f1e-458d-a9ca-b32043b738d7 req-f059cf7d-dce4-45cd-9632-647849c394d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:29 np0005466031 nova_compute[235803]: 2025-10-02 13:03:29.483 2 DEBUG nova.compute.manager [req-bf0d7219-9f1e-458d-a9ca-b32043b738d7 req-f059cf7d-dce4-45cd-9632-647849c394d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Processing event network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:03:29 np0005466031 podman[314288]: 2025-10-02 13:03:29.491504341 +0000 UTC m=+0.051795884 container create 017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 09:03:29 np0005466031 nova_compute[235803]: 2025-10-02 13:03:29.498 2 DEBUG oslo_concurrency.lockutils [req-7fd57ab7-d359-4fb1-b6fc-bc96c1847714 req-91eca776-45e6-4a16-98d5-19cc98faca5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:29 np0005466031 systemd[1]: Started libpod-conmon-017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29.scope.
Oct  2 09:03:29 np0005466031 podman[314288]: 2025-10-02 13:03:29.464459281 +0000 UTC m=+0.024750844 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:03:29 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:03:29 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834a3bf6ed76a18b24a729563d7225a90d3b971bbd1d6fdfc298afd14e6fffc3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:03:29 np0005466031 podman[314288]: 2025-10-02 13:03:29.581787102 +0000 UTC m=+0.142078655 container init 017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 09:03:29 np0005466031 podman[314288]: 2025-10-02 13:03:29.587784275 +0000 UTC m=+0.148075818 container start 017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 09:03:29 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[314304]: [NOTICE]   (314308) : New worker (314310) forked
Oct  2 09:03:29 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[314304]: [NOTICE]   (314308) : Loading success.
Oct  2 09:03:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:30.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:30.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.284 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410210.283408, 04c07020-71d5-4a5c-9f1b-cab14e08e014 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.285 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] VM Started (Lifecycle Event)#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.287 2 DEBUG nova.compute.manager [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.291 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.294 2 INFO nova.virt.libvirt.driver [-] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Instance spawned successfully.#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.295 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.341 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.346 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.351 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.351 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.352 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.352 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.353 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.353 2 DEBUG nova.virt.libvirt.driver [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.395 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.395 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410210.2837346, 04c07020-71d5-4a5c-9f1b-cab14e08e014 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.395 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.434 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.439 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410210.2904341, 04c07020-71d5-4a5c-9f1b-cab14e08e014 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.439 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.450 2 INFO nova.compute.manager [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Took 7.29 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.450 2 DEBUG nova.compute.manager [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.458 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.463 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.487 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.532 2 INFO nova.compute.manager [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Took 8.36 seconds to build instance.#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.550 2 DEBUG oslo_concurrency.lockutils [None req-cc97e874-02f8-4f48-af89-5e09befa9420 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:30 np0005466031 nova_compute[235803]: 2025-10-02 13:03:30.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:31 np0005466031 nova_compute[235803]: 2025-10-02 13:03:31.644 2 DEBUG nova.compute.manager [req-a5e46127-3985-45df-bdbf-e74e003e5a99 req-341909fe-df6c-4168-aaad-167f6f4b858d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received event network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:31 np0005466031 nova_compute[235803]: 2025-10-02 13:03:31.644 2 DEBUG oslo_concurrency.lockutils [req-a5e46127-3985-45df-bdbf-e74e003e5a99 req-341909fe-df6c-4168-aaad-167f6f4b858d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:31 np0005466031 nova_compute[235803]: 2025-10-02 13:03:31.644 2 DEBUG oslo_concurrency.lockutils [req-a5e46127-3985-45df-bdbf-e74e003e5a99 req-341909fe-df6c-4168-aaad-167f6f4b858d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:31 np0005466031 nova_compute[235803]: 2025-10-02 13:03:31.645 2 DEBUG oslo_concurrency.lockutils [req-a5e46127-3985-45df-bdbf-e74e003e5a99 req-341909fe-df6c-4168-aaad-167f6f4b858d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:31 np0005466031 nova_compute[235803]: 2025-10-02 13:03:31.645 2 DEBUG nova.compute.manager [req-a5e46127-3985-45df-bdbf-e74e003e5a99 req-341909fe-df6c-4168-aaad-167f6f4b858d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] No waiting events found dispatching network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:31 np0005466031 nova_compute[235803]: 2025-10-02 13:03:31.645 2 WARNING nova.compute.manager [req-a5e46127-3985-45df-bdbf-e74e003e5a99 req-341909fe-df6c-4168-aaad-167f6f4b858d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received unexpected event network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:03:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:32.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:32 np0005466031 nova_compute[235803]: 2025-10-02 13:03:32.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:03:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:32.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:03:33 np0005466031 podman[314413]: 2025-10-02 13:03:33.678987768 +0000 UTC m=+0.097982273 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 09:03:33 np0005466031 podman[314414]: 2025-10-02 13:03:33.703764162 +0000 UTC m=+0.122951812 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:03:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:03:34 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2683143405' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:03:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:03:34 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2683143405' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:03:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:34.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:34.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:35 np0005466031 nova_compute[235803]: 2025-10-02 13:03:35.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:35 np0005466031 NetworkManager[44907]: <info>  [1759410215.2880] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Oct  2 09:03:35 np0005466031 NetworkManager[44907]: <info>  [1759410215.2890] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Oct  2 09:03:35 np0005466031 nova_compute[235803]: 2025-10-02 13:03:35.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:35 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:35Z|00707|binding|INFO|Releasing lport 6e089b66-193b-421b-8274-489bc49dd155 from this chassis (sb_readonly=0)
Oct  2 09:03:35 np0005466031 nova_compute[235803]: 2025-10-02 13:03:35.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:35 np0005466031 nova_compute[235803]: 2025-10-02 13:03:35.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:35 np0005466031 nova_compute[235803]: 2025-10-02 13:03:35.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:35 np0005466031 nova_compute[235803]: 2025-10-02 13:03:35.803 2 DEBUG nova.compute.manager [req-64e865e8-016e-495d-b4b3-aee40d96d04a req-3cd7d026-3c56-420c-b31f-d6e8b0d7fb31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received event network-changed-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:35 np0005466031 nova_compute[235803]: 2025-10-02 13:03:35.803 2 DEBUG nova.compute.manager [req-64e865e8-016e-495d-b4b3-aee40d96d04a req-3cd7d026-3c56-420c-b31f-d6e8b0d7fb31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Refreshing instance network info cache due to event network-changed-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:03:35 np0005466031 nova_compute[235803]: 2025-10-02 13:03:35.804 2 DEBUG oslo_concurrency.lockutils [req-64e865e8-016e-495d-b4b3-aee40d96d04a req-3cd7d026-3c56-420c-b31f-d6e8b0d7fb31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:35 np0005466031 nova_compute[235803]: 2025-10-02 13:03:35.804 2 DEBUG oslo_concurrency.lockutils [req-64e865e8-016e-495d-b4b3-aee40d96d04a req-3cd7d026-3c56-420c-b31f-d6e8b0d7fb31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:35 np0005466031 nova_compute[235803]: 2025-10-02 13:03:35.804 2 DEBUG nova.network.neutron [req-64e865e8-016e-495d-b4b3-aee40d96d04a req-3cd7d026-3c56-420c-b31f-d6e8b0d7fb31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Refreshing network info cache for port 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:03:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:03:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:36.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:03:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:03:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:36.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:03:36 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:36Z|00708|binding|INFO|Releasing lport 6e089b66-193b-421b-8274-489bc49dd155 from this chassis (sb_readonly=0)
Oct  2 09:03:37 np0005466031 nova_compute[235803]: 2025-10-02 13:03:37.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:37 np0005466031 nova_compute[235803]: 2025-10-02 13:03:37.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:38.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:38 np0005466031 nova_compute[235803]: 2025-10-02 13:03:38.146 2 DEBUG nova.network.neutron [req-64e865e8-016e-495d-b4b3-aee40d96d04a req-3cd7d026-3c56-420c-b31f-d6e8b0d7fb31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Updated VIF entry in instance network info cache for port 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:03:38 np0005466031 nova_compute[235803]: 2025-10-02 13:03:38.147 2 DEBUG nova.network.neutron [req-64e865e8-016e-495d-b4b3-aee40d96d04a req-3cd7d026-3c56-420c-b31f-d6e8b0d7fb31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Updating instance_info_cache with network_info: [{"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:38 np0005466031 nova_compute[235803]: 2025-10-02 13:03:38.185 2 DEBUG oslo_concurrency.lockutils [req-64e865e8-016e-495d-b4b3-aee40d96d04a req-3cd7d026-3c56-420c-b31f-d6e8b0d7fb31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:38.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:39 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:39Z|00709|binding|INFO|Releasing lport 6e089b66-193b-421b-8274-489bc49dd155 from this chassis (sb_readonly=0)
Oct  2 09:03:39 np0005466031 nova_compute[235803]: 2025-10-02 13:03:39.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:40.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:40.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:40 np0005466031 podman[314465]: 2025-10-02 13:03:40.619508878 +0000 UTC m=+0.048461347 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  2 09:03:40 np0005466031 nova_compute[235803]: 2025-10-02 13:03:40.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:40 np0005466031 podman[314464]: 2025-10-02 13:03:40.629531747 +0000 UTC m=+0.060281188 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:03:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:42.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:42 np0005466031 nova_compute[235803]: 2025-10-02 13:03:42.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:42.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.074677) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223074717, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 685, "num_deletes": 251, "total_data_size": 1130597, "memory_usage": 1158496, "flush_reason": "Manual Compaction"}
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223083001, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 745488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65560, "largest_seqno": 66240, "table_properties": {"data_size": 742207, "index_size": 1188, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7812, "raw_average_key_size": 19, "raw_value_size": 735589, "raw_average_value_size": 1816, "num_data_blocks": 53, "num_entries": 405, "num_filter_entries": 405, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410179, "oldest_key_time": 1759410179, "file_creation_time": 1759410223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 8371 microseconds, and 3496 cpu microseconds.
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.083047) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 745488 bytes OK
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.083067) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.086902) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.086983) EVENT_LOG_v1 {"time_micros": 1759410223086967, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.087022) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1126871, prev total WAL file size 1126871, number of live WAL files 2.
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.087990) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(728KB)], [132(11MB)]
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223088082, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 12558300, "oldest_snapshot_seqno": -1}
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 8601 keys, 10707772 bytes, temperature: kUnknown
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223175831, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 10707772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10652821, "index_size": 32374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 227701, "raw_average_key_size": 26, "raw_value_size": 10502288, "raw_average_value_size": 1221, "num_data_blocks": 1230, "num_entries": 8601, "num_filter_entries": 8601, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759410223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.176302) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 10707772 bytes
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.178165) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.8 rd, 122.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(31.2) write-amplify(14.4) OK, records in: 9111, records dropped: 510 output_compression: NoCompression
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.178227) EVENT_LOG_v1 {"time_micros": 1759410223178204, "job": 84, "event": "compaction_finished", "compaction_time_micros": 87319, "compaction_time_cpu_micros": 31045, "output_level": 6, "num_output_files": 1, "total_output_size": 10707772, "num_input_records": 9111, "num_output_records": 8601, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223179009, "job": 84, "event": "table_file_deletion", "file_number": 134}
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223182085, "job": 84, "event": "table_file_deletion", "file_number": 132}
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.087754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.182241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.182249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.182251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.182253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:43 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:03:43.182255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:43 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:43Z|00710|binding|INFO|Releasing lport 6e089b66-193b-421b-8274-489bc49dd155 from this chassis (sb_readonly=0)
Oct  2 09:03:43 np0005466031 nova_compute[235803]: 2025-10-02 13:03:43.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:03:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:44.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:03:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:44.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:45 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:45Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:df:fc:c7 10.100.0.3
Oct  2 09:03:45 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:45Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:fc:c7 10.100.0.3
Oct  2 09:03:45 np0005466031 nova_compute[235803]: 2025-10-02 13:03:45.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:46.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:46.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:46 np0005466031 nova_compute[235803]: 2025-10-02 13:03:46.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:47 np0005466031 nova_compute[235803]: 2025-10-02 13:03:47.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:03:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:48.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:03:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:03:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:48.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:03:48 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:03:48 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:03:48 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:03:49 np0005466031 nova_compute[235803]: 2025-10-02 13:03:49.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:49 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:49Z|00711|binding|INFO|Releasing lport 6e089b66-193b-421b-8274-489bc49dd155 from this chassis (sb_readonly=0)
Oct  2 09:03:50 np0005466031 nova_compute[235803]: 2025-10-02 13:03:50.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:50.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:03:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:50.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:03:50 np0005466031 nova_compute[235803]: 2025-10-02 13:03:50.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:51 np0005466031 nova_compute[235803]: 2025-10-02 13:03:51.668 2 INFO nova.compute.manager [None req-73a3ff1f-bbea-4de4-a48d-c66c0e2cee77 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Get console output#033[00m
Oct  2 09:03:51 np0005466031 nova_compute[235803]: 2025-10-02 13:03:51.674 18228 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:03:52 np0005466031 nova_compute[235803]: 2025-10-02 13:03:52.001 2 DEBUG oslo_concurrency.lockutils [None req-ce63d5da-da51-47f4-afaa-c9d7fec87226 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "04c07020-71d5-4a5c-9f1b-cab14e08e014" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:52 np0005466031 nova_compute[235803]: 2025-10-02 13:03:52.002 2 DEBUG oslo_concurrency.lockutils [None req-ce63d5da-da51-47f4-afaa-c9d7fec87226 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:52 np0005466031 nova_compute[235803]: 2025-10-02 13:03:52.002 2 DEBUG nova.compute.manager [None req-ce63d5da-da51-47f4-afaa-c9d7fec87226 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:52 np0005466031 nova_compute[235803]: 2025-10-02 13:03:52.005 2 DEBUG nova.compute.manager [None req-ce63d5da-da51-47f4-afaa-c9d7fec87226 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 09:03:52 np0005466031 nova_compute[235803]: 2025-10-02 13:03:52.006 2 DEBUG nova.objects.instance [None req-ce63d5da-da51-47f4-afaa-c9d7fec87226 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'flavor' on Instance uuid 04c07020-71d5-4a5c-9f1b-cab14e08e014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:52 np0005466031 nova_compute[235803]: 2025-10-02 13:03:52.032 2 DEBUG nova.virt.libvirt.driver [None req-ce63d5da-da51-47f4-afaa-c9d7fec87226 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 09:03:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:52.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:52 np0005466031 nova_compute[235803]: 2025-10-02 13:03:52.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:52.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:53 np0005466031 nova_compute[235803]: 2025-10-02 13:03:53.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:54.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:54 np0005466031 nova_compute[235803]: 2025-10-02 13:03:54.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:54.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.047 2 INFO nova.virt.libvirt.driver [None req-ce63d5da-da51-47f4-afaa-c9d7fec87226 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 09:03:55 np0005466031 kernel: tap0b22044a-cf (unregistering): left promiscuous mode
Oct  2 09:03:55 np0005466031 NetworkManager[44907]: <info>  [1759410235.4701] device (tap0b22044a-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:03:55 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:55Z|00712|binding|INFO|Releasing lport 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 from this chassis (sb_readonly=0)
Oct  2 09:03:55 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:55Z|00713|binding|INFO|Setting lport 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 down in Southbound
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:55 np0005466031 ovn_controller[132413]: 2025-10-02T13:03:55Z|00714|binding|INFO|Removing iface tap0b22044a-cf ovn-installed in OVS
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:55.485 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:fc:c7 10.100.0.3'], port_security=['fa:16:3e:df:fc:c7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '04c07020-71d5-4a5c-9f1b-cab14e08e014', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1088a2ea-14f6-41e6-bbf1-6c7509c324e2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dada802-689c-4406-8944-09b70eba9ae8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:03:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:55.486 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 in datapath bec9e2c4-93a6-4b64-8993-f7e5c684995a unbound from our chassis#033[00m
Oct  2 09:03:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:55.487 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bec9e2c4-93a6-4b64-8993-f7e5c684995a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:03:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:55.488 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ee77aa26-5b74-4311-a5df-e1784d877d14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:55.489 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a namespace which is not needed anymore#033[00m
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:55 np0005466031 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000b4.scope: Deactivated successfully.
Oct  2 09:03:55 np0005466031 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000b4.scope: Consumed 14.510s CPU time.
Oct  2 09:03:55 np0005466031 systemd-machined[192227]: Machine qemu-81-instance-000000b4 terminated.
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.685 2 INFO nova.virt.libvirt.driver [-] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Instance destroyed successfully.#033[00m
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.685 2 DEBUG nova.objects.instance [None req-ce63d5da-da51-47f4-afaa-c9d7fec87226 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 04c07020-71d5-4a5c-9f1b-cab14e08e014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.707 2 DEBUG nova.compute.manager [None req-ce63d5da-da51-47f4-afaa-c9d7fec87226 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.773 2 DEBUG oslo_concurrency.lockutils [None req-ce63d5da-da51-47f4-afaa-c9d7fec87226 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:55 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[314304]: [NOTICE]   (314308) : haproxy version is 2.8.14-c23fe91
Oct  2 09:03:55 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[314304]: [NOTICE]   (314308) : path to executable is /usr/sbin/haproxy
Oct  2 09:03:55 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[314304]: [WARNING]  (314308) : Exiting Master process...
Oct  2 09:03:55 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[314304]: [WARNING]  (314308) : Exiting Master process...
Oct  2 09:03:55 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[314304]: [ALERT]    (314308) : Current worker (314310) exited with code 143 (Terminated)
Oct  2 09:03:55 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[314304]: [WARNING]  (314308) : All workers exited. Exiting... (0)
Oct  2 09:03:55 np0005466031 systemd[1]: libpod-017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29.scope: Deactivated successfully.
Oct  2 09:03:55 np0005466031 podman[314764]: 2025-10-02 13:03:55.813143854 +0000 UTC m=+0.235807375 container died 017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 09:03:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:03:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.877 2 DEBUG nova.compute.manager [req-6141b4f1-8850-460c-9eb9-6f62293af69d req-3d9815ae-46b2-45bc-a908-e37d1110debe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received event network-vif-unplugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.878 2 DEBUG oslo_concurrency.lockutils [req-6141b4f1-8850-460c-9eb9-6f62293af69d req-3d9815ae-46b2-45bc-a908-e37d1110debe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.878 2 DEBUG oslo_concurrency.lockutils [req-6141b4f1-8850-460c-9eb9-6f62293af69d req-3d9815ae-46b2-45bc-a908-e37d1110debe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.879 2 DEBUG oslo_concurrency.lockutils [req-6141b4f1-8850-460c-9eb9-6f62293af69d req-3d9815ae-46b2-45bc-a908-e37d1110debe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.879 2 DEBUG nova.compute.manager [req-6141b4f1-8850-460c-9eb9-6f62293af69d req-3d9815ae-46b2-45bc-a908-e37d1110debe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] No waiting events found dispatching network-vif-unplugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:55 np0005466031 nova_compute[235803]: 2025-10-02 13:03:55.879 2 WARNING nova.compute.manager [req-6141b4f1-8850-460c-9eb9-6f62293af69d req-3d9815ae-46b2-45bc-a908-e37d1110debe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received unexpected event network-vif-unplugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 09:03:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:56.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:56.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:56 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29-userdata-shm.mount: Deactivated successfully.
Oct  2 09:03:56 np0005466031 systemd[1]: var-lib-containers-storage-overlay-834a3bf6ed76a18b24a729563d7225a90d3b971bbd1d6fdfc298afd14e6fffc3-merged.mount: Deactivated successfully.
Oct  2 09:03:56 np0005466031 podman[314764]: 2025-10-02 13:03:56.465252803 +0000 UTC m=+0.887916334 container cleanup 017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:03:56 np0005466031 systemd[1]: libpod-conmon-017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29.scope: Deactivated successfully.
Oct  2 09:03:56 np0005466031 podman[314806]: 2025-10-02 13:03:56.834410119 +0000 UTC m=+0.346513545 container remove 017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 09:03:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:56.841 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[12a4ab54-2598-4e34-b724-bb238b86d8d1]: (4, ('Thu Oct  2 01:03:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a (017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29)\n017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29\nThu Oct  2 01:03:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a (017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29)\n017ea8e52828f72efa02fdab69d6e26d9e7123fec86d6e38a553003f06eafa29\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:56.843 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[de8fcd38-cc39-4039-8fa6-219c298778a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:56.844 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbec9e2c4-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:56 np0005466031 kernel: tapbec9e2c4-90: left promiscuous mode
Oct  2 09:03:56 np0005466031 nova_compute[235803]: 2025-10-02 13:03:56.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:56 np0005466031 nova_compute[235803]: 2025-10-02 13:03:56.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:56.867 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1e724f-ffa2-427f-be92-ce827c4d7289]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:56.901 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[62a330f0-fa4b-41a3-803e-a907ae441651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:56.903 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[811f19c7-99d0-4004-af38-5c18368d2542]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:56.916 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[492e20e7-3564-4512-bb37-e3653ee9d792]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806448, 'reachable_time': 26731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314826, 'error': None, 'target': 'ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:56 np0005466031 systemd[1]: run-netns-ovnmeta\x2dbec9e2c4\x2d93a6\x2d4b64\x2d8993\x2df7e5c684995a.mount: Deactivated successfully.
Oct  2 09:03:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:56.922 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:03:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:03:56.922 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[35387de2-3e06-4108-a077-873b7249fc9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:57 np0005466031 nova_compute[235803]: 2025-10-02 13:03:57.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:57 np0005466031 nova_compute[235803]: 2025-10-02 13:03:57.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:58 np0005466031 nova_compute[235803]: 2025-10-02 13:03:58.052 2 DEBUG nova.compute.manager [req-5565d359-8ddc-4151-9b7e-87c35c6351bf req-b277f324-93fe-4e65-b353-be86cd5a65ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received event network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:58 np0005466031 nova_compute[235803]: 2025-10-02 13:03:58.054 2 DEBUG oslo_concurrency.lockutils [req-5565d359-8ddc-4151-9b7e-87c35c6351bf req-b277f324-93fe-4e65-b353-be86cd5a65ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:58 np0005466031 nova_compute[235803]: 2025-10-02 13:03:58.054 2 DEBUG oslo_concurrency.lockutils [req-5565d359-8ddc-4151-9b7e-87c35c6351bf req-b277f324-93fe-4e65-b353-be86cd5a65ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:58 np0005466031 nova_compute[235803]: 2025-10-02 13:03:58.055 2 DEBUG oslo_concurrency.lockutils [req-5565d359-8ddc-4151-9b7e-87c35c6351bf req-b277f324-93fe-4e65-b353-be86cd5a65ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:58 np0005466031 nova_compute[235803]: 2025-10-02 13:03:58.055 2 DEBUG nova.compute.manager [req-5565d359-8ddc-4151-9b7e-87c35c6351bf req-b277f324-93fe-4e65-b353-be86cd5a65ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] No waiting events found dispatching network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:58 np0005466031 nova_compute[235803]: 2025-10-02 13:03:58.055 2 WARNING nova.compute.manager [req-5565d359-8ddc-4151-9b7e-87c35c6351bf req-b277f324-93fe-4e65-b353-be86cd5a65ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received unexpected event network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 09:03:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:03:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:58.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:03:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:03:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 09:03:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:58.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 09:03:59 np0005466031 nova_compute[235803]: 2025-10-02 13:03:59.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:59 np0005466031 nova_compute[235803]: 2025-10-02 13:03:59.382 2 INFO nova.compute.manager [None req-07a3c758-2a09-47bb-8c28-9f7ade75bd66 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Get console output#033[00m
Oct  2 09:03:59 np0005466031 nova_compute[235803]: 2025-10-02 13:03:59.613 2 DEBUG nova.objects.instance [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'flavor' on Instance uuid 04c07020-71d5-4a5c-9f1b-cab14e08e014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:59 np0005466031 nova_compute[235803]: 2025-10-02 13:03:59.638 2 DEBUG oslo_concurrency.lockutils [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:59 np0005466031 nova_compute[235803]: 2025-10-02 13:03:59.639 2 DEBUG oslo_concurrency.lockutils [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:59 np0005466031 nova_compute[235803]: 2025-10-02 13:03:59.639 2 DEBUG nova.network.neutron [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:03:59 np0005466031 nova_compute[235803]: 2025-10-02 13:03:59.639 2 DEBUG nova.objects.instance [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'info_cache' on Instance uuid 04c07020-71d5-4a5c-9f1b-cab14e08e014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:00.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:00.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:00 np0005466031 nova_compute[235803]: 2025-10-02 13:04:00.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.013 2 DEBUG nova.network.neutron [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Updating instance_info_cache with network_info: [{"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.052 2 DEBUG oslo_concurrency.lockutils [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.090 2 INFO nova.virt.libvirt.driver [-] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Instance destroyed successfully.#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.091 2 DEBUG nova.objects.instance [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 04c07020-71d5-4a5c-9f1b-cab14e08e014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.103 2 DEBUG nova.objects.instance [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid 04c07020-71d5-4a5c-9f1b-cab14e08e014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.121 2 DEBUG nova.virt.libvirt.vif [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1545583047',display_name='tempest-TestNetworkAdvancedServerOps-server-1545583047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1545583047',id=180,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNqGiqvBYYQgcx8Ikwu0eTZOIhMMwxE6EFDSk2YFEplSUAvLfaz/Uj0dUiewPJq/5ERInECgxhZXsqalUYL+uLq+TdPPo9K8BMtPccFdyECaMZFinm6Te4479GTN81vtA==',key_name='tempest-TestNetworkAdvancedServerOps-1958524571',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:03:30Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-xtui4jjk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:03:55Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=04c07020-71d5-4a5c-9f1b-cab14e08e014,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.122 2 DEBUG nova.network.os_vif_util [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.123 2 DEBUG nova.network.os_vif_util [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.123 2 DEBUG os_vif [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.127 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b22044a-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.132 2 INFO os_vif [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf')#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.141 2 DEBUG nova.virt.libvirt.driver [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Start _get_guest_xml network_info=[{"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.146 2 WARNING nova.virt.libvirt.driver [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.152 2 DEBUG nova.virt.libvirt.host [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.154 2 DEBUG nova.virt.libvirt.host [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.157 2 DEBUG nova.virt.libvirt.host [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.158 2 DEBUG nova.virt.libvirt.host [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.159 2 DEBUG nova.virt.libvirt.driver [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.159 2 DEBUG nova.virt.hardware [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.160 2 DEBUG nova.virt.hardware [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.160 2 DEBUG nova.virt.hardware [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.160 2 DEBUG nova.virt.hardware [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.160 2 DEBUG nova.virt.hardware [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.160 2 DEBUG nova.virt.hardware [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.161 2 DEBUG nova.virt.hardware [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.161 2 DEBUG nova.virt.hardware [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.161 2 DEBUG nova.virt.hardware [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.161 2 DEBUG nova.virt.hardware [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.162 2 DEBUG nova.virt.hardware [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.162 2 DEBUG nova.objects.instance [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 04c07020-71d5-4a5c-9f1b-cab14e08e014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.257 2 DEBUG oslo_concurrency.processutils [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:04:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3400097481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.698 2 DEBUG oslo_concurrency.processutils [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:01 np0005466031 nova_compute[235803]: 2025-10-02 13:04:01.737 2 DEBUG oslo_concurrency.processutils [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:02.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:04:02 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2489860827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.193 2 DEBUG oslo_concurrency.processutils [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.196 2 DEBUG nova.virt.libvirt.vif [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1545583047',display_name='tempest-TestNetworkAdvancedServerOps-server-1545583047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1545583047',id=180,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNqGiqvBYYQgcx8Ikwu0eTZOIhMMwxE6EFDSk2YFEplSUAvLfaz/Uj0dUiewPJq/5ERInECgxhZXsqalUYL+uLq+TdPPo9K8BMtPccFdyECaMZFinm6Te4479GTN81vtA==',key_name='tempest-TestNetworkAdvancedServerOps-1958524571',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:03:30Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-xtui4jjk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:03:55Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=04c07020-71d5-4a5c-9f1b-cab14e08e014,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.196 2 DEBUG nova.network.os_vif_util [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.197 2 DEBUG nova.network.os_vif_util [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.199 2 DEBUG nova.objects.instance [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 04c07020-71d5-4a5c-9f1b-cab14e08e014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.217 2 DEBUG nova.virt.libvirt.driver [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  <uuid>04c07020-71d5-4a5c-9f1b-cab14e08e014</uuid>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  <name>instance-000000b4</name>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1545583047</nova:name>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:04:01</nova:creationTime>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <nova:port uuid="0b22044a-cf3b-4bbe-ab44-dbcdd5becd06">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <entry name="serial">04c07020-71d5-4a5c-9f1b-cab14e08e014</entry>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <entry name="uuid">04c07020-71d5-4a5c-9f1b-cab14e08e014</entry>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/04c07020-71d5-4a5c-9f1b-cab14e08e014_disk">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/04c07020-71d5-4a5c-9f1b-cab14e08e014_disk.config">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:df:fc:c7"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <target dev="tap0b22044a-cf"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/04c07020-71d5-4a5c-9f1b-cab14e08e014/console.log" append="off"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <input type="keyboard" bus="usb"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:04:02 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:04:02 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:04:02 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:04:02 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.219 2 DEBUG nova.virt.libvirt.driver [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.220 2 DEBUG nova.virt.libvirt.driver [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.221 2 DEBUG nova.virt.libvirt.vif [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1545583047',display_name='tempest-TestNetworkAdvancedServerOps-server-1545583047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1545583047',id=180,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNqGiqvBYYQgcx8Ikwu0eTZOIhMMwxE6EFDSk2YFEplSUAvLfaz/Uj0dUiewPJq/5ERInECgxhZXsqalUYL+uLq+TdPPo9K8BMtPccFdyECaMZFinm6Te4479GTN81vtA==',key_name='tempest-TestNetworkAdvancedServerOps-1958524571',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:03:30Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-xtui4jjk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:03:55Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=04c07020-71d5-4a5c-9f1b-cab14e08e014,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.221 2 DEBUG nova.network.os_vif_util [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.223 2 DEBUG nova.network.os_vif_util [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.223 2 DEBUG os_vif [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.225 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.225 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b22044a-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b22044a-cf, col_values=(('external_ids', {'iface-id': '0b22044a-cf3b-4bbe-ab44-dbcdd5becd06', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:fc:c7', 'vm-uuid': '04c07020-71d5-4a5c-9f1b-cab14e08e014'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:02 np0005466031 NetworkManager[44907]: <info>  [1759410242.2328] manager: (tap0b22044a-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.237 2 INFO os_vif [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf')#033[00m
Oct  2 09:04:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:02.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:02 np0005466031 kernel: tap0b22044a-cf: entered promiscuous mode
Oct  2 09:04:02 np0005466031 NetworkManager[44907]: <info>  [1759410242.3185] manager: (tap0b22044a-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/322)
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:02 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:02Z|00715|binding|INFO|Claiming lport 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 for this chassis.
Oct  2 09:04:02 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:02Z|00716|binding|INFO|0b22044a-cf3b-4bbe-ab44-dbcdd5becd06: Claiming fa:16:3e:df:fc:c7 10.100.0.3
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.329 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:fc:c7 10.100.0.3'], port_security=['fa:16:3e:df:fc:c7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '04c07020-71d5-4a5c-9f1b-cab14e08e014', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1088a2ea-14f6-41e6-bbf1-6c7509c324e2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dada802-689c-4406-8944-09b70eba9ae8, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.331 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 in datapath bec9e2c4-93a6-4b64-8993-f7e5c684995a bound to our chassis#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.332 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bec9e2c4-93a6-4b64-8993-f7e5c684995a#033[00m
Oct  2 09:04:02 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:02Z|00717|binding|INFO|Setting lport 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 ovn-installed in OVS
Oct  2 09:04:02 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:02Z|00718|binding|INFO|Setting lport 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 up in Southbound
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.345 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cf05c4e3-6291-4158-bad4-83e1137b4bcb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.345 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbec9e2c4-91 in ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.346 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbec9e2c4-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.347 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[99f9600a-b074-413d-b2d6-35b69b2db1f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.347 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b45acc-8d55-49c9-aac9-3fff8d859d83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 systemd-udevd[314909]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:04:02 np0005466031 systemd-machined[192227]: New machine qemu-82-instance-000000b4.
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.359 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[6d648364-5564-442f-b456-27f6d400e89b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 NetworkManager[44907]: <info>  [1759410242.3711] device (tap0b22044a-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:04:02 np0005466031 NetworkManager[44907]: <info>  [1759410242.3721] device (tap0b22044a-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.373 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[509f235b-e733-4e57-9654-2fbef728693f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 systemd[1]: Started Virtual Machine qemu-82-instance-000000b4.
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.402 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e915d7d2-7e86-415b-9255-8aba51daaaf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 NetworkManager[44907]: <info>  [1759410242.4089] manager: (tapbec9e2c4-90): new Veth device (/org/freedesktop/NetworkManager/Devices/323)
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.407 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf1dfe1-3313-4b58-af84-7ac48e5dccf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.440 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[04d555eb-3bc8-46b4-921d-a198afa69fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.443 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0df03ba7-054f-4c87-9a88-ad967ffa74f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 NetworkManager[44907]: <info>  [1759410242.4668] device (tapbec9e2c4-90): carrier: link connected
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.471 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4c4ded-c06f-463d-a57f-666c0a51d965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.487 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4612b379-d0c9-4e25-bcc7-bb6731dae4fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbec9e2c4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:0e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809805, 'reachable_time': 28555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314940, 'error': None, 'target': 'ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.503 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[12bf948c-e086-438a-aa1b-738e1609c80a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:e5a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809805, 'tstamp': 809805}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314941, 'error': None, 'target': 'ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.519 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[33fdee85-f4f0-4ea4-9d84-704e89458ae2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbec9e2c4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:0e:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809805, 'reachable_time': 28555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314942, 'error': None, 'target': 'ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.548 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4850ff0f-2e27-4353-ac9e-b821d58b8414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.603 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[280a7874-7325-4b61-85c4-79ee3f8f43e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.604 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbec9e2c4-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.605 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.605 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbec9e2c4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:02 np0005466031 NetworkManager[44907]: <info>  [1759410242.6081] manager: (tapbec9e2c4-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Oct  2 09:04:02 np0005466031 kernel: tapbec9e2c4-90: entered promiscuous mode
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.611 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbec9e2c4-90, col_values=(('external_ids', {'iface-id': '6e089b66-193b-421b-8274-489bc49dd155'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:02 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:02Z|00719|binding|INFO|Releasing lport 6e089b66-193b-421b-8274-489bc49dd155 from this chassis (sb_readonly=0)
Oct  2 09:04:02 np0005466031 nova_compute[235803]: 2025-10-02 13:04:02.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.629 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bec9e2c4-93a6-4b64-8993-f7e5c684995a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bec9e2c4-93a6-4b64-8993-f7e5c684995a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.630 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5998f2c3-9bc1-4b37-b880-b92e6d28c85a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.631 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-bec9e2c4-93a6-4b64-8993-f7e5c684995a
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/bec9e2c4-93a6-4b64-8993-f7e5c684995a.pid.haproxy
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID bec9e2c4-93a6-4b64-8993-f7e5c684995a
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:04:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:02.633 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'env', 'PROCESS_TAG=haproxy-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bec9e2c4-93a6-4b64-8993-f7e5c684995a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:04:02 np0005466031 podman[315016]: 2025-10-02 13:04:02.993743789 +0000 UTC m=+0.049852207 container create 25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:04:03 np0005466031 systemd[1]: Started libpod-conmon-25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f.scope.
Oct  2 09:04:03 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:04:03 np0005466031 podman[315016]: 2025-10-02 13:04:02.966035331 +0000 UTC m=+0.022143769 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:04:03 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/245dbab066619af2db053fd4be12d8a5ff847b9de82e8fce849bbc105c4e7835/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:04:03 np0005466031 podman[315016]: 2025-10-02 13:04:03.090350612 +0000 UTC m=+0.146459050 container init 25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 09:04:03 np0005466031 podman[315016]: 2025-10-02 13:04:03.096346425 +0000 UTC m=+0.152454843 container start 25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:04:03 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[315032]: [NOTICE]   (315036) : New worker (315038) forked
Oct  2 09:04:03 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[315032]: [NOTICE]   (315036) : Loading success.
Oct  2 09:04:03 np0005466031 nova_compute[235803]: 2025-10-02 13:04:03.218 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for 04c07020-71d5-4a5c-9f1b-cab14e08e014 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 09:04:03 np0005466031 nova_compute[235803]: 2025-10-02 13:04:03.218 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410243.2176218, 04c07020-71d5-4a5c-9f1b-cab14e08e014 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:04:03 np0005466031 nova_compute[235803]: 2025-10-02 13:04:03.218 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:04:03 np0005466031 nova_compute[235803]: 2025-10-02 13:04:03.221 2 DEBUG nova.compute.manager [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:04:03 np0005466031 nova_compute[235803]: 2025-10-02 13:04:03.224 2 INFO nova.virt.libvirt.driver [-] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Instance rebooted successfully.#033[00m
Oct  2 09:04:03 np0005466031 nova_compute[235803]: 2025-10-02 13:04:03.225 2 DEBUG nova.compute.manager [None req-4de92219-f78a-4c79-ad37-f656457e9254 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:03 np0005466031 nova_compute[235803]: 2025-10-02 13:04:03.250 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:03 np0005466031 nova_compute[235803]: 2025-10-02 13:04:03.253 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:04:03 np0005466031 nova_compute[235803]: 2025-10-02 13:04:03.276 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Oct  2 09:04:03 np0005466031 nova_compute[235803]: 2025-10-02 13:04:03.276 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410243.2201316, 04c07020-71d5-4a5c-9f1b-cab14e08e014 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:04:03 np0005466031 nova_compute[235803]: 2025-10-02 13:04:03.277 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] VM Started (Lifecycle Event)#033[00m
Oct  2 09:04:03 np0005466031 nova_compute[235803]: 2025-10-02 13:04:03.298 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:03 np0005466031 nova_compute[235803]: 2025-10-02 13:04:03.302 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:04:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:04.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:04 np0005466031 nova_compute[235803]: 2025-10-02 13:04:04.135 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:04 np0005466031 nova_compute[235803]: 2025-10-02 13:04:04.135 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:04 np0005466031 nova_compute[235803]: 2025-10-02 13:04:04.150 2 DEBUG nova.compute.manager [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:04:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:04.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:04 np0005466031 nova_compute[235803]: 2025-10-02 13:04:04.584 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:04 np0005466031 nova_compute[235803]: 2025-10-02 13:04:04.585 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:04 np0005466031 nova_compute[235803]: 2025-10-02 13:04:04.592 2 DEBUG nova.virt.hardware [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:04:04 np0005466031 nova_compute[235803]: 2025-10-02 13:04:04.593 2 INFO nova.compute.claims [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:04:04 np0005466031 podman[315048]: 2025-10-02 13:04:04.624864633 +0000 UTC m=+0.055953713 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:04:04 np0005466031 podman[315049]: 2025-10-02 13:04:04.66290723 +0000 UTC m=+0.091680003 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  2 09:04:04 np0005466031 nova_compute[235803]: 2025-10-02 13:04:04.785 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2200722556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.292 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.297 2 DEBUG nova.compute.provider_tree [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.311 2 DEBUG nova.scheduler.client.report [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.332 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.332 2 DEBUG nova.compute.manager [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.380 2 DEBUG nova.compute.manager [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.380 2 DEBUG nova.network.neutron [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.401 2 INFO nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.417 2 DEBUG nova.compute.manager [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.506 2 DEBUG nova.compute.manager [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.507 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.508 2 INFO nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Creating image(s)#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.533 2 DEBUG nova.storage.rbd_utils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.566 2 DEBUG nova.storage.rbd_utils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.594 2 DEBUG nova.storage.rbd_utils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.598 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.677 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.678 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.678 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.680 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.704 2 DEBUG nova.storage.rbd_utils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.707 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:04:05 np0005466031 nova_compute[235803]: 2025-10-02 13:04:05.806 2 DEBUG nova.policy [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:04:06 np0005466031 nova_compute[235803]: 2025-10-02 13:04:06.055 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:04:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:06.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:06 np0005466031 nova_compute[235803]: 2025-10-02 13:04:06.127 2 DEBUG nova.storage.rbd_utils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 09:04:06 np0005466031 nova_compute[235803]: 2025-10-02 13:04:06.249 2 DEBUG nova.objects.instance [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid cb892d5f-0907-47e7-94e9-5903cdb0cd5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:04:06 np0005466031 nova_compute[235803]: 2025-10-02 13:04:06.265 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 09:04:06 np0005466031 nova_compute[235803]: 2025-10-02 13:04:06.266 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Ensure instance console log exists: /var/lib/nova/instances/cb892d5f-0907-47e7-94e9-5903cdb0cd5c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 09:04:06 np0005466031 nova_compute[235803]: 2025-10-02 13:04:06.267 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:06 np0005466031 nova_compute[235803]: 2025-10-02 13:04:06.268 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:06 np0005466031 nova_compute[235803]: 2025-10-02 13:04:06.268 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:04:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:06.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:04:07 np0005466031 nova_compute[235803]: 2025-10-02 13:04:07.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:07 np0005466031 nova_compute[235803]: 2025-10-02 13:04:07.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:04:07 np0005466031 nova_compute[235803]: 2025-10-02 13:04:07.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:04:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:08.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:08.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:08 np0005466031 nova_compute[235803]: 2025-10-02 13:04:08.807 2 DEBUG nova.compute.manager [req-76300e17-2e7d-42dc-83db-e99b6d5034b7 req-73f2ffa9-b954-4bd2-97e2-f15f7d295643 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received event network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:04:08 np0005466031 nova_compute[235803]: 2025-10-02 13:04:08.808 2 DEBUG oslo_concurrency.lockutils [req-76300e17-2e7d-42dc-83db-e99b6d5034b7 req-73f2ffa9-b954-4bd2-97e2-f15f7d295643 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:08 np0005466031 nova_compute[235803]: 2025-10-02 13:04:08.808 2 DEBUG oslo_concurrency.lockutils [req-76300e17-2e7d-42dc-83db-e99b6d5034b7 req-73f2ffa9-b954-4bd2-97e2-f15f7d295643 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:08 np0005466031 nova_compute[235803]: 2025-10-02 13:04:08.809 2 DEBUG oslo_concurrency.lockutils [req-76300e17-2e7d-42dc-83db-e99b6d5034b7 req-73f2ffa9-b954-4bd2-97e2-f15f7d295643 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:08 np0005466031 nova_compute[235803]: 2025-10-02 13:04:08.809 2 DEBUG nova.compute.manager [req-76300e17-2e7d-42dc-83db-e99b6d5034b7 req-73f2ffa9-b954-4bd2-97e2-f15f7d295643 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] No waiting events found dispatching network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:04:08 np0005466031 nova_compute[235803]: 2025-10-02 13:04:08.809 2 WARNING nova.compute.manager [req-76300e17-2e7d-42dc-83db-e99b6d5034b7 req-73f2ffa9-b954-4bd2-97e2-f15f7d295643 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received unexpected event network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 for instance with vm_state active and task_state None.
Oct  2 09:04:08 np0005466031 nova_compute[235803]: 2025-10-02 13:04:08.861 2 DEBUG nova.network.neutron [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Successfully created port: dfaecef4-f9c7-4386-8395-afcb6cc7b23f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 09:04:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:09 np0005466031 nova_compute[235803]: 2025-10-02 13:04:09.951 2 DEBUG nova.network.neutron [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Successfully updated port: dfaecef4-f9c7-4386-8395-afcb6cc7b23f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 09:04:09 np0005466031 nova_compute[235803]: 2025-10-02 13:04:09.975 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-cb892d5f-0907-47e7-94e9-5903cdb0cd5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:04:09 np0005466031 nova_compute[235803]: 2025-10-02 13:04:09.976 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-cb892d5f-0907-47e7-94e9-5903cdb0cd5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:04:09 np0005466031 nova_compute[235803]: 2025-10-02 13:04:09.976 2 DEBUG nova.network.neutron [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 09:04:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:10.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:10 np0005466031 nova_compute[235803]: 2025-10-02 13:04:10.136 2 DEBUG nova.network.neutron [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 09:04:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:10.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:10 np0005466031 nova_compute[235803]: 2025-10-02 13:04:10.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:10 np0005466031 nova_compute[235803]: 2025-10-02 13:04:10.925 2 DEBUG nova.compute.manager [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received event network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:04:10 np0005466031 nova_compute[235803]: 2025-10-02 13:04:10.926 2 DEBUG oslo_concurrency.lockutils [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:10 np0005466031 nova_compute[235803]: 2025-10-02 13:04:10.926 2 DEBUG oslo_concurrency.lockutils [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:10 np0005466031 nova_compute[235803]: 2025-10-02 13:04:10.926 2 DEBUG oslo_concurrency.lockutils [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:10 np0005466031 nova_compute[235803]: 2025-10-02 13:04:10.927 2 DEBUG nova.compute.manager [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] No waiting events found dispatching network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:04:10 np0005466031 nova_compute[235803]: 2025-10-02 13:04:10.927 2 WARNING nova.compute.manager [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received unexpected event network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 for instance with vm_state active and task_state None.
Oct  2 09:04:10 np0005466031 nova_compute[235803]: 2025-10-02 13:04:10.927 2 DEBUG nova.compute.manager [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Received event network-changed-dfaecef4-f9c7-4386-8395-afcb6cc7b23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:04:10 np0005466031 nova_compute[235803]: 2025-10-02 13:04:10.927 2 DEBUG nova.compute.manager [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Refreshing instance network info cache due to event network-changed-dfaecef4-f9c7-4386-8395-afcb6cc7b23f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:04:10 np0005466031 nova_compute[235803]: 2025-10-02 13:04:10.927 2 DEBUG oslo_concurrency.lockutils [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-cb892d5f-0907-47e7-94e9-5903cdb0cd5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:04:11 np0005466031 podman[315334]: 2025-10-02 13:04:11.626253976 +0000 UTC m=+0.049807246 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:04:11 np0005466031 podman[315333]: 2025-10-02 13:04:11.629693135 +0000 UTC m=+0.058374503 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.814 2 DEBUG nova.network.neutron [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Updating instance_info_cache with network_info: [{"id": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "address": "fa:16:3e:62:3b:e6", "network": {"id": "95353de3-ad96-46d0-a73f-c8bf0356983c", "bridge": "br-int", "label": "tempest-network-smoke--1089909448", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfaecef4-f9", "ovs_interfaceid": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.842 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-cb892d5f-0907-47e7-94e9-5903cdb0cd5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.842 2 DEBUG nova.compute.manager [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Instance network_info: |[{"id": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "address": "fa:16:3e:62:3b:e6", "network": {"id": "95353de3-ad96-46d0-a73f-c8bf0356983c", "bridge": "br-int", "label": "tempest-network-smoke--1089909448", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfaecef4-f9", "ovs_interfaceid": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.842 2 DEBUG oslo_concurrency.lockutils [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-cb892d5f-0907-47e7-94e9-5903cdb0cd5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.843 2 DEBUG nova.network.neutron [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Refreshing network info cache for port dfaecef4-f9c7-4386-8395-afcb6cc7b23f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.845 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Start _get_guest_xml network_info=[{"id": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "address": "fa:16:3e:62:3b:e6", "network": {"id": "95353de3-ad96-46d0-a73f-c8bf0356983c", "bridge": "br-int", "label": "tempest-network-smoke--1089909448", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfaecef4-f9", "ovs_interfaceid": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.850 2 WARNING nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.855 2 DEBUG nova.virt.libvirt.host [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.855 2 DEBUG nova.virt.libvirt.host [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.862 2 DEBUG nova.virt.libvirt.host [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.863 2 DEBUG nova.virt.libvirt.host [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.864 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.864 2 DEBUG nova.virt.hardware [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.865 2 DEBUG nova.virt.hardware [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.865 2 DEBUG nova.virt.hardware [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.865 2 DEBUG nova.virt.hardware [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.866 2 DEBUG nova.virt.hardware [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.866 2 DEBUG nova.virt.hardware [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.866 2 DEBUG nova.virt.hardware [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.866 2 DEBUG nova.virt.hardware [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.867 2 DEBUG nova.virt.hardware [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.867 2 DEBUG nova.virt.hardware [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.867 2 DEBUG nova.virt.hardware [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:04:11 np0005466031 nova_compute[235803]: 2025-10-02 13:04:11.870 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:12.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:12.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:04:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/647808684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.363 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.392 2 DEBUG nova.storage.rbd_utils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.398 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:04:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1493426402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.854 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.856 2 DEBUG nova.virt.libvirt.vif [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1438879093',display_name='tempest-TestNetworkBasicOps-server-1438879093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1438879093',id=183,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBILv6FdTSAvyymUNT7+kFPYh8XkeLW0mSI5Iclk4+UUSGYVrSKA1ikodl+eTDXPjlvIxLRi6fJ4xtLjQqk9lT+lc0TDH6xeHzyhqW1IJ50dJGVI/tWnRRfVIrWPwYFZLDg==',key_name='tempest-TestNetworkBasicOps-1862097483',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-f8binyyv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:05Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=cb892d5f-0907-47e7-94e9-5903cdb0cd5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "address": "fa:16:3e:62:3b:e6", "network": {"id": "95353de3-ad96-46d0-a73f-c8bf0356983c", "bridge": "br-int", "label": "tempest-network-smoke--1089909448", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfaecef4-f9", "ovs_interfaceid": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.856 2 DEBUG nova.network.os_vif_util [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "address": "fa:16:3e:62:3b:e6", "network": {"id": "95353de3-ad96-46d0-a73f-c8bf0356983c", "bridge": "br-int", "label": "tempest-network-smoke--1089909448", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfaecef4-f9", "ovs_interfaceid": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.857 2 DEBUG nova.network.os_vif_util [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:e6,bridge_name='br-int',has_traffic_filtering=True,id=dfaecef4-f9c7-4386-8395-afcb6cc7b23f,network=Network(95353de3-ad96-46d0-a73f-c8bf0356983c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfaecef4-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.858 2 DEBUG nova.objects.instance [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb892d5f-0907-47e7-94e9-5903cdb0cd5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.880 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  <uuid>cb892d5f-0907-47e7-94e9-5903cdb0cd5c</uuid>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  <name>instance-000000b7</name>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestNetworkBasicOps-server-1438879093</nova:name>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:04:11</nova:creationTime>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <nova:port uuid="dfaecef4-f9c7-4386-8395-afcb6cc7b23f">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <entry name="serial">cb892d5f-0907-47e7-94e9-5903cdb0cd5c</entry>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <entry name="uuid">cb892d5f-0907-47e7-94e9-5903cdb0cd5c</entry>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk.config">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:62:3b:e6"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <target dev="tapdfaecef4-f9"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/cb892d5f-0907-47e7-94e9-5903cdb0cd5c/console.log" append="off"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:04:12 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:04:12 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:04:12 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:04:12 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.886 2 DEBUG nova.compute.manager [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Preparing to wait for external event network-vif-plugged-dfaecef4-f9c7-4386-8395-afcb6cc7b23f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.887 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.889 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.889 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.890 2 DEBUG nova.virt.libvirt.vif [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1438879093',display_name='tempest-TestNetworkBasicOps-server-1438879093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1438879093',id=183,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBILv6FdTSAvyymUNT7+kFPYh8XkeLW0mSI5Iclk4+UUSGYVrSKA1ikodl+eTDXPjlvIxLRi6fJ4xtLjQqk9lT+lc0TDH6xeHzyhqW1IJ50dJGVI/tWnRRfVIrWPwYFZLDg==',key_name='tempest-TestNetworkBasicOps-1862097483',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-f8binyyv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:05Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=cb892d5f-0907-47e7-94e9-5903cdb0cd5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "address": "fa:16:3e:62:3b:e6", "network": {"id": "95353de3-ad96-46d0-a73f-c8bf0356983c", "bridge": "br-int", "label": "tempest-network-smoke--1089909448", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfaecef4-f9", "ovs_interfaceid": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.890 2 DEBUG nova.network.os_vif_util [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "address": "fa:16:3e:62:3b:e6", "network": {"id": "95353de3-ad96-46d0-a73f-c8bf0356983c", "bridge": "br-int", "label": "tempest-network-smoke--1089909448", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfaecef4-f9", "ovs_interfaceid": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.891 2 DEBUG nova.network.os_vif_util [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:e6,bridge_name='br-int',has_traffic_filtering=True,id=dfaecef4-f9c7-4386-8395-afcb6cc7b23f,network=Network(95353de3-ad96-46d0-a73f-c8bf0356983c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfaecef4-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.892 2 DEBUG os_vif [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:e6,bridge_name='br-int',has_traffic_filtering=True,id=dfaecef4-f9c7-4386-8395-afcb6cc7b23f,network=Network(95353de3-ad96-46d0-a73f-c8bf0356983c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfaecef4-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.897 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfaecef4-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdfaecef4-f9, col_values=(('external_ids', {'iface-id': 'dfaecef4-f9c7-4386-8395-afcb6cc7b23f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:3b:e6', 'vm-uuid': 'cb892d5f-0907-47e7-94e9-5903cdb0cd5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:12 np0005466031 NetworkManager[44907]: <info>  [1759410252.9415] manager: (tapdfaecef4-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:12 np0005466031 nova_compute[235803]: 2025-10-02 13:04:12.949 2 INFO os_vif [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:e6,bridge_name='br-int',has_traffic_filtering=True,id=dfaecef4-f9c7-4386-8395-afcb6cc7b23f,network=Network(95353de3-ad96-46d0-a73f-c8bf0356983c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfaecef4-f9')#033[00m
Oct  2 09:04:13 np0005466031 nova_compute[235803]: 2025-10-02 13:04:13.016 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:04:13 np0005466031 nova_compute[235803]: 2025-10-02 13:04:13.016 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:04:13 np0005466031 nova_compute[235803]: 2025-10-02 13:04:13.016 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:62:3b:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:04:13 np0005466031 nova_compute[235803]: 2025-10-02 13:04:13.017 2 INFO nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Using config drive#033[00m
Oct  2 09:04:13 np0005466031 nova_compute[235803]: 2025-10-02 13:04:13.042 2 DEBUG nova.storage.rbd_utils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:14.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.274 2 INFO nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Creating config drive at /var/lib/nova/instances/cb892d5f-0907-47e7-94e9-5903cdb0cd5c/disk.config#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.279 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb892d5f-0907-47e7-94e9-5903cdb0cd5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbg130jpq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:14.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.310 2 DEBUG nova.network.neutron [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Updated VIF entry in instance network info cache for port dfaecef4-f9c7-4386-8395-afcb6cc7b23f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.312 2 DEBUG nova.network.neutron [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Updating instance_info_cache with network_info: [{"id": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "address": "fa:16:3e:62:3b:e6", "network": {"id": "95353de3-ad96-46d0-a73f-c8bf0356983c", "bridge": "br-int", "label": "tempest-network-smoke--1089909448", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfaecef4-f9", "ovs_interfaceid": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.329 2 DEBUG oslo_concurrency.lockutils [req-0eeb0baf-1bf6-4d04-93bd-96f2cdaf40e0 req-ceb37f7a-7b1e-44d5-a41e-bb8cb218899b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-cb892d5f-0907-47e7-94e9-5903cdb0cd5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.418 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb892d5f-0907-47e7-94e9-5903cdb0cd5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbg130jpq" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.449 2 DEBUG nova.storage.rbd_utils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.453 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb892d5f-0907-47e7-94e9-5903cdb0cd5c/disk.config cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.611 2 DEBUG oslo_concurrency.processutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb892d5f-0907-47e7-94e9-5903cdb0cd5c/disk.config cb892d5f-0907-47e7-94e9-5903cdb0cd5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.612 2 INFO nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Deleting local config drive /var/lib/nova/instances/cb892d5f-0907-47e7-94e9-5903cdb0cd5c/disk.config because it was imported into RBD.#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:14 np0005466031 kernel: tapdfaecef4-f9: entered promiscuous mode
Oct  2 09:04:14 np0005466031 NetworkManager[44907]: <info>  [1759410254.6665] manager: (tapdfaecef4-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:14 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:14Z|00720|binding|INFO|Claiming lport dfaecef4-f9c7-4386-8395-afcb6cc7b23f for this chassis.
Oct  2 09:04:14 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:14Z|00721|binding|INFO|dfaecef4-f9c7-4386-8395-afcb6cc7b23f: Claiming fa:16:3e:62:3b:e6 10.100.0.13
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.672 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:3b:e6 10.100.0.13'], port_security=['fa:16:3e:62:3b:e6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cb892d5f-0907-47e7-94e9-5903cdb0cd5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95353de3-ad96-46d0-a73f-c8bf0356983c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33395d74-6330-4693-9dd8-d3a57ede4814', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b963636b-2f1b-4fc4-bf02-66c5b332b062, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=dfaecef4-f9c7-4386-8395-afcb6cc7b23f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.675 141898 INFO neutron.agent.ovn.metadata.agent [-] Port dfaecef4-f9c7-4386-8395-afcb6cc7b23f in datapath 95353de3-ad96-46d0-a73f-c8bf0356983c bound to our chassis#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.677 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95353de3-ad96-46d0-a73f-c8bf0356983c#033[00m
Oct  2 09:04:14 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:14Z|00722|binding|INFO|Setting lport dfaecef4-f9c7-4386-8395-afcb6cc7b23f ovn-installed in OVS
Oct  2 09:04:14 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:14Z|00723|binding|INFO|Setting lport dfaecef4-f9c7-4386-8395-afcb6cc7b23f up in Southbound
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.692 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b240a908-4c67-472a-87a1-b86a438cefc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.693 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap95353de3-a1 in ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.694 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap95353de3-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.695 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0b59b5ff-f90d-4598-93cf-bd105c415e15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.695 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[15f1cb24-6a4d-4446-83e7-956ec365b090]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 systemd-machined[192227]: New machine qemu-83-instance-000000b7.
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.706 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[e875073d-13f3-459c-af9d-dc9960470e57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.720 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a688bf8f-9056-4573-8765-2c808ce4041c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 systemd[1]: Started Virtual Machine qemu-83-instance-000000b7.
Oct  2 09:04:14 np0005466031 systemd-udevd[315511]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:04:14 np0005466031 NetworkManager[44907]: <info>  [1759410254.7476] device (tapdfaecef4-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:04:14 np0005466031 NetworkManager[44907]: <info>  [1759410254.7488] device (tapdfaecef4-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.752 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[08546e9a-c1f4-4152-8c42-1b36ba772224]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.756 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4104b506-5392-45a4-8ef3-da6d117e5af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 NetworkManager[44907]: <info>  [1759410254.7572] manager: (tap95353de3-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/327)
Oct  2 09:04:14 np0005466031 systemd-udevd[315514]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.793 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[690beca0-cd3f-4e19-a0ef-4d83d781b165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.797 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb63968-2b83-4922-b16c-7abf898ae2dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 NetworkManager[44907]: <info>  [1759410254.8193] device (tap95353de3-a0): carrier: link connected
Oct  2 09:04:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.825 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd46ee8-24e3-4f2c-9d8f-852c596aa69e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.844 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4522f04b-b1d2-4c53-8679-f1aac420d4ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95353de3-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:24:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 223], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 811041, 'reachable_time': 24531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315540, 'error': None, 'target': 'ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.867 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e89060e4-a2ec-4408-8ec6-44328b05e2ff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe39:24c7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 811041, 'tstamp': 811041}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315541, 'error': None, 'target': 'ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.878 2 DEBUG nova.compute.manager [req-995f087a-9722-4795-aee8-c0f3a0203985 req-2585704b-93d5-4fe8-b827-49ebbebd4e3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Received event network-vif-plugged-dfaecef4-f9c7-4386-8395-afcb6cc7b23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.878 2 DEBUG oslo_concurrency.lockutils [req-995f087a-9722-4795-aee8-c0f3a0203985 req-2585704b-93d5-4fe8-b827-49ebbebd4e3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.879 2 DEBUG oslo_concurrency.lockutils [req-995f087a-9722-4795-aee8-c0f3a0203985 req-2585704b-93d5-4fe8-b827-49ebbebd4e3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.879 2 DEBUG oslo_concurrency.lockutils [req-995f087a-9722-4795-aee8-c0f3a0203985 req-2585704b-93d5-4fe8-b827-49ebbebd4e3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:14 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.879 2 DEBUG nova.compute.manager [req-995f087a-9722-4795-aee8-c0f3a0203985 req-2585704b-93d5-4fe8-b827-49ebbebd4e3e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Processing event network-vif-plugged-dfaecef4-f9c7-4386-8395-afcb6cc7b23f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.890 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[167cf7b6-14fe-4501-9d10-056afcb41810]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95353de3-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:24:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 223], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 811041, 'reachable_time': 24531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315542, 'error': None, 'target': 'ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.924 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[974d4f57-d98e-454d-82d6-daf9bf9e3625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.993 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8cffa960-fe0c-4baf-8034-60b5c90692a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.996 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95353de3-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.996 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:04:14 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:14.996 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95353de3-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:14 np0005466031 NetworkManager[44907]: <info>  [1759410254.9991] manager: (tap95353de3-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Oct  2 09:04:14 np0005466031 kernel: tap95353de3-a0: entered promiscuous mode
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:14.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:15.001 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95353de3-a0, col_values=(('external_ids', {'iface-id': 'bdf2c131-530f-4b25-98ee-832bcaebc620'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:15 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:15Z|00724|binding|INFO|Releasing lport bdf2c131-530f-4b25-98ee-832bcaebc620 from this chassis (sb_readonly=0)
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:15.020 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/95353de3-ad96-46d0-a73f-c8bf0356983c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/95353de3-ad96-46d0-a73f-c8bf0356983c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:15.021 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fedb5968-b590-436f-8621-de52a899a5ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:15.022 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-95353de3-ad96-46d0-a73f-c8bf0356983c
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/95353de3-ad96-46d0-a73f-c8bf0356983c.pid.haproxy
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 95353de3-ad96-46d0-a73f-c8bf0356983c
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:04:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:15.025 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c', 'env', 'PROCESS_TAG=haproxy-95353de3-ad96-46d0-a73f-c8bf0356983c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/95353de3-ad96-46d0-a73f-c8bf0356983c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:04:15 np0005466031 podman[315613]: 2025-10-02 13:04:15.460319652 +0000 UTC m=+0.049153677 container create d203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:04:15 np0005466031 systemd[1]: Started libpod-conmon-d203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff.scope.
Oct  2 09:04:15 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:04:15 np0005466031 podman[315613]: 2025-10-02 13:04:15.435709063 +0000 UTC m=+0.024543118 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:04:15 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/737fea02bb1c24cc2e62227bbbf4915578bcb6876aef2aff6730dfdbbd2ae03f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:04:15 np0005466031 podman[315613]: 2025-10-02 13:04:15.547111323 +0000 UTC m=+0.135945358 container init d203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 09:04:15 np0005466031 podman[315613]: 2025-10-02 13:04:15.554127145 +0000 UTC m=+0.142961180 container start d203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 09:04:15 np0005466031 neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c[315628]: [NOTICE]   (315632) : New worker (315634) forked
Oct  2 09:04:15 np0005466031 neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c[315628]: [NOTICE]   (315632) : Loading success.
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.736 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410255.7362618, cb892d5f-0907-47e7-94e9-5903cdb0cd5c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.737 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] VM Started (Lifecycle Event)#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.739 2 DEBUG nova.compute.manager [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.743 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.746 2 INFO nova.virt.libvirt.driver [-] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Instance spawned successfully.#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.747 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.769 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.775 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.778 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.778 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.779 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.779 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.779 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.780 2 DEBUG nova.virt.libvirt.driver [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.810 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.810 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410255.736373, cb892d5f-0907-47e7-94e9-5903cdb0cd5c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.810 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.844 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.847 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410255.741888, cb892d5f-0907-47e7-94e9-5903cdb0cd5c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.847 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.862 2 INFO nova.compute.manager [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Took 10.36 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.862 2 DEBUG nova.compute.manager [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.870 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.872 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.900 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.923 2 INFO nova.compute.manager [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Took 11.72 seconds to build instance.#033[00m
Oct  2 09:04:15 np0005466031 nova_compute[235803]: 2025-10-02 13:04:15.941 2 DEBUG oslo_concurrency.lockutils [None req-d9100aef-7cd1-4f71-9d79-a0bc7bc10d43 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:16.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:16.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:16 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:16Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:fc:c7 10.100.0.3
Oct  2 09:04:17 np0005466031 nova_compute[235803]: 2025-10-02 13:04:17.042 2 DEBUG nova.compute.manager [req-6f7a22e2-706e-4044-97b0-df4f86a655dc req-aa48f664-e1f4-4348-aeb6-bb4a8a2bb410 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Received event network-vif-plugged-dfaecef4-f9c7-4386-8395-afcb6cc7b23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:17 np0005466031 nova_compute[235803]: 2025-10-02 13:04:17.042 2 DEBUG oslo_concurrency.lockutils [req-6f7a22e2-706e-4044-97b0-df4f86a655dc req-aa48f664-e1f4-4348-aeb6-bb4a8a2bb410 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:17 np0005466031 nova_compute[235803]: 2025-10-02 13:04:17.043 2 DEBUG oslo_concurrency.lockutils [req-6f7a22e2-706e-4044-97b0-df4f86a655dc req-aa48f664-e1f4-4348-aeb6-bb4a8a2bb410 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:17 np0005466031 nova_compute[235803]: 2025-10-02 13:04:17.043 2 DEBUG oslo_concurrency.lockutils [req-6f7a22e2-706e-4044-97b0-df4f86a655dc req-aa48f664-e1f4-4348-aeb6-bb4a8a2bb410 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:17 np0005466031 nova_compute[235803]: 2025-10-02 13:04:17.043 2 DEBUG nova.compute.manager [req-6f7a22e2-706e-4044-97b0-df4f86a655dc req-aa48f664-e1f4-4348-aeb6-bb4a8a2bb410 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] No waiting events found dispatching network-vif-plugged-dfaecef4-f9c7-4386-8395-afcb6cc7b23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:04:17 np0005466031 nova_compute[235803]: 2025-10-02 13:04:17.043 2 WARNING nova.compute.manager [req-6f7a22e2-706e-4044-97b0-df4f86a655dc req-aa48f664-e1f4-4348-aeb6-bb4a8a2bb410 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Received unexpected event network-vif-plugged-dfaecef4-f9c7-4386-8395-afcb6cc7b23f for instance with vm_state active and task_state None.#033[00m
Oct  2 09:04:17 np0005466031 nova_compute[235803]: 2025-10-02 13:04:17.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:18.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:18.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:18 np0005466031 nova_compute[235803]: 2025-10-02 13:04:18.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:18 np0005466031 nova_compute[235803]: 2025-10-02 13:04:18.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:04:18 np0005466031 nova_compute[235803]: 2025-10-02 13:04:18.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:04:18 np0005466031 nova_compute[235803]: 2025-10-02 13:04:18.974 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:04:18 np0005466031 nova_compute[235803]: 2025-10-02 13:04:18.974 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:04:18 np0005466031 nova_compute[235803]: 2025-10-02 13:04:18.975 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:04:18 np0005466031 nova_compute[235803]: 2025-10-02 13:04:18.975 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 04c07020-71d5-4a5c-9f1b-cab14e08e014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:20.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:04:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:20.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:04:20 np0005466031 nova_compute[235803]: 2025-10-02 13:04:20.418 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Updating instance_info_cache with network_info: [{"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:20 np0005466031 nova_compute[235803]: 2025-10-02 13:04:20.440 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:04:20 np0005466031 nova_compute[235803]: 2025-10-02 13:04:20.442 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:04:20 np0005466031 nova_compute[235803]: 2025-10-02 13:04:20.443 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:20 np0005466031 nova_compute[235803]: 2025-10-02 13:04:20.443 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:20 np0005466031 nova_compute[235803]: 2025-10-02 13:04:20.466 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:20 np0005466031 nova_compute[235803]: 2025-10-02 13:04:20.468 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:20 np0005466031 nova_compute[235803]: 2025-10-02 13:04:20.469 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:20 np0005466031 nova_compute[235803]: 2025-10-02 13:04:20.469 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:04:20 np0005466031 nova_compute[235803]: 2025-10-02 13:04:20.470 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:20 np0005466031 nova_compute[235803]: 2025-10-02 13:04:20.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1231900243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:20 np0005466031 nova_compute[235803]: 2025-10-02 13:04:20.949 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.042 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.043 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.045 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.046 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:21.111 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:04:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:21.112 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.250 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.251 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3891MB free_disk=20.834693908691406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.252 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.252 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.285 2 DEBUG nova.compute.manager [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Received event network-changed-dfaecef4-f9c7-4386-8395-afcb6cc7b23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.286 2 DEBUG nova.compute.manager [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Refreshing instance network info cache due to event network-changed-dfaecef4-f9c7-4386-8395-afcb6cc7b23f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.286 2 DEBUG oslo_concurrency.lockutils [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-cb892d5f-0907-47e7-94e9-5903cdb0cd5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.287 2 DEBUG oslo_concurrency.lockutils [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-cb892d5f-0907-47e7-94e9-5903cdb0cd5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.287 2 DEBUG nova.network.neutron [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Refreshing network info cache for port dfaecef4-f9c7-4386-8395-afcb6cc7b23f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.317 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 04c07020-71d5-4a5c-9f1b-cab14e08e014 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.318 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance cb892d5f-0907-47e7-94e9-5903cdb0cd5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.318 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.318 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.362 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2877419818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.819 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.824 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.844 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.873 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:04:21 np0005466031 nova_compute[235803]: 2025-10-02 13:04:21.873 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:22 np0005466031 nova_compute[235803]: 2025-10-02 13:04:22.067 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:22.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:22.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:22 np0005466031 nova_compute[235803]: 2025-10-02 13:04:22.395 2 INFO nova.compute.manager [None req-fd664390-4f54-480e-95ce-fdd4706b5135 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Get console output#033[00m
Oct  2 09:04:22 np0005466031 nova_compute[235803]: 2025-10-02 13:04:22.400 18228 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.260 2 DEBUG nova.network.neutron [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Updated VIF entry in instance network info cache for port dfaecef4-f9c7-4386-8395-afcb6cc7b23f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.261 2 DEBUG nova.network.neutron [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Updating instance_info_cache with network_info: [{"id": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "address": "fa:16:3e:62:3b:e6", "network": {"id": "95353de3-ad96-46d0-a73f-c8bf0356983c", "bridge": "br-int", "label": "tempest-network-smoke--1089909448", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfaecef4-f9", "ovs_interfaceid": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.287 2 DEBUG oslo_concurrency.lockutils [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-cb892d5f-0907-47e7-94e9-5903cdb0cd5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.553 2 DEBUG nova.compute.manager [req-88f5dc62-b28e-45ab-8b3f-cba108cfd437 req-3529ce97-b66f-4df3-bf63-85624ad26db5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received event network-changed-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.553 2 DEBUG nova.compute.manager [req-88f5dc62-b28e-45ab-8b3f-cba108cfd437 req-3529ce97-b66f-4df3-bf63-85624ad26db5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Refreshing instance network info cache due to event network-changed-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.554 2 DEBUG oslo_concurrency.lockutils [req-88f5dc62-b28e-45ab-8b3f-cba108cfd437 req-3529ce97-b66f-4df3-bf63-85624ad26db5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.554 2 DEBUG oslo_concurrency.lockutils [req-88f5dc62-b28e-45ab-8b3f-cba108cfd437 req-3529ce97-b66f-4df3-bf63-85624ad26db5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.554 2 DEBUG nova.network.neutron [req-88f5dc62-b28e-45ab-8b3f-cba108cfd437 req-3529ce97-b66f-4df3-bf63-85624ad26db5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Refreshing network info cache for port 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.629 2 DEBUG oslo_concurrency.lockutils [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "04c07020-71d5-4a5c-9f1b-cab14e08e014" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.630 2 DEBUG oslo_concurrency.lockutils [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.630 2 DEBUG oslo_concurrency.lockutils [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.631 2 DEBUG oslo_concurrency.lockutils [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.631 2 DEBUG oslo_concurrency.lockutils [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.632 2 INFO nova.compute.manager [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Terminating instance#033[00m
Oct  2 09:04:23 np0005466031 nova_compute[235803]: 2025-10-02 13:04:23.633 2 DEBUG nova.compute.manager [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:04:24 np0005466031 kernel: tap0b22044a-cf (unregistering): left promiscuous mode
Oct  2 09:04:24 np0005466031 NetworkManager[44907]: <info>  [1759410264.1321] device (tap0b22044a-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:04:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:24.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:24 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:24Z|00725|binding|INFO|Releasing lport 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 from this chassis (sb_readonly=0)
Oct  2 09:04:24 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:24Z|00726|binding|INFO|Setting lport 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 down in Southbound
Oct  2 09:04:24 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:24Z|00727|binding|INFO|Removing iface tap0b22044a-cf ovn-installed in OVS
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.201 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:fc:c7 10.100.0.3'], port_security=['fa:16:3e:df:fc:c7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '04c07020-71d5-4a5c-9f1b-cab14e08e014', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1088a2ea-14f6-41e6-bbf1-6c7509c324e2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dada802-689c-4406-8944-09b70eba9ae8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.203 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 in datapath bec9e2c4-93a6-4b64-8993-f7e5c684995a unbound from our chassis#033[00m
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.204 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bec9e2c4-93a6-4b64-8993-f7e5c684995a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.205 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[72fa81d9-1674-4262-b03c-f1b6992ab346]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.205 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a namespace which is not needed anymore#033[00m
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:24 np0005466031 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b4.scope: Deactivated successfully.
Oct  2 09:04:24 np0005466031 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b4.scope: Consumed 13.927s CPU time.
Oct  2 09:04:24 np0005466031 systemd-machined[192227]: Machine qemu-82-instance-000000b4 terminated.
Oct  2 09:04:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:24.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.468 2 INFO nova.virt.libvirt.driver [-] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Instance destroyed successfully.#033[00m
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.468 2 DEBUG nova.objects.instance [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid 04c07020-71d5-4a5c-9f1b-cab14e08e014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:24 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[315032]: [NOTICE]   (315036) : haproxy version is 2.8.14-c23fe91
Oct  2 09:04:24 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[315032]: [NOTICE]   (315036) : path to executable is /usr/sbin/haproxy
Oct  2 09:04:24 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[315032]: [WARNING]  (315036) : Exiting Master process...
Oct  2 09:04:24 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[315032]: [WARNING]  (315036) : Exiting Master process...
Oct  2 09:04:24 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[315032]: [ALERT]    (315036) : Current worker (315038) exited with code 143 (Terminated)
Oct  2 09:04:24 np0005466031 neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a[315032]: [WARNING]  (315036) : All workers exited. Exiting... (0)
Oct  2 09:04:24 np0005466031 systemd[1]: libpod-25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f.scope: Deactivated successfully.
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.492 2 DEBUG nova.virt.libvirt.vif [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:03:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1545583047',display_name='tempest-TestNetworkAdvancedServerOps-server-1545583047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1545583047',id=180,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNqGiqvBYYQgcx8Ikwu0eTZOIhMMwxE6EFDSk2YFEplSUAvLfaz/Uj0dUiewPJq/5ERInECgxhZXsqalUYL+uLq+TdPPo9K8BMtPccFdyECaMZFinm6Te4479GTN81vtA==',key_name='tempest-TestNetworkAdvancedServerOps-1958524571',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:03:30Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-xtui4jjk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:04:03Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=04c07020-71d5-4a5c-9f1b-cab14e08e014,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.492 2 DEBUG nova.network.os_vif_util [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.493 2 DEBUG nova.network.os_vif_util [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.493 2 DEBUG os_vif [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.495 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b22044a-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:24 np0005466031 podman[315715]: 2025-10-02 13:04:24.497758087 +0000 UTC m=+0.200759006 container died 25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.504 2 INFO os_vif [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:fc:c7,bridge_name='br-int',has_traffic_filtering=True,id=0b22044a-cf3b-4bbe-ab44-dbcdd5becd06,network=Network(bec9e2c4-93a6-4b64-8993-f7e5c684995a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b22044a-cf')#033[00m
Oct  2 09:04:24 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f-userdata-shm.mount: Deactivated successfully.
Oct  2 09:04:24 np0005466031 systemd[1]: var-lib-containers-storage-overlay-245dbab066619af2db053fd4be12d8a5ff847b9de82e8fce849bbc105c4e7835-merged.mount: Deactivated successfully.
Oct  2 09:04:24 np0005466031 podman[315715]: 2025-10-02 13:04:24.566835187 +0000 UTC m=+0.269836106 container cleanup 25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  2 09:04:24 np0005466031 systemd[1]: libpod-conmon-25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f.scope: Deactivated successfully.
Oct  2 09:04:24 np0005466031 podman[315773]: 2025-10-02 13:04:24.643969159 +0000 UTC m=+0.051726861 container remove 25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.651 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[51d3ca15-ca92-4257-83ad-8bb9228786ea]: (4, ('Thu Oct  2 01:04:24 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a (25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f)\n25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f\nThu Oct  2 01:04:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a (25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f)\n25c295b785b18b4fe4e51970b666e0f1b2c59764995a3aff560e14ed62e7918f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.652 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5a00bb-0399-4e9c-9e54-62586052ce82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.653 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbec9e2c4-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:24 np0005466031 kernel: tapbec9e2c4-90: left promiscuous mode
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.662 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cbfede-77cf-41dc-be98-a4b1bb2470aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:24 np0005466031 nova_compute[235803]: 2025-10-02 13:04:24.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.691 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[69fc525d-586b-45cb-b4f7-777d77f46ba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.692 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4b0a5c-1a80-44d5-88c9-47cc9dd07b5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.710 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4910a8-396f-4e45-9ae5-4f70b3b7e62a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809798, 'reachable_time': 44789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315788, 'error': None, 'target': 'ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:24 np0005466031 systemd[1]: run-netns-ovnmeta\x2dbec9e2c4\x2d93a6\x2d4b64\x2d8993\x2df7e5c684995a.mount: Deactivated successfully.
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.717 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bec9e2c4-93a6-4b64-8993-f7e5c684995a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:04:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:24.717 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[22671773-1573-4070-b7dc-d018e129a0f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.077 2 INFO nova.virt.libvirt.driver [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Deleting instance files /var/lib/nova/instances/04c07020-71d5-4a5c-9f1b-cab14e08e014_del#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.077 2 INFO nova.virt.libvirt.driver [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Deletion of /var/lib/nova/instances/04c07020-71d5-4a5c-9f1b-cab14e08e014_del complete#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.124 2 INFO nova.compute.manager [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Took 1.49 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.124 2 DEBUG oslo.service.loopingcall [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.125 2 DEBUG nova.compute.manager [-] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.125 2 DEBUG nova.network.neutron [-] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.167 2 DEBUG nova.network.neutron [req-88f5dc62-b28e-45ab-8b3f-cba108cfd437 req-3529ce97-b66f-4df3-bf63-85624ad26db5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Updated VIF entry in instance network info cache for port 0b22044a-cf3b-4bbe-ab44-dbcdd5becd06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.168 2 DEBUG nova.network.neutron [req-88f5dc62-b28e-45ab-8b3f-cba108cfd437 req-3529ce97-b66f-4df3-bf63-85624ad26db5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Updating instance_info_cache with network_info: [{"id": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "address": "fa:16:3e:df:fc:c7", "network": {"id": "bec9e2c4-93a6-4b64-8993-f7e5c684995a", "bridge": "br-int", "label": "tempest-network-smoke--87786950", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b22044a-cf", "ovs_interfaceid": "0b22044a-cf3b-4bbe-ab44-dbcdd5becd06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.192 2 DEBUG oslo_concurrency.lockutils [req-88f5dc62-b28e-45ab-8b3f-cba108cfd437 req-3529ce97-b66f-4df3-bf63-85624ad26db5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-04c07020-71d5-4a5c-9f1b-cab14e08e014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.456 2 DEBUG nova.compute.manager [req-783b429e-4466-45cc-bca0-0e2920dbf591 req-6786e76d-95b1-4272-8f68-ac7288740d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received event network-vif-unplugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.456 2 DEBUG oslo_concurrency.lockutils [req-783b429e-4466-45cc-bca0-0e2920dbf591 req-6786e76d-95b1-4272-8f68-ac7288740d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.456 2 DEBUG oslo_concurrency.lockutils [req-783b429e-4466-45cc-bca0-0e2920dbf591 req-6786e76d-95b1-4272-8f68-ac7288740d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.456 2 DEBUG oslo_concurrency.lockutils [req-783b429e-4466-45cc-bca0-0e2920dbf591 req-6786e76d-95b1-4272-8f68-ac7288740d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.456 2 DEBUG nova.compute.manager [req-783b429e-4466-45cc-bca0-0e2920dbf591 req-6786e76d-95b1-4272-8f68-ac7288740d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] No waiting events found dispatching network-vif-unplugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.457 2 DEBUG nova.compute.manager [req-783b429e-4466-45cc-bca0-0e2920dbf591 req-6786e76d-95b1-4272-8f68-ac7288740d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received event network-vif-unplugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.457 2 DEBUG nova.compute.manager [req-783b429e-4466-45cc-bca0-0e2920dbf591 req-6786e76d-95b1-4272-8f68-ac7288740d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received event network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.457 2 DEBUG oslo_concurrency.lockutils [req-783b429e-4466-45cc-bca0-0e2920dbf591 req-6786e76d-95b1-4272-8f68-ac7288740d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.457 2 DEBUG oslo_concurrency.lockutils [req-783b429e-4466-45cc-bca0-0e2920dbf591 req-6786e76d-95b1-4272-8f68-ac7288740d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.457 2 DEBUG oslo_concurrency.lockutils [req-783b429e-4466-45cc-bca0-0e2920dbf591 req-6786e76d-95b1-4272-8f68-ac7288740d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.457 2 DEBUG nova.compute.manager [req-783b429e-4466-45cc-bca0-0e2920dbf591 req-6786e76d-95b1-4272-8f68-ac7288740d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] No waiting events found dispatching network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.457 2 WARNING nova.compute.manager [req-783b429e-4466-45cc-bca0-0e2920dbf591 req-6786e76d-95b1-4272-8f68-ac7288740d55 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received unexpected event network-vif-plugged-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:04:25 np0005466031 nova_compute[235803]: 2025-10-02 13:04:25.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:25.872 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:25.873 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:25.873 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.086 2 DEBUG nova.network.neutron [-] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.112 2 INFO nova.compute.manager [-] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Took 0.99 seconds to deallocate network for instance.#033[00m
Oct  2 09:04:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:26.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
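The beast access lines repeat roughly every two seconds from 192.168.122.100 and .102: load-balancer health probes doing anonymous `HEAD /` requests. When filtering or summarizing these, a regex like the following (an illustrative pattern written against the lines above, not anything shipped with radosgw) pulls out client, status, and latency:

```python
import re

# Field layout taken from the beast access lines in this log.
BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    r'.*latency=(?P<latency>[\d.]+)s')

line = ('beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous '
        '[02/Oct/2025:13:04:26.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
print(m.group("client"), m.group("status"), m.group("latency"))
# → 192.168.122.102 200 0.000000000
```

Dropping matches where `user == "anonymous"` and `req` starts with `HEAD /` is a quick way to hide these probes and leave only real S3/Swift traffic.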
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.175 2 DEBUG nova.compute.manager [req-b69838da-6e61-4366-88ff-c8f1e63e9828 req-8ffdb1da-847e-4cce-9869-5f973bfb359d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Received event network-vif-deleted-0b22044a-cf3b-4bbe-ab44-dbcdd5becd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.208 2 DEBUG oslo_concurrency.lockutils [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.208 2 DEBUG oslo_concurrency.lockutils [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.259 2 DEBUG oslo_concurrency.processutils [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:26.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:04:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1733589827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.732 2 DEBUG oslo_concurrency.processutils [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
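The resource tracker shells out to `ceph df --format=json` (the exact command logged above, taking 0.473s here) and reads pool usage from the JSON to size `DISK_GB`. A sketch of the parsing half over a trimmed sample document; the field names (`pools[].name`, `pools[].stats`, `max_avail`) follow `ceph df` JSON output, but the sample values are invented:

```python
import json

# Trimmed, invented sample in the shape of `ceph df --format=json` output.
SAMPLE = json.dumps({
    "stats": {"total_bytes": 21474836480, "total_avail_bytes": 10737418240},
    "pools": [
        {"name": "vms", "stats": {"bytes_used": 123, "max_avail": 456}},
    ],
})

def pool_usage(ceph_df_json, pool):
    """Return one pool's stats block from `ceph df` JSON."""
    data = json.loads(ceph_df_json)
    for p in data["pools"]:
        if p["name"] == pool:
            return p["stats"]
    raise KeyError(pool)

print(pool_usage(SAMPLE, "vms")["max_avail"])  # → 456
```

Because this runs on every resource-tracker pass, a slow `ceph df` (here nearly half a second while holding `compute_resources`) directly stretches the lock hold time reported a few lines later.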
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.737 2 DEBUG nova.compute.provider_tree [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.755 2 DEBUG nova.scheduler.client.report [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
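The inventory record above is what Placement schedules against: effective capacity per resource class is `(total - reserved) * allocation_ratio`. Worked through for this host's numbers:

```python
def capacity(total, reserved, allocation_ratio):
    # Placement's effective capacity for one resource class.
    return int((total - reserved) * allocation_ratio)

# Values from the inventory logged above.
inventory = {
    "VCPU": dict(total=8, reserved=0, allocation_ratio=4.0),
    "MEMORY_MB": dict(total=7679, reserved=512, allocation_ratio=1.0),
    "DISK_GB": dict(total=20, reserved=1, allocation_ratio=0.9),
}
for rc, fields in inventory.items():
    print(rc, capacity(**fields))
# → VCPU 32, MEMORY_MB 7167, DISK_GB 17
```

So this node offers 32 schedulable vCPUs (8 physical at 4x overcommit), about 7 GiB of RAM with no overcommit, and 17 GB of disk (disk is deliberately under-committed at 0.9).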
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.777 2 DEBUG oslo_concurrency.lockutils [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.805 2 INFO nova.scheduler.client.report [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Deleted allocations for instance 04c07020-71d5-4a5c-9f1b-cab14e08e014#033[00m
Oct  2 09:04:26 np0005466031 nova_compute[235803]: 2025-10-02 13:04:26.913 2 DEBUG oslo_concurrency.lockutils [None req-7398f7d8-fcbd-4a4c-a2df-d27d10eae141 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "04c07020-71d5-4a5c-9f1b-cab14e08e014" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:27 np0005466031 nova_compute[235803]: 2025-10-02 13:04:27.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:28.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:28.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:28 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:28Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:62:3b:e6 10.100.0.13
Oct  2 09:04:28 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:28Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:62:3b:e6 10.100.0.13
Oct  2 09:04:29 np0005466031 nova_compute[235803]: 2025-10-02 13:04:29.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:30 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:30Z|00728|binding|INFO|Releasing lport bdf2c131-530f-4b25-98ee-832bcaebc620 from this chassis (sb_readonly=0)
Oct  2 09:04:30 np0005466031 nova_compute[235803]: 2025-10-02 13:04:30.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:30.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:30.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:30 np0005466031 nova_compute[235803]: 2025-10-02 13:04:30.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:31.114 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:32.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:32.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:34.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:34.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:34 np0005466031 nova_compute[235803]: 2025-10-02 13:04:34.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:35 np0005466031 nova_compute[235803]: 2025-10-02 13:04:35.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:35 np0005466031 podman[315868]: 2025-10-02 13:04:35.671412029 +0000 UTC m=+0.088660925 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller)
Oct  2 09:04:35 np0005466031 podman[315867]: 2025-10-02 13:04:35.672295735 +0000 UTC m=+0.089596843 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:04:35 np0005466031 nova_compute[235803]: 2025-10-02 13:04:35.696 2 INFO nova.compute.manager [None req-75dcf561-24f3-474d-a428-89b002c20943 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Get console output#033[00m
Oct  2 09:04:35 np0005466031 nova_compute[235803]: 2025-10-02 13:04:35.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:35 np0005466031 nova_compute[235803]: 2025-10-02 13:04:35.703 18228 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
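The recurring `Ignored error while reading from instance console pty: can't concat NoneType to bytes` line is the text of a plain `TypeError`: a console read returned `None` (no data on the pty) and the code appended it to a `bytes` buffer; Nova catches the exception, logs it at INFO, and carries on. A hypothetical reproduction of the failure mode (not Nova's actual reader, and the exact `TypeError` wording can vary across Python versions):

```python
def read_console(chunks):
    """Accumulate console reads into a bytes buffer, ignoring read errors
    the way the log line above does (illustrative helper only)."""
    buf = b""
    try:
        for c in chunks:
            buf += c  # c may be None when the pty has nothing to give
    except TypeError as e:
        print("Ignored error while reading from instance console pty: %s" % e)
    return buf

out = read_console([b"login: ", None, b"never reached"])
print(out)  # → b'login: '
```

The `Get console output` INFO lines immediately preceding each occurrence show these are API-driven `os-getConsoleOutput` calls, so the error is cosmetic: the caller still gets whatever bytes were buffered before the empty read.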
Oct  2 09:04:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:36.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:36.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:36 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:36Z|00729|binding|INFO|Releasing lport bdf2c131-530f-4b25-98ee-832bcaebc620 from this chassis (sb_readonly=0)
Oct  2 09:04:36 np0005466031 nova_compute[235803]: 2025-10-02 13:04:36.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:36 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:36Z|00730|binding|INFO|Releasing lport bdf2c131-530f-4b25-98ee-832bcaebc620 from this chassis (sb_readonly=0)
Oct  2 09:04:36 np0005466031 nova_compute[235803]: 2025-10-02 13:04:36.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:38.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:38.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:38 np0005466031 nova_compute[235803]: 2025-10-02 13:04:38.651 2 INFO nova.compute.manager [None req-7f10127e-ae33-4859-bfac-b1fb92c815c9 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Get console output#033[00m
Oct  2 09:04:38 np0005466031 nova_compute[235803]: 2025-10-02 13:04:38.655 18228 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:04:39 np0005466031 nova_compute[235803]: 2025-10-02 13:04:39.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:39 np0005466031 NetworkManager[44907]: <info>  [1759410279.2886] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Oct  2 09:04:39 np0005466031 NetworkManager[44907]: <info>  [1759410279.2897] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Oct  2 09:04:39 np0005466031 nova_compute[235803]: 2025-10-02 13:04:39.467 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410264.4664311, 04c07020-71d5-4a5c-9f1b-cab14e08e014 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:04:39 np0005466031 nova_compute[235803]: 2025-10-02 13:04:39.468 2 INFO nova.compute.manager [-] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:04:39 np0005466031 nova_compute[235803]: 2025-10-02 13:04:39.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:39 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:39Z|00731|binding|INFO|Releasing lport bdf2c131-530f-4b25-98ee-832bcaebc620 from this chassis (sb_readonly=0)
Oct  2 09:04:39 np0005466031 nova_compute[235803]: 2025-10-02 13:04:39.493 2 DEBUG nova.compute.manager [None req-3dc399c9-456e-4f0b-887f-bb84492dc38a - - - - - -] [instance: 04c07020-71d5-4a5c-9f1b-cab14e08e014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:39 np0005466031 nova_compute[235803]: 2025-10-02 13:04:39.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:39 np0005466031 nova_compute[235803]: 2025-10-02 13:04:39.758 2 INFO nova.compute.manager [None req-0fc5f517-600a-4fb8-9527-de1eb2a07aef 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Get console output#033[00m
Oct  2 09:04:39 np0005466031 nova_compute[235803]: 2025-10-02 13:04:39.762 18228 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:04:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:40.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:40.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:40 np0005466031 nova_compute[235803]: 2025-10-02 13:04:40.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:40 np0005466031 nova_compute[235803]: 2025-10-02 13:04:40.975 2 DEBUG nova.compute.manager [req-8b7e7f6d-ec8e-46ce-b9ce-abd9e90005a4 req-1af79330-2cdd-46e2-b326-49bb93ac8fe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Received event network-changed-dfaecef4-f9c7-4386-8395-afcb6cc7b23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:40 np0005466031 nova_compute[235803]: 2025-10-02 13:04:40.976 2 DEBUG nova.compute.manager [req-8b7e7f6d-ec8e-46ce-b9ce-abd9e90005a4 req-1af79330-2cdd-46e2-b326-49bb93ac8fe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Refreshing instance network info cache due to event network-changed-dfaecef4-f9c7-4386-8395-afcb6cc7b23f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:04:40 np0005466031 nova_compute[235803]: 2025-10-02 13:04:40.976 2 DEBUG oslo_concurrency.lockutils [req-8b7e7f6d-ec8e-46ce-b9ce-abd9e90005a4 req-1af79330-2cdd-46e2-b326-49bb93ac8fe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-cb892d5f-0907-47e7-94e9-5903cdb0cd5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:04:40 np0005466031 nova_compute[235803]: 2025-10-02 13:04:40.976 2 DEBUG oslo_concurrency.lockutils [req-8b7e7f6d-ec8e-46ce-b9ce-abd9e90005a4 req-1af79330-2cdd-46e2-b326-49bb93ac8fe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-cb892d5f-0907-47e7-94e9-5903cdb0cd5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:04:40 np0005466031 nova_compute[235803]: 2025-10-02 13:04:40.976 2 DEBUG nova.network.neutron [req-8b7e7f6d-ec8e-46ce-b9ce-abd9e90005a4 req-1af79330-2cdd-46e2-b326-49bb93ac8fe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Refreshing network info cache for port dfaecef4-f9c7-4386-8395-afcb6cc7b23f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.064 2 DEBUG oslo_concurrency.lockutils [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.065 2 DEBUG oslo_concurrency.lockutils [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.065 2 DEBUG oslo_concurrency.lockutils [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.066 2 DEBUG oslo_concurrency.lockutils [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.066 2 DEBUG oslo_concurrency.lockutils [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.067 2 INFO nova.compute.manager [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Terminating instance#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.068 2 DEBUG nova.compute.manager [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:04:41 np0005466031 kernel: tapdfaecef4-f9 (unregistering): left promiscuous mode
Oct  2 09:04:41 np0005466031 NetworkManager[44907]: <info>  [1759410281.1356] device (tapdfaecef4-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:41Z|00732|binding|INFO|Releasing lport dfaecef4-f9c7-4386-8395-afcb6cc7b23f from this chassis (sb_readonly=0)
Oct  2 09:04:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:41Z|00733|binding|INFO|Setting lport dfaecef4-f9c7-4386-8395-afcb6cc7b23f down in Southbound
Oct  2 09:04:41 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:41Z|00734|binding|INFO|Removing iface tapdfaecef4-f9 ovn-installed in OVS
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.161 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:3b:e6 10.100.0.13'], port_security=['fa:16:3e:62:3b:e6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cb892d5f-0907-47e7-94e9-5903cdb0cd5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95353de3-ad96-46d0-a73f-c8bf0356983c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33395d74-6330-4693-9dd8-d3a57ede4814', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b963636b-2f1b-4fc4-bf02-66c5b332b062, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=dfaecef4-f9c7-4386-8395-afcb6cc7b23f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.163 141898 INFO neutron.agent.ovn.metadata.agent [-] Port dfaecef4-f9c7-4386-8395-afcb6cc7b23f in datapath 95353de3-ad96-46d0-a73f-c8bf0356983c unbound from our chassis#033[00m
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.165 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 95353de3-ad96-46d0-a73f-c8bf0356983c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.166 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[430e5432-2b66-4eeb-b30c-bc5015cb3ff5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.166 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c namespace which is not needed anymore#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:41 np0005466031 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b7.scope: Deactivated successfully.
Oct  2 09:04:41 np0005466031 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b7.scope: Consumed 13.324s CPU time.
Oct  2 09:04:41 np0005466031 systemd-machined[192227]: Machine qemu-83-instance-000000b7 terminated.
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.300 2 INFO nova.virt.libvirt.driver [-] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Instance destroyed successfully.#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.301 2 DEBUG nova.objects.instance [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid cb892d5f-0907-47e7-94e9-5903cdb0cd5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:41 np0005466031 neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c[315628]: [NOTICE]   (315632) : haproxy version is 2.8.14-c23fe91
Oct  2 09:04:41 np0005466031 neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c[315628]: [NOTICE]   (315632) : path to executable is /usr/sbin/haproxy
Oct  2 09:04:41 np0005466031 neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c[315628]: [WARNING]  (315632) : Exiting Master process...
Oct  2 09:04:41 np0005466031 neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c[315628]: [ALERT]    (315632) : Current worker (315634) exited with code 143 (Terminated)
Oct  2 09:04:41 np0005466031 neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c[315628]: [WARNING]  (315632) : All workers exited. Exiting... (0)
Oct  2 09:04:41 np0005466031 systemd[1]: libpod-d203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff.scope: Deactivated successfully.
Oct  2 09:04:41 np0005466031 conmon[315628]: conmon d203484ce8ece2d9e8d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff.scope/container/memory.events
Oct  2 09:04:41 np0005466031 podman[315940]: 2025-10-02 13:04:41.315532027 +0000 UTC m=+0.063086568 container died d203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.323 2 DEBUG nova.virt.libvirt.vif [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1438879093',display_name='tempest-TestNetworkBasicOps-server-1438879093',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1438879093',id=183,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBILv6FdTSAvyymUNT7+kFPYh8XkeLW0mSI5Iclk4+UUSGYVrSKA1ikodl+eTDXPjlvIxLRi6fJ4xtLjQqk9lT+lc0TDH6xeHzyhqW1IJ50dJGVI/tWnRRfVIrWPwYFZLDg==',key_name='tempest-TestNetworkBasicOps-1862097483',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:04:15Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-f8binyyv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:04:15Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=cb892d5f-0907-47e7-94e9-5903cdb0cd5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "address": "fa:16:3e:62:3b:e6", "network": {"id": "95353de3-ad96-46d0-a73f-c8bf0356983c", "bridge": "br-int", "label": "tempest-network-smoke--1089909448", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfaecef4-f9", "ovs_interfaceid": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.325 2 DEBUG nova.network.os_vif_util [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "address": "fa:16:3e:62:3b:e6", "network": {"id": "95353de3-ad96-46d0-a73f-c8bf0356983c", "bridge": "br-int", "label": "tempest-network-smoke--1089909448", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfaecef4-f9", "ovs_interfaceid": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.325 2 DEBUG nova.network.os_vif_util [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:62:3b:e6,bridge_name='br-int',has_traffic_filtering=True,id=dfaecef4-f9c7-4386-8395-afcb6cc7b23f,network=Network(95353de3-ad96-46d0-a73f-c8bf0356983c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfaecef4-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.326 2 DEBUG os_vif [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:62:3b:e6,bridge_name='br-int',has_traffic_filtering=True,id=dfaecef4-f9c7-4386-8395-afcb6cc7b23f,network=Network(95353de3-ad96-46d0-a73f-c8bf0356983c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfaecef4-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfaecef4-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.332 2 INFO os_vif [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:62:3b:e6,bridge_name='br-int',has_traffic_filtering=True,id=dfaecef4-f9c7-4386-8395-afcb6cc7b23f,network=Network(95353de3-ad96-46d0-a73f-c8bf0356983c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfaecef4-f9')#033[00m
Oct  2 09:04:41 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff-userdata-shm.mount: Deactivated successfully.
Oct  2 09:04:41 np0005466031 systemd[1]: var-lib-containers-storage-overlay-737fea02bb1c24cc2e62227bbbf4915578bcb6876aef2aff6730dfdbbd2ae03f-merged.mount: Deactivated successfully.
Oct  2 09:04:41 np0005466031 podman[315940]: 2025-10-02 13:04:41.357725763 +0000 UTC m=+0.105280304 container cleanup d203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 09:04:41 np0005466031 systemd[1]: libpod-conmon-d203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff.scope: Deactivated successfully.
Oct  2 09:04:41 np0005466031 podman[315997]: 2025-10-02 13:04:41.425159156 +0000 UTC m=+0.045094080 container remove d203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.431 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ca838c28-01c9-4bf7-b777-182ea6b3cc58]: (4, ('Thu Oct  2 01:04:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c (d203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff)\nd203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff\nThu Oct  2 01:04:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c (d203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff)\nd203484ce8ece2d9e8d1354d3b4564468c9ef86aba75946a980f77cc991f55ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.432 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[92791e13-23dd-4c49-a4de-2717465bc722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.433 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95353de3-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:41 np0005466031 kernel: tap95353de3-a0: left promiscuous mode
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.440 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e77475d0-fdff-40ae-901d-0cc277947b27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.466 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3412139b-0890-44a5-ac5b-ea785c35e7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.467 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[42464c1d-fa24-461a-8c5b-df528ea6c456]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.482 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b7539208-fe3a-4302-949d-c5802a31eb44]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 811033, 'reachable_time': 36682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316014, 'error': None, 'target': 'ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.484 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-95353de3-ad96-46d0-a73f-c8bf0356983c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:04:41 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:41.484 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[774750d9-89ff-438c-babc-d0c7fcbfa44b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:41 np0005466031 systemd[1]: run-netns-ovnmeta\x2d95353de3\x2dad96\x2d46d0\x2da73f\x2dc8bf0356983c.mount: Deactivated successfully.
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.853 2 INFO nova.virt.libvirt.driver [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Deleting instance files /var/lib/nova/instances/cb892d5f-0907-47e7-94e9-5903cdb0cd5c_del#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.854 2 INFO nova.virt.libvirt.driver [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Deletion of /var/lib/nova/instances/cb892d5f-0907-47e7-94e9-5903cdb0cd5c_del complete#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.922 2 INFO nova.compute.manager [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.923 2 DEBUG oslo.service.loopingcall [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.924 2 DEBUG nova.compute.manager [-] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:04:41 np0005466031 nova_compute[235803]: 2025-10-02 13:04:41.924 2 DEBUG nova.network.neutron [-] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:04:42 np0005466031 nova_compute[235803]: 2025-10-02 13:04:42.018 2 DEBUG nova.compute.manager [req-d82c01c8-d3c5-4870-804b-cd30565417dd req-7d5fda41-b914-4180-8fa4-3d7afd846250 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Received event network-vif-unplugged-dfaecef4-f9c7-4386-8395-afcb6cc7b23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:42 np0005466031 nova_compute[235803]: 2025-10-02 13:04:42.019 2 DEBUG oslo_concurrency.lockutils [req-d82c01c8-d3c5-4870-804b-cd30565417dd req-7d5fda41-b914-4180-8fa4-3d7afd846250 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:42 np0005466031 nova_compute[235803]: 2025-10-02 13:04:42.019 2 DEBUG oslo_concurrency.lockutils [req-d82c01c8-d3c5-4870-804b-cd30565417dd req-7d5fda41-b914-4180-8fa4-3d7afd846250 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:42 np0005466031 nova_compute[235803]: 2025-10-02 13:04:42.019 2 DEBUG oslo_concurrency.lockutils [req-d82c01c8-d3c5-4870-804b-cd30565417dd req-7d5fda41-b914-4180-8fa4-3d7afd846250 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:42 np0005466031 nova_compute[235803]: 2025-10-02 13:04:42.020 2 DEBUG nova.compute.manager [req-d82c01c8-d3c5-4870-804b-cd30565417dd req-7d5fda41-b914-4180-8fa4-3d7afd846250 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] No waiting events found dispatching network-vif-unplugged-dfaecef4-f9c7-4386-8395-afcb6cc7b23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:04:42 np0005466031 nova_compute[235803]: 2025-10-02 13:04:42.020 2 DEBUG nova.compute.manager [req-d82c01c8-d3c5-4870-804b-cd30565417dd req-7d5fda41-b914-4180-8fa4-3d7afd846250 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Received event network-vif-unplugged-dfaecef4-f9c7-4386-8395-afcb6cc7b23f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:04:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:42.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:42.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:42 np0005466031 podman[316017]: 2025-10-02 13:04:42.636479505 +0000 UTC m=+0.061911345 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct  2 09:04:42 np0005466031 nova_compute[235803]: 2025-10-02 13:04:42.637 2 DEBUG nova.network.neutron [-] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:42 np0005466031 podman[316018]: 2025-10-02 13:04:42.659741955 +0000 UTC m=+0.083303281 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:04:42 np0005466031 nova_compute[235803]: 2025-10-02 13:04:42.658 2 INFO nova.compute.manager [-] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Took 0.73 seconds to deallocate network for instance.#033[00m
Oct  2 09:04:42 np0005466031 nova_compute[235803]: 2025-10-02 13:04:42.716 2 DEBUG oslo_concurrency.lockutils [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:42 np0005466031 nova_compute[235803]: 2025-10-02 13:04:42.716 2 DEBUG oslo_concurrency.lockutils [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:42 np0005466031 nova_compute[235803]: 2025-10-02 13:04:42.785 2 DEBUG oslo_concurrency.processutils [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:42 np0005466031 nova_compute[235803]: 2025-10-02 13:04:42.949 2 DEBUG nova.network.neutron [req-8b7e7f6d-ec8e-46ce-b9ce-abd9e90005a4 req-1af79330-2cdd-46e2-b326-49bb93ac8fe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Updated VIF entry in instance network info cache for port dfaecef4-f9c7-4386-8395-afcb6cc7b23f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:04:42 np0005466031 nova_compute[235803]: 2025-10-02 13:04:42.950 2 DEBUG nova.network.neutron [req-8b7e7f6d-ec8e-46ce-b9ce-abd9e90005a4 req-1af79330-2cdd-46e2-b326-49bb93ac8fe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Updating instance_info_cache with network_info: [{"id": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "address": "fa:16:3e:62:3b:e6", "network": {"id": "95353de3-ad96-46d0-a73f-c8bf0356983c", "bridge": "br-int", "label": "tempest-network-smoke--1089909448", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfaecef4-f9", "ovs_interfaceid": "dfaecef4-f9c7-4386-8395-afcb6cc7b23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:43 np0005466031 nova_compute[235803]: 2025-10-02 13:04:43.006 2 DEBUG oslo_concurrency.lockutils [req-8b7e7f6d-ec8e-46ce-b9ce-abd9e90005a4 req-1af79330-2cdd-46e2-b326-49bb93ac8fe1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-cb892d5f-0907-47e7-94e9-5903cdb0cd5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:04:43 np0005466031 nova_compute[235803]: 2025-10-02 13:04:43.085 2 DEBUG nova.compute.manager [req-d236c7a6-a1aa-4ca2-8684-70801df6f1ac req-0748bcb1-60ad-46c6-8d38-49705fe92087 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Received event network-vif-deleted-dfaecef4-f9c7-4386-8395-afcb6cc7b23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:43 np0005466031 nova_compute[235803]: 2025-10-02 13:04:43.085 2 INFO nova.compute.manager [req-d236c7a6-a1aa-4ca2-8684-70801df6f1ac req-0748bcb1-60ad-46c6-8d38-49705fe92087 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Neutron deleted interface dfaecef4-f9c7-4386-8395-afcb6cc7b23f; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 09:04:43 np0005466031 nova_compute[235803]: 2025-10-02 13:04:43.086 2 DEBUG nova.network.neutron [req-d236c7a6-a1aa-4ca2-8684-70801df6f1ac req-0748bcb1-60ad-46c6-8d38-49705fe92087 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:43 np0005466031 nova_compute[235803]: 2025-10-02 13:04:43.131 2 DEBUG nova.compute.manager [req-d236c7a6-a1aa-4ca2-8684-70801df6f1ac req-0748bcb1-60ad-46c6-8d38-49705fe92087 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Detach interface failed, port_id=dfaecef4-f9c7-4386-8395-afcb6cc7b23f, reason: Instance cb892d5f-0907-47e7-94e9-5903cdb0cd5c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 09:04:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/341473276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:43 np0005466031 nova_compute[235803]: 2025-10-02 13:04:43.232 2 DEBUG oslo_concurrency.processutils [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:43 np0005466031 nova_compute[235803]: 2025-10-02 13:04:43.237 2 DEBUG nova.compute.provider_tree [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:04:43 np0005466031 nova_compute[235803]: 2025-10-02 13:04:43.291 2 DEBUG nova.scheduler.client.report [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:04:43 np0005466031 nova_compute[235803]: 2025-10-02 13:04:43.320 2 DEBUG oslo_concurrency.lockutils [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:43 np0005466031 nova_compute[235803]: 2025-10-02 13:04:43.355 2 INFO nova.scheduler.client.report [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance cb892d5f-0907-47e7-94e9-5903cdb0cd5c#033[00m
Oct  2 09:04:43 np0005466031 nova_compute[235803]: 2025-10-02 13:04:43.437 2 DEBUG oslo_concurrency.lockutils [None req-61d14d3e-36f2-4358-9d54-ca61f345b472 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:44 np0005466031 nova_compute[235803]: 2025-10-02 13:04:44.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:44 np0005466031 nova_compute[235803]: 2025-10-02 13:04:44.154 2 DEBUG nova.compute.manager [req-1b3623ff-3c36-4329-bb90-3587a4af2110 req-e84f78dd-901c-416a-bb5a-dd4197bdb4d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Received event network-vif-plugged-dfaecef4-f9c7-4386-8395-afcb6cc7b23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:44 np0005466031 nova_compute[235803]: 2025-10-02 13:04:44.155 2 DEBUG oslo_concurrency.lockutils [req-1b3623ff-3c36-4329-bb90-3587a4af2110 req-e84f78dd-901c-416a-bb5a-dd4197bdb4d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:44 np0005466031 nova_compute[235803]: 2025-10-02 13:04:44.155 2 DEBUG oslo_concurrency.lockutils [req-1b3623ff-3c36-4329-bb90-3587a4af2110 req-e84f78dd-901c-416a-bb5a-dd4197bdb4d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:44 np0005466031 nova_compute[235803]: 2025-10-02 13:04:44.156 2 DEBUG oslo_concurrency.lockutils [req-1b3623ff-3c36-4329-bb90-3587a4af2110 req-e84f78dd-901c-416a-bb5a-dd4197bdb4d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cb892d5f-0907-47e7-94e9-5903cdb0cd5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:44 np0005466031 nova_compute[235803]: 2025-10-02 13:04:44.156 2 DEBUG nova.compute.manager [req-1b3623ff-3c36-4329-bb90-3587a4af2110 req-e84f78dd-901c-416a-bb5a-dd4197bdb4d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] No waiting events found dispatching network-vif-plugged-dfaecef4-f9c7-4386-8395-afcb6cc7b23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:04:44 np0005466031 nova_compute[235803]: 2025-10-02 13:04:44.156 2 WARNING nova.compute.manager [req-1b3623ff-3c36-4329-bb90-3587a4af2110 req-e84f78dd-901c-416a-bb5a-dd4197bdb4d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Received unexpected event network-vif-plugged-dfaecef4-f9c7-4386-8395-afcb6cc7b23f for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:04:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:44.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:44 np0005466031 nova_compute[235803]: 2025-10-02 13:04:44.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:44.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:45 np0005466031 nova_compute[235803]: 2025-10-02 13:04:45.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:46.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:46 np0005466031 nova_compute[235803]: 2025-10-02 13:04:46.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:46.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:46 np0005466031 nova_compute[235803]: 2025-10-02 13:04:46.884 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:46 np0005466031 nova_compute[235803]: 2025-10-02 13:04:46.885 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:46 np0005466031 nova_compute[235803]: 2025-10-02 13:04:46.916 2 DEBUG nova.compute.manager [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:04:46 np0005466031 nova_compute[235803]: 2025-10-02 13:04:46.984 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:46 np0005466031 nova_compute[235803]: 2025-10-02 13:04:46.985 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:46 np0005466031 nova_compute[235803]: 2025-10-02 13:04:46.990 2 DEBUG nova.virt.hardware [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:04:46 np0005466031 nova_compute[235803]: 2025-10-02 13:04:46.990 2 INFO nova.compute.claims [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.138 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3717693724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.560 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.573 2 DEBUG nova.compute.provider_tree [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.592 2 DEBUG nova.scheduler.client.report [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.668 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.669 2 DEBUG nova.compute.manager [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.721 2 DEBUG nova.compute.manager [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.721 2 DEBUG nova.network.neutron [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.772 2 INFO nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.810 2 DEBUG nova.compute.manager [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.942 2 DEBUG nova.compute.manager [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.943 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.943 2 INFO nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Creating image(s)#033[00m
Oct  2 09:04:47 np0005466031 nova_compute[235803]: 2025-10-02 13:04:47.978 2 DEBUG nova.storage.rbd_utils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.006 2 DEBUG nova.storage.rbd_utils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.031 2 DEBUG nova.storage.rbd_utils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.034 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.063 2 DEBUG nova.policy [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.097 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.098 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.098 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.099 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.128 2 DEBUG nova.storage.rbd_utils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.131 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:48.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:48.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.396 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.465 2 DEBUG nova.storage.rbd_utils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] resizing rbd image e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.583 2 DEBUG nova.objects.instance [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid e32508e3-e3b9-4e61-a988-8a3e0ada7848 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.608 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.609 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Ensure instance console log exists: /var/lib/nova/instances/e32508e3-e3b9-4e61-a988-8a3e0ada7848/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.609 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.610 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:48 np0005466031 nova_compute[235803]: 2025-10-02 13:04:48.610 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:49 np0005466031 nova_compute[235803]: 2025-10-02 13:04:49.141 2 DEBUG nova.network.neutron [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Successfully created port: 01cd52b4-b38a-475e-81eb-435d4253cc58 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:04:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:49 np0005466031 nova_compute[235803]: 2025-10-02 13:04:49.947 2 DEBUG nova.network.neutron [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Successfully updated port: 01cd52b4-b38a-475e-81eb-435d4253cc58 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:04:49 np0005466031 nova_compute[235803]: 2025-10-02 13:04:49.962 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:04:49 np0005466031 nova_compute[235803]: 2025-10-02 13:04:49.963 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:04:49 np0005466031 nova_compute[235803]: 2025-10-02 13:04:49.963 2 DEBUG nova.network.neutron [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:04:50 np0005466031 nova_compute[235803]: 2025-10-02 13:04:50.100 2 DEBUG nova.compute.manager [req-4c56cf7f-e5b3-4677-ba24-1eb611c46296 req-b0762bd2-a50a-49f3-977b-23788ad681c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received event network-changed-01cd52b4-b38a-475e-81eb-435d4253cc58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:50 np0005466031 nova_compute[235803]: 2025-10-02 13:04:50.101 2 DEBUG nova.compute.manager [req-4c56cf7f-e5b3-4677-ba24-1eb611c46296 req-b0762bd2-a50a-49f3-977b-23788ad681c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Refreshing instance network info cache due to event network-changed-01cd52b4-b38a-475e-81eb-435d4253cc58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:04:50 np0005466031 nova_compute[235803]: 2025-10-02 13:04:50.101 2 DEBUG oslo_concurrency.lockutils [req-4c56cf7f-e5b3-4677-ba24-1eb611c46296 req-b0762bd2-a50a-49f3-977b-23788ad681c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:04:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:50.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:50 np0005466031 nova_compute[235803]: 2025-10-02 13:04:50.255 2 DEBUG nova.network.neutron [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:04:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:50.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:50 np0005466031 nova_compute[235803]: 2025-10-02 13:04:50.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.340 2 DEBUG nova.network.neutron [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Updating instance_info_cache with network_info: [{"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.357 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.358 2 DEBUG nova.compute.manager [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Instance network_info: |[{"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.358 2 DEBUG oslo_concurrency.lockutils [req-4c56cf7f-e5b3-4677-ba24-1eb611c46296 req-b0762bd2-a50a-49f3-977b-23788ad681c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.359 2 DEBUG nova.network.neutron [req-4c56cf7f-e5b3-4677-ba24-1eb611c46296 req-b0762bd2-a50a-49f3-977b-23788ad681c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Refreshing network info cache for port 01cd52b4-b38a-475e-81eb-435d4253cc58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.362 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Start _get_guest_xml network_info=[{"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.367 2 WARNING nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.372 2 DEBUG nova.virt.libvirt.host [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.372 2 DEBUG nova.virt.libvirt.host [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.379 2 DEBUG nova.virt.libvirt.host [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.380 2 DEBUG nova.virt.libvirt.host [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.381 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.381 2 DEBUG nova.virt.hardware [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.382 2 DEBUG nova.virt.hardware [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.382 2 DEBUG nova.virt.hardware [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.382 2 DEBUG nova.virt.hardware [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.383 2 DEBUG nova.virt.hardware [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.383 2 DEBUG nova.virt.hardware [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.383 2 DEBUG nova.virt.hardware [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.383 2 DEBUG nova.virt.hardware [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.384 2 DEBUG nova.virt.hardware [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.384 2 DEBUG nova.virt.hardware [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.384 2 DEBUG nova.virt.hardware [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.388 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:04:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2652707074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.843 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.871 2 DEBUG nova.storage.rbd_utils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:51 np0005466031 nova_compute[235803]: 2025-10-02 13:04:51.876 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:52.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:04:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2608477789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.291 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.292 2 DEBUG nova.virt.libvirt.vif [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-772464044',display_name='tempest-TestNetworkAdvancedServerOps-server-772464044',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-772464044',id=185,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNqMbsXEb0nlmiCBUDGaQ25+lSABGmHZAahFbd9qUbZ5D41UhMPo5l72icdDUbZWbwyUm0mAcIVROdlXx9RsskHHInH4DzmIdBmWSx+8TV711AVo6/b7Vb80vU0lzfQ5fg==',key_name='tempest-TestNetworkAdvancedServerOps-1650343482',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-0kdz48mq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:47Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=e32508e3-e3b9-4e61-a988-8a3e0ada7848,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.292 2 DEBUG nova.network.os_vif_util [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.293 2 DEBUG nova.network.os_vif_util [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:d1:b6,bridge_name='br-int',has_traffic_filtering=True,id=01cd52b4-b38a-475e-81eb-435d4253cc58,network=Network(2ebd38c9-b7bf-496a-bf5d-d126c7d6c970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd52b4-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.294 2 DEBUG nova.objects.instance [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e32508e3-e3b9-4e61-a988-8a3e0ada7848 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.311 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  <uuid>e32508e3-e3b9-4e61-a988-8a3e0ada7848</uuid>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  <name>instance-000000b9</name>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-772464044</nova:name>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:04:51</nova:creationTime>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <nova:port uuid="01cd52b4-b38a-475e-81eb-435d4253cc58">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <entry name="serial">e32508e3-e3b9-4e61-a988-8a3e0ada7848</entry>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <entry name="uuid">e32508e3-e3b9-4e61-a988-8a3e0ada7848</entry>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk.config">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:ab:d1:b6"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <target dev="tap01cd52b4-b3"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/e32508e3-e3b9-4e61-a988-8a3e0ada7848/console.log" append="off"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:04:52 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:04:52 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:04:52 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:04:52 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.312 2 DEBUG nova.compute.manager [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Preparing to wait for external event network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.312 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.313 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.313 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.314 2 DEBUG nova.virt.libvirt.vif [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-772464044',display_name='tempest-TestNetworkAdvancedServerOps-server-772464044',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-772464044',id=185,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNqMbsXEb0nlmiCBUDGaQ25+lSABGmHZAahFbd9qUbZ5D41UhMPo5l72icdDUbZWbwyUm0mAcIVROdlXx9RsskHHInH4DzmIdBmWSx+8TV711AVo6/b7Vb80vU0lzfQ5fg==',key_name='tempest-TestNetworkAdvancedServerOps-1650343482',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-0kdz48mq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:47Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=e32508e3-e3b9-4e61-a988-8a3e0ada7848,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.314 2 DEBUG nova.network.os_vif_util [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.315 2 DEBUG nova.network.os_vif_util [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:d1:b6,bridge_name='br-int',has_traffic_filtering=True,id=01cd52b4-b38a-475e-81eb-435d4253cc58,network=Network(2ebd38c9-b7bf-496a-bf5d-d126c7d6c970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd52b4-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.315 2 DEBUG os_vif [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:d1:b6,bridge_name='br-int',has_traffic_filtering=True,id=01cd52b4-b38a-475e-81eb-435d4253cc58,network=Network(2ebd38c9-b7bf-496a-bf5d-d126c7d6c970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd52b4-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.317 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01cd52b4-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap01cd52b4-b3, col_values=(('external_ids', {'iface-id': '01cd52b4-b38a-475e-81eb-435d4253cc58', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:d1:b6', 'vm-uuid': 'e32508e3-e3b9-4e61-a988-8a3e0ada7848'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:52 np0005466031 NetworkManager[44907]: <info>  [1759410292.3220] manager: (tap01cd52b4-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.327 2 INFO os_vif [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:d1:b6,bridge_name='br-int',has_traffic_filtering=True,id=01cd52b4-b38a-475e-81eb-435d4253cc58,network=Network(2ebd38c9-b7bf-496a-bf5d-d126c7d6c970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd52b4-b3')#033[00m
Oct  2 09:04:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:52.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.390 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.390 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.390 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:ab:d1:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.391 2 INFO nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Using config drive#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.417 2 DEBUG nova.storage.rbd_utils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.933 2 DEBUG nova.network.neutron [req-4c56cf7f-e5b3-4677-ba24-1eb611c46296 req-b0762bd2-a50a-49f3-977b-23788ad681c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Updated VIF entry in instance network info cache for port 01cd52b4-b38a-475e-81eb-435d4253cc58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.934 2 DEBUG nova.network.neutron [req-4c56cf7f-e5b3-4677-ba24-1eb611c46296 req-b0762bd2-a50a-49f3-977b-23788ad681c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Updating instance_info_cache with network_info: [{"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:52 np0005466031 nova_compute[235803]: 2025-10-02 13:04:52.950 2 DEBUG oslo_concurrency.lockutils [req-4c56cf7f-e5b3-4677-ba24-1eb611c46296 req-b0762bd2-a50a-49f3-977b-23788ad681c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:04:53 np0005466031 nova_compute[235803]: 2025-10-02 13:04:53.016 2 INFO nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Creating config drive at /var/lib/nova/instances/e32508e3-e3b9-4e61-a988-8a3e0ada7848/disk.config#033[00m
Oct  2 09:04:53 np0005466031 nova_compute[235803]: 2025-10-02 13:04:53.021 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e32508e3-e3b9-4e61-a988-8a3e0ada7848/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2sax8rgk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:53 np0005466031 nova_compute[235803]: 2025-10-02 13:04:53.159 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e32508e3-e3b9-4e61-a988-8a3e0ada7848/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2sax8rgk" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:53 np0005466031 nova_compute[235803]: 2025-10-02 13:04:53.186 2 DEBUG nova.storage.rbd_utils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:53 np0005466031 nova_compute[235803]: 2025-10-02 13:04:53.191 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e32508e3-e3b9-4e61-a988-8a3e0ada7848/disk.config e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:54.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:54.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:55 np0005466031 nova_compute[235803]: 2025-10-02 13:04:55.492 2 DEBUG oslo_concurrency.processutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e32508e3-e3b9-4e61-a988-8a3e0ada7848/disk.config e32508e3-e3b9-4e61-a988-8a3e0ada7848_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:55 np0005466031 nova_compute[235803]: 2025-10-02 13:04:55.493 2 INFO nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Deleting local config drive /var/lib/nova/instances/e32508e3-e3b9-4e61-a988-8a3e0ada7848/disk.config because it was imported into RBD.#033[00m
Oct  2 09:04:55 np0005466031 kernel: tap01cd52b4-b3: entered promiscuous mode
Oct  2 09:04:55 np0005466031 NetworkManager[44907]: <info>  [1759410295.5528] manager: (tap01cd52b4-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/332)
Oct  2 09:04:55 np0005466031 nova_compute[235803]: 2025-10-02 13:04:55.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:55Z|00735|binding|INFO|Claiming lport 01cd52b4-b38a-475e-81eb-435d4253cc58 for this chassis.
Oct  2 09:04:55 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:55Z|00736|binding|INFO|01cd52b4-b38a-475e-81eb-435d4253cc58: Claiming fa:16:3e:ab:d1:b6 10.100.0.3
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.562 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:d1:b6 10.100.0.3'], port_security=['fa:16:3e:ab:d1:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e32508e3-e3b9-4e61-a988-8a3e0ada7848', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2f43cb28-dfdc-48d9-8ac7-f9abc25cb786', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a7d4888-1040-4020-a961-84a13219efad, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=01cd52b4-b38a-475e-81eb-435d4253cc58) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.563 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 01cd52b4-b38a-475e-81eb-435d4253cc58 in datapath 2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 bound to our chassis#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.564 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ebd38c9-b7bf-496a-bf5d-d126c7d6c970#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.576 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[964138be-1e53-4b9e-ab20-1506057d1d40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.577 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2ebd38c9-b1 in ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.580 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2ebd38c9-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.580 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[20f1b0c1-a078-4937-aecd-9986e1484e54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.581 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[368ba47f-c605-4d72-99f3-900022aac6cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.593 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[37aa9b24-6f92-4b47-83b4-02503f048730]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 systemd-machined[192227]: New machine qemu-84-instance-000000b9.
Oct  2 09:04:55 np0005466031 systemd-udevd[316569]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:04:55 np0005466031 systemd[1]: Started Virtual Machine qemu-84-instance-000000b9.
Oct  2 09:04:55 np0005466031 NetworkManager[44907]: <info>  [1759410295.6136] device (tap01cd52b4-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:04:55 np0005466031 NetworkManager[44907]: <info>  [1759410295.6144] device (tap01cd52b4-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.618 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e90a07d1-5dee-476f-b120-2ecd8292b211]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 nova_compute[235803]: 2025-10-02 13:04:55.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:55Z|00737|binding|INFO|Setting lport 01cd52b4-b38a-475e-81eb-435d4253cc58 ovn-installed in OVS
Oct  2 09:04:55 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:55Z|00738|binding|INFO|Setting lport 01cd52b4-b38a-475e-81eb-435d4253cc58 up in Southbound
Oct  2 09:04:55 np0005466031 nova_compute[235803]: 2025-10-02 13:04:55.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.648 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f54787-1d09-4fe2-9c01-70f69de34b19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.653 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cfff2f93-f0ae-4a2f-9649-df0eeaacc480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 NetworkManager[44907]: <info>  [1759410295.6537] manager: (tap2ebd38c9-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/333)
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.686 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e561c29d-133c-4f88-a837-e5bb888e0625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.692 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a91909-f290-452f-aa51-4f66e9e3bd34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 NetworkManager[44907]: <info>  [1759410295.7130] device (tap2ebd38c9-b0): carrier: link connected
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.718 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4f5902-9281-4532-9a11-73538684bed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.745 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0793531f-7d4e-4e3b-940c-1b49e2e2b558]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ebd38c9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:5b:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815130, 'reachable_time': 36558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316607, 'error': None, 'target': 'ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.768 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4139547d-16db-4508-89ce-082eb1fefa99]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:5b49'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 815130, 'tstamp': 815130}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316610, 'error': None, 'target': 'ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 nova_compute[235803]: 2025-10-02 13:04:55.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.782 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[195baa72-80d4-452e-8277-fc5ff22b7ffe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ebd38c9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:5b:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815130, 'reachable_time': 36558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316611, 'error': None, 'target': 'ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.810 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c4168851-1334-4a70-a8be-3e554ded2678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.861 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f417a835-617f-46c7-989e-a12da472ec98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.862 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ebd38c9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.863 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.863 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ebd38c9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:55 np0005466031 NetworkManager[44907]: <info>  [1759410295.8665] manager: (tap2ebd38c9-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Oct  2 09:04:55 np0005466031 nova_compute[235803]: 2025-10-02 13:04:55.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005466031 kernel: tap2ebd38c9-b0: entered promiscuous mode
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.869 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ebd38c9-b0, col_values=(('external_ids', {'iface-id': 'f6b572e4-961b-4dad-8089-6d3ad0927ecc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:55 np0005466031 ovn_controller[132413]: 2025-10-02T13:04:55Z|00739|binding|INFO|Releasing lport f6b572e4-961b-4dad-8089-6d3ad0927ecc from this chassis (sb_readonly=0)
Oct  2 09:04:55 np0005466031 nova_compute[235803]: 2025-10-02 13:04:55.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005466031 nova_compute[235803]: 2025-10-02 13:04:55.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005466031 nova_compute[235803]: 2025-10-02 13:04:55.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.889 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2ebd38c9-b7bf-496a-bf5d-d126c7d6c970.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2ebd38c9-b7bf-496a-bf5d-d126c7d6c970.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.890 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[05374f98-42ba-4ad3-a619-3312f7d79cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.891 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/2ebd38c9-b7bf-496a-bf5d-d126c7d6c970.pid.haproxy
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 2ebd38c9-b7bf-496a-bf5d-d126c7d6c970
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:04:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:04:55.893 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'env', 'PROCESS_TAG=haproxy-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2ebd38c9-b7bf-496a-bf5d-d126c7d6c970.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:04:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:56.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:56 np0005466031 podman[316657]: 2025-10-02 13:04:56.23006825 +0000 UTC m=+0.049775945 container create d144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 09:04:56 np0005466031 systemd[1]: Started libpod-conmon-d144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07.scope.
Oct  2 09:04:56 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:04:56 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c1290c6868a80a6e45dacacd3b446cc8c5c56fc21e984f7fbcc5c0b3fbcac9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:04:56 np0005466031 podman[316657]: 2025-10-02 13:04:56.200365224 +0000 UTC m=+0.020072939 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:04:56 np0005466031 podman[316657]: 2025-10-02 13:04:56.30016798 +0000 UTC m=+0.119875705 container init d144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:04:56 np0005466031 nova_compute[235803]: 2025-10-02 13:04:56.300 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410281.2988746, cb892d5f-0907-47e7-94e9-5903cdb0cd5c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:04:56 np0005466031 nova_compute[235803]: 2025-10-02 13:04:56.301 2 INFO nova.compute.manager [-] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:04:56 np0005466031 podman[316657]: 2025-10-02 13:04:56.305667918 +0000 UTC m=+0.125375613 container start d144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 09:04:56 np0005466031 nova_compute[235803]: 2025-10-02 13:04:56.320 2 DEBUG nova.compute.manager [req-b2b5ef00-abd6-4900-b0bc-0d8541f7cd28 req-9935f4c8-3f13-450a-9025-c3df62ed6509 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received event network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:56 np0005466031 nova_compute[235803]: 2025-10-02 13:04:56.320 2 DEBUG oslo_concurrency.lockutils [req-b2b5ef00-abd6-4900-b0bc-0d8541f7cd28 req-9935f4c8-3f13-450a-9025-c3df62ed6509 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:56 np0005466031 nova_compute[235803]: 2025-10-02 13:04:56.320 2 DEBUG oslo_concurrency.lockutils [req-b2b5ef00-abd6-4900-b0bc-0d8541f7cd28 req-9935f4c8-3f13-450a-9025-c3df62ed6509 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:56 np0005466031 nova_compute[235803]: 2025-10-02 13:04:56.321 2 DEBUG oslo_concurrency.lockutils [req-b2b5ef00-abd6-4900-b0bc-0d8541f7cd28 req-9935f4c8-3f13-450a-9025-c3df62ed6509 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:56 np0005466031 nova_compute[235803]: 2025-10-02 13:04:56.321 2 DEBUG nova.compute.manager [req-b2b5ef00-abd6-4900-b0bc-0d8541f7cd28 req-9935f4c8-3f13-450a-9025-c3df62ed6509 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Processing event network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:04:56 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[316673]: [NOTICE]   (316677) : New worker (316679) forked
Oct  2 09:04:56 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[316673]: [NOTICE]   (316677) : Loading success.
Oct  2 09:04:56 np0005466031 nova_compute[235803]: 2025-10-02 13:04:56.329 2 DEBUG nova.compute.manager [None req-f127ff1d-dbfb-4da2-9bf0-31e7a5973572 - - - - - -] [instance: cb892d5f-0907-47e7-94e9-5903cdb0cd5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:56.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.444 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410297.4445653, e32508e3-e3b9-4e61-a988-8a3e0ada7848 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.445 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] VM Started (Lifecycle Event)#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.447 2 DEBUG nova.compute.manager [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.450 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.453 2 INFO nova.virt.libvirt.driver [-] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Instance spawned successfully.#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.453 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.468 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.473 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.476 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.476 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.477 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.477 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.477 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:04:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:04:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.478 2 DEBUG nova.virt.libvirt.driver [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.517 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.518 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410297.4446738, e32508e3-e3b9-4e61-a988-8a3e0ada7848 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.518 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.555 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.558 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410297.4494112, e32508e3-e3b9-4e61-a988-8a3e0ada7848 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.559 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.580 2 INFO nova.compute.manager [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Took 9.64 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.580 2 DEBUG nova.compute.manager [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.581 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.591 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.624 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.656 2 INFO nova.compute.manager [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Took 10.69 seconds to build instance.#033[00m
Oct  2 09:04:57 np0005466031 nova_compute[235803]: 2025-10-02 13:04:57.674 2 DEBUG oslo_concurrency.lockutils [None req-1bedf389-63a3-4159-8d9d-2f5e7e6c305d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:04:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:58.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:04:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:04:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:58.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:58 np0005466031 nova_compute[235803]: 2025-10-02 13:04:58.541 2 DEBUG nova.compute.manager [req-8c30bfd7-e3eb-4bd1-b7fd-5cdf42972d08 req-0a318d8b-1329-492c-8ec7-a5a55dfa4f44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received event network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:58 np0005466031 nova_compute[235803]: 2025-10-02 13:04:58.541 2 DEBUG oslo_concurrency.lockutils [req-8c30bfd7-e3eb-4bd1-b7fd-5cdf42972d08 req-0a318d8b-1329-492c-8ec7-a5a55dfa4f44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:58 np0005466031 nova_compute[235803]: 2025-10-02 13:04:58.541 2 DEBUG oslo_concurrency.lockutils [req-8c30bfd7-e3eb-4bd1-b7fd-5cdf42972d08 req-0a318d8b-1329-492c-8ec7-a5a55dfa4f44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:58 np0005466031 nova_compute[235803]: 2025-10-02 13:04:58.542 2 DEBUG oslo_concurrency.lockutils [req-8c30bfd7-e3eb-4bd1-b7fd-5cdf42972d08 req-0a318d8b-1329-492c-8ec7-a5a55dfa4f44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:58 np0005466031 nova_compute[235803]: 2025-10-02 13:04:58.542 2 DEBUG nova.compute.manager [req-8c30bfd7-e3eb-4bd1-b7fd-5cdf42972d08 req-0a318d8b-1329-492c-8ec7-a5a55dfa4f44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] No waiting events found dispatching network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:04:58 np0005466031 nova_compute[235803]: 2025-10-02 13:04:58.542 2 WARNING nova.compute.manager [req-8c30bfd7-e3eb-4bd1-b7fd-5cdf42972d08 req-0a318d8b-1329-492c-8ec7-a5a55dfa4f44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received unexpected event network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:04:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:00.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:00.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:00 np0005466031 nova_compute[235803]: 2025-10-02 13:05:00.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:01 np0005466031 NetworkManager[44907]: <info>  [1759410301.0267] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Oct  2 09:05:01 np0005466031 NetworkManager[44907]: <info>  [1759410301.0275] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Oct  2 09:05:01 np0005466031 nova_compute[235803]: 2025-10-02 13:05:01.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:01 np0005466031 nova_compute[235803]: 2025-10-02 13:05:01.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:01 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:01Z|00740|binding|INFO|Releasing lport f6b572e4-961b-4dad-8089-6d3ad0927ecc from this chassis (sb_readonly=0)
Oct  2 09:05:01 np0005466031 nova_compute[235803]: 2025-10-02 13:05:01.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:01 np0005466031 nova_compute[235803]: 2025-10-02 13:05:01.372 2 DEBUG nova.compute.manager [req-abba25d4-5b3e-4654-8202-198d0927363b req-e6819a4d-6ae0-461a-a3c1-465971c5ee94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received event network-changed-01cd52b4-b38a-475e-81eb-435d4253cc58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:01 np0005466031 nova_compute[235803]: 2025-10-02 13:05:01.373 2 DEBUG nova.compute.manager [req-abba25d4-5b3e-4654-8202-198d0927363b req-e6819a4d-6ae0-461a-a3c1-465971c5ee94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Refreshing instance network info cache due to event network-changed-01cd52b4-b38a-475e-81eb-435d4253cc58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:05:01 np0005466031 nova_compute[235803]: 2025-10-02 13:05:01.373 2 DEBUG oslo_concurrency.lockutils [req-abba25d4-5b3e-4654-8202-198d0927363b req-e6819a4d-6ae0-461a-a3c1-465971c5ee94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:01 np0005466031 nova_compute[235803]: 2025-10-02 13:05:01.373 2 DEBUG oslo_concurrency.lockutils [req-abba25d4-5b3e-4654-8202-198d0927363b req-e6819a4d-6ae0-461a-a3c1-465971c5ee94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:01 np0005466031 nova_compute[235803]: 2025-10-02 13:05:01.374 2 DEBUG nova.network.neutron [req-abba25d4-5b3e-4654-8202-198d0927363b req-e6819a4d-6ae0-461a-a3c1-465971c5ee94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Refreshing network info cache for port 01cd52b4-b38a-475e-81eb-435d4253cc58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:05:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:02.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:02 np0005466031 nova_compute[235803]: 2025-10-02 13:05:02.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:02.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:02 np0005466031 nova_compute[235803]: 2025-10-02 13:05:02.856 2 DEBUG nova.network.neutron [req-abba25d4-5b3e-4654-8202-198d0927363b req-e6819a4d-6ae0-461a-a3c1-465971c5ee94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Updated VIF entry in instance network info cache for port 01cd52b4-b38a-475e-81eb-435d4253cc58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:05:02 np0005466031 nova_compute[235803]: 2025-10-02 13:05:02.857 2 DEBUG nova.network.neutron [req-abba25d4-5b3e-4654-8202-198d0927363b req-e6819a4d-6ae0-461a-a3c1-465971c5ee94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Updating instance_info_cache with network_info: [{"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:02 np0005466031 nova_compute[235803]: 2025-10-02 13:05:02.877 2 DEBUG oslo_concurrency.lockutils [req-abba25d4-5b3e-4654-8202-198d0927363b req-e6819a4d-6ae0-461a-a3c1-465971c5ee94 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:04.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:04.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:05:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:05:05 np0005466031 nova_compute[235803]: 2025-10-02 13:05:05.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:06.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:06.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:06 np0005466031 podman[316786]: 2025-10-02 13:05:06.635303194 +0000 UTC m=+0.051619678 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:05:06 np0005466031 podman[316787]: 2025-10-02 13:05:06.668079448 +0000 UTC m=+0.083224558 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:05:07 np0005466031 nova_compute[235803]: 2025-10-02 13:05:07.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:08.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:08.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:08 np0005466031 nova_compute[235803]: 2025-10-02 13:05:08.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:08 np0005466031 nova_compute[235803]: 2025-10-02 13:05:08.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:09 np0005466031 nova_compute[235803]: 2025-10-02 13:05:09.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:10.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:10.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:10 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:10Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ab:d1:b6 10.100.0.3
Oct  2 09:05:10 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:10Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ab:d1:b6 10.100.0.3
Oct  2 09:05:10 np0005466031 nova_compute[235803]: 2025-10-02 13:05:10.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:12.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:12 np0005466031 nova_compute[235803]: 2025-10-02 13:05:12.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:13 np0005466031 podman[316884]: 2025-10-02 13:05:13.644007878 +0000 UTC m=+0.069438722 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 09:05:13 np0005466031 podman[316883]: 2025-10-02 13:05:13.649475035 +0000 UTC m=+0.074981041 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:05:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:14.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:14.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:14 np0005466031 nova_compute[235803]: 2025-10-02 13:05:14.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:15 np0005466031 nova_compute[235803]: 2025-10-02 13:05:15.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:15 np0005466031 nova_compute[235803]: 2025-10-02 13:05:15.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:16.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:16.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:16 np0005466031 nova_compute[235803]: 2025-10-02 13:05:16.894 2 INFO nova.compute.manager [None req-33480d59-3ae6-4c18-9561-b0b259fda89d 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Get console output#033[00m
Oct  2 09:05:16 np0005466031 nova_compute[235803]: 2025-10-02 13:05:16.899 18228 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:05:17 np0005466031 nova_compute[235803]: 2025-10-02 13:05:17.146 2 DEBUG nova.objects.instance [None req-6e053069-a09e-46a7-ba1d-356eb007ac5f 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e32508e3-e3b9-4e61-a988-8a3e0ada7848 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:17 np0005466031 nova_compute[235803]: 2025-10-02 13:05:17.239 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410317.2391052, e32508e3-e3b9-4e61-a988-8a3e0ada7848 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:17 np0005466031 nova_compute[235803]: 2025-10-02 13:05:17.240 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:05:17 np0005466031 nova_compute[235803]: 2025-10-02 13:05:17.267 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:17 np0005466031 nova_compute[235803]: 2025-10-02 13:05:17.271 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:05:17 np0005466031 nova_compute[235803]: 2025-10-02 13:05:17.295 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  2 09:05:17 np0005466031 nova_compute[235803]: 2025-10-02 13:05:17.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:17 np0005466031 kernel: tap01cd52b4-b3 (unregistering): left promiscuous mode
Oct  2 09:05:17 np0005466031 NetworkManager[44907]: <info>  [1759410317.9680] device (tap01cd52b4-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:18 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:18Z|00741|binding|INFO|Releasing lport 01cd52b4-b38a-475e-81eb-435d4253cc58 from this chassis (sb_readonly=0)
Oct  2 09:05:18 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:18Z|00742|binding|INFO|Setting lport 01cd52b4-b38a-475e-81eb-435d4253cc58 down in Southbound
Oct  2 09:05:18 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:18Z|00743|binding|INFO|Removing iface tap01cd52b4-b3 ovn-installed in OVS
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.036 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:d1:b6 10.100.0.3'], port_security=['fa:16:3e:ab:d1:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e32508e3-e3b9-4e61-a988-8a3e0ada7848', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2f43cb28-dfdc-48d9-8ac7-f9abc25cb786', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a7d4888-1040-4020-a961-84a13219efad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=01cd52b4-b38a-475e-81eb-435d4253cc58) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.038 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 01cd52b4-b38a-475e-81eb-435d4253cc58 in datapath 2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 unbound from our chassis#033[00m
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.039 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.040 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4ba155-f052-4ef7-96a1-0a2fe1c8d04e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.041 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 namespace which is not needed anymore#033[00m
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:18 np0005466031 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b9.scope: Deactivated successfully.
Oct  2 09:05:18 np0005466031 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b9.scope: Consumed 14.817s CPU time.
Oct  2 09:05:18 np0005466031 systemd-machined[192227]: Machine qemu-84-instance-000000b9 terminated.
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.148 2 DEBUG nova.compute.manager [None req-6e053069-a09e-46a7-ba1d-356eb007ac5f 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:18 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[316673]: [NOTICE]   (316677) : haproxy version is 2.8.14-c23fe91
Oct  2 09:05:18 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[316673]: [NOTICE]   (316677) : path to executable is /usr/sbin/haproxy
Oct  2 09:05:18 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[316673]: [WARNING]  (316677) : Exiting Master process...
Oct  2 09:05:18 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[316673]: [ALERT]    (316677) : Current worker (316679) exited with code 143 (Terminated)
Oct  2 09:05:18 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[316673]: [WARNING]  (316677) : All workers exited. Exiting... (0)
Oct  2 09:05:18 np0005466031 systemd[1]: libpod-d144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07.scope: Deactivated successfully.
Oct  2 09:05:18 np0005466031 podman[316954]: 2025-10-02 13:05:18.1828597 +0000 UTC m=+0.047152060 container died d144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:05:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:18.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:18 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07-userdata-shm.mount: Deactivated successfully.
Oct  2 09:05:18 np0005466031 systemd[1]: var-lib-containers-storage-overlay-e6c1290c6868a80a6e45dacacd3b446cc8c5c56fc21e984f7fbcc5c0b3fbcac9-merged.mount: Deactivated successfully.
Oct  2 09:05:18 np0005466031 podman[316954]: 2025-10-02 13:05:18.220956068 +0000 UTC m=+0.085248448 container cleanup d144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:05:18 np0005466031 systemd[1]: libpod-conmon-d144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07.scope: Deactivated successfully.
Oct  2 09:05:18 np0005466031 podman[316997]: 2025-10-02 13:05:18.281100881 +0000 UTC m=+0.039767557 container remove d144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.288 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ce9b8bd9-ecf4-4dc8-876d-6984b7123c97]: (4, ('Thu Oct  2 01:05:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 (d144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07)\nd144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07\nThu Oct  2 01:05:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 (d144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07)\nd144c173bfe71706e812fa2324f766567072ce29ad33d7a0df1e932745c90b07\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.290 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[36200364-f9f9-4c7b-8f9f-86f2554df54a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.291 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ebd38c9-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:18 np0005466031 kernel: tap2ebd38c9-b0: left promiscuous mode
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.301 2 DEBUG nova.compute.manager [req-000a3178-3eb8-4c15-ae99-bb4cb7dc7599 req-b01e12cf-c5f1-4ded-932a-180e5830d055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received event network-vif-unplugged-01cd52b4-b38a-475e-81eb-435d4253cc58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.301 2 DEBUG oslo_concurrency.lockutils [req-000a3178-3eb8-4c15-ae99-bb4cb7dc7599 req-b01e12cf-c5f1-4ded-932a-180e5830d055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.301 2 DEBUG oslo_concurrency.lockutils [req-000a3178-3eb8-4c15-ae99-bb4cb7dc7599 req-b01e12cf-c5f1-4ded-932a-180e5830d055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.302 2 DEBUG oslo_concurrency.lockutils [req-000a3178-3eb8-4c15-ae99-bb4cb7dc7599 req-b01e12cf-c5f1-4ded-932a-180e5830d055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.302 2 DEBUG nova.compute.manager [req-000a3178-3eb8-4c15-ae99-bb4cb7dc7599 req-b01e12cf-c5f1-4ded-932a-180e5830d055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] No waiting events found dispatching network-vif-unplugged-01cd52b4-b38a-475e-81eb-435d4253cc58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.302 2 WARNING nova.compute.manager [req-000a3178-3eb8-4c15-ae99-bb4cb7dc7599 req-b01e12cf-c5f1-4ded-932a-180e5830d055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received unexpected event network-vif-unplugged-01cd52b4-b38a-475e-81eb-435d4253cc58 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.320 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1404920f-830e-428b-8842-6d87d32d2e06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.353 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fd954c5c-0b38-4c7d-9885-8086448f4bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.354 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c6e6dd-bbff-40e0-8809-7b3c3903f4e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.368 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[84f654d3-c352-4432-8b0e-9cc27c849e72]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815123, 'reachable_time': 42198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317016, 'error': None, 'target': 'ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:18 np0005466031 systemd[1]: run-netns-ovnmeta\x2d2ebd38c9\x2db7bf\x2d496a\x2dbf5d\x2dd126c7d6c970.mount: Deactivated successfully.
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.373 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:05:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:18.373 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[d1428f56-947c-49ae-b6b6-c5feba87d736]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:18.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:05:18 np0005466031 nova_compute[235803]: 2025-10-02 13:05:18.655 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:05:19 np0005466031 nova_compute[235803]: 2025-10-02 13:05:19.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:19 np0005466031 nova_compute[235803]: 2025-10-02 13:05:19.660 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:19 np0005466031 nova_compute[235803]: 2025-10-02 13:05:19.661 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:19 np0005466031 nova_compute[235803]: 2025-10-02 13:05:19.661 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:19 np0005466031 nova_compute[235803]: 2025-10-02 13:05:19.661 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:05:19 np0005466031 nova_compute[235803]: 2025-10-02 13:05:19.661 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/634107591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.102 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.166 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.166 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:05:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:20.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.299 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.300 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4114MB free_disk=20.80620574951172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.300 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.301 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.367 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance e32508e3-e3b9-4e61-a988-8a3e0ada7848 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.368 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.368 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.382 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.400 2 DEBUG nova.compute.manager [req-0d2e6cde-5abf-4cad-bdc3-9510e0ea1271 req-82e3f94b-48e5-4c7b-8d0a-11790b8803b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received event network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.401 2 DEBUG oslo_concurrency.lockutils [req-0d2e6cde-5abf-4cad-bdc3-9510e0ea1271 req-82e3f94b-48e5-4c7b-8d0a-11790b8803b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.401 2 DEBUG oslo_concurrency.lockutils [req-0d2e6cde-5abf-4cad-bdc3-9510e0ea1271 req-82e3f94b-48e5-4c7b-8d0a-11790b8803b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.402 2 DEBUG oslo_concurrency.lockutils [req-0d2e6cde-5abf-4cad-bdc3-9510e0ea1271 req-82e3f94b-48e5-4c7b-8d0a-11790b8803b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.402 2 DEBUG nova.compute.manager [req-0d2e6cde-5abf-4cad-bdc3-9510e0ea1271 req-82e3f94b-48e5-4c7b-8d0a-11790b8803b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] No waiting events found dispatching network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.402 2 WARNING nova.compute.manager [req-0d2e6cde-5abf-4cad-bdc3-9510e0ea1271 req-82e3f94b-48e5-4c7b-8d0a-11790b8803b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received unexpected event network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.403 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.403 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:05:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:20.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.419 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.440 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.479 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/470398889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.883 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.889 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.905 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.929 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:05:20 np0005466031 nova_compute[235803]: 2025-10-02 13:05:20.930 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:21 np0005466031 nova_compute[235803]: 2025-10-02 13:05:21.084 2 INFO nova.compute.manager [None req-a5eabbe9-4672-41ad-9b57-081b93357280 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Get console output#033[00m
Oct  2 09:05:21 np0005466031 nova_compute[235803]: 2025-10-02 13:05:21.330 2 INFO nova.compute.manager [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Resuming#033[00m
Oct  2 09:05:21 np0005466031 nova_compute[235803]: 2025-10-02 13:05:21.331 2 DEBUG nova.objects.instance [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'flavor' on Instance uuid e32508e3-e3b9-4e61-a988-8a3e0ada7848 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:21 np0005466031 nova_compute[235803]: 2025-10-02 13:05:21.406 2 DEBUG oslo_concurrency.lockutils [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:21 np0005466031 nova_compute[235803]: 2025-10-02 13:05:21.407 2 DEBUG oslo_concurrency.lockutils [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:21 np0005466031 nova_compute[235803]: 2025-10-02 13:05:21.407 2 DEBUG nova.network.neutron [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:05:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:21.898 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:21 np0005466031 nova_compute[235803]: 2025-10-02 13:05:21.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:21.899 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:05:21 np0005466031 nova_compute[235803]: 2025-10-02 13:05:21.931 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:21 np0005466031 nova_compute[235803]: 2025-10-02 13:05:21.931 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:22.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:22 np0005466031 nova_compute[235803]: 2025-10-02 13:05:22.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:22.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/588557104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.345 2 DEBUG nova.network.neutron [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Updating instance_info_cache with network_info: [{"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.372 2 DEBUG oslo_concurrency.lockutils [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.376 2 DEBUG nova.virt.libvirt.vif [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-772464044',display_name='tempest-TestNetworkAdvancedServerOps-server-772464044',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-772464044',id=185,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNqMbsXEb0nlmiCBUDGaQ25+lSABGmHZAahFbd9qUbZ5D41UhMPo5l72icdDUbZWbwyUm0mAcIVROdlXx9RsskHHInH4DzmIdBmWSx+8TV711AVo6/b7Vb80vU0lzfQ5fg==',key_name='tempest-TestNetworkAdvancedServerOps-1650343482',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:04:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-0kdz48mq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:05:18Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=e32508e3-e3b9-4e61-a988-8a3e0ada7848,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.376 2 DEBUG nova.network.os_vif_util [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.376 2 DEBUG nova.network.os_vif_util [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:d1:b6,bridge_name='br-int',has_traffic_filtering=True,id=01cd52b4-b38a-475e-81eb-435d4253cc58,network=Network(2ebd38c9-b7bf-496a-bf5d-d126c7d6c970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd52b4-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.377 2 DEBUG os_vif [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:d1:b6,bridge_name='br-int',has_traffic_filtering=True,id=01cd52b4-b38a-475e-81eb-435d4253cc58,network=Network(2ebd38c9-b7bf-496a-bf5d-d126c7d6c970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd52b4-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.378 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.378 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.380 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01cd52b4-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap01cd52b4-b3, col_values=(('external_ids', {'iface-id': '01cd52b4-b38a-475e-81eb-435d4253cc58', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:d1:b6', 'vm-uuid': 'e32508e3-e3b9-4e61-a988-8a3e0ada7848'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.381 2 INFO os_vif [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:d1:b6,bridge_name='br-int',has_traffic_filtering=True,id=01cd52b4-b38a-475e-81eb-435d4253cc58,network=Network(2ebd38c9-b7bf-496a-bf5d-d126c7d6c970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd52b4-b3')#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.399 2 DEBUG nova.objects.instance [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'numa_topology' on Instance uuid e32508e3-e3b9-4e61-a988-8a3e0ada7848 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:23 np0005466031 kernel: tap01cd52b4-b3: entered promiscuous mode
Oct  2 09:05:23 np0005466031 NetworkManager[44907]: <info>  [1759410323.4664] manager: (tap01cd52b4-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/337)
Oct  2 09:05:23 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:23Z|00744|binding|INFO|Claiming lport 01cd52b4-b38a-475e-81eb-435d4253cc58 for this chassis.
Oct  2 09:05:23 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:23Z|00745|binding|INFO|01cd52b4-b38a-475e-81eb-435d4253cc58: Claiming fa:16:3e:ab:d1:b6 10.100.0.3
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.474 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:d1:b6 10.100.0.3'], port_security=['fa:16:3e:ab:d1:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e32508e3-e3b9-4e61-a988-8a3e0ada7848', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2f43cb28-dfdc-48d9-8ac7-f9abc25cb786', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a7d4888-1040-4020-a961-84a13219efad, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=01cd52b4-b38a-475e-81eb-435d4253cc58) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.475 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 01cd52b4-b38a-475e-81eb-435d4253cc58 in datapath 2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 bound to our chassis#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.477 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ebd38c9-b7bf-496a-bf5d-d126c7d6c970#033[00m
Oct  2 09:05:23 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:23Z|00746|binding|INFO|Setting lport 01cd52b4-b38a-475e-81eb-435d4253cc58 ovn-installed in OVS
Oct  2 09:05:23 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:23Z|00747|binding|INFO|Setting lport 01cd52b4-b38a-475e-81eb-435d4253cc58 up in Southbound
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.488 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0e75bf19-945d-4865-8f31-bcf6242cf910]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.489 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2ebd38c9-b1 in ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.491 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2ebd38c9-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.491 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[707404ce-394b-440f-9d04-d41057cbf3c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.492 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4013de78-5a7a-4205-8b65-e0cf9b4ce298]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 systemd-udevd[317080]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.504 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[4857a3a8-f8f8-4f8b-acbf-e68f2ebc1733]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 systemd-machined[192227]: New machine qemu-85-instance-000000b9.
Oct  2 09:05:23 np0005466031 NetworkManager[44907]: <info>  [1759410323.5153] device (tap01cd52b4-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:05:23 np0005466031 NetworkManager[44907]: <info>  [1759410323.5161] device (tap01cd52b4-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.518 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d0965d55-8666-400c-834f-2101ad31e6fe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 systemd[1]: Started Virtual Machine qemu-85-instance-000000b9.
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.551 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3405ff-2cbe-43e6-896b-2559f2fa74d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.556 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f062f2-c5fb-4e64-8497-db1f39139cf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 NetworkManager[44907]: <info>  [1759410323.5579] manager: (tap2ebd38c9-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/338)
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.587 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[005dc014-b58e-40ca-ae29-1c53076f6df7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.592 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[69151522-67be-40f4-ade1-3ea48538e4bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 NetworkManager[44907]: <info>  [1759410323.6158] device (tap2ebd38c9-b0): carrier: link connected
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.624 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[4bea1ea9-bf2e-4b22-865c-a14aeb66f9f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.647 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6c0d7d8b-0f4b-47ea-bcd6-f4341f350465]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ebd38c9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:5b:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817920, 'reachable_time': 44952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317111, 'error': None, 'target': 'ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.672 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e0e1ce-8fb6-4fc4-b77c-2fe57d57576e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:5b49'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817920, 'tstamp': 817920}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317112, 'error': None, 'target': 'ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.690 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8f631bb4-c568-4580-830a-185a25ee0616]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ebd38c9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:5b:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817920, 'reachable_time': 44952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317113, 'error': None, 'target': 'ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.720 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b829de04-471f-4f1e-aa9f-6feb7c398c92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.782 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[130d5554-16a5-4016-af44-011de714f7ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.784 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ebd38c9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.784 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.784 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ebd38c9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:23 np0005466031 NetworkManager[44907]: <info>  [1759410323.7870] manager: (tap2ebd38c9-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Oct  2 09:05:23 np0005466031 kernel: tap2ebd38c9-b0: entered promiscuous mode
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.792 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ebd38c9-b0, col_values=(('external_ids', {'iface-id': 'f6b572e4-961b-4dad-8089-6d3ad0927ecc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:23 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:23Z|00748|binding|INFO|Releasing lport f6b572e4-961b-4dad-8089-6d3ad0927ecc from this chassis (sb_readonly=0)
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.796 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2ebd38c9-b7bf-496a-bf5d-d126c7d6c970.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2ebd38c9-b7bf-496a-bf5d-d126c7d6c970.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.797 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[098716df-57d3-4c40-a400-089b899a37e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.798 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/2ebd38c9-b7bf-496a-bf5d-d126c7d6c970.pid.haproxy
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 2ebd38c9-b7bf-496a-bf5d-d126c7d6c970
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  2 09:05:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:23.798 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'env', 'PROCESS_TAG=haproxy-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2ebd38c9-b7bf-496a-bf5d-d126c7d6c970.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.984 2 DEBUG nova.compute.manager [req-4665cfda-1805-4339-a5db-60c7762fb8b2 req-13154414-d49a-4ef9-bd1c-bc31561164c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received event network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.985 2 DEBUG oslo_concurrency.lockutils [req-4665cfda-1805-4339-a5db-60c7762fb8b2 req-13154414-d49a-4ef9-bd1c-bc31561164c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.985 2 DEBUG oslo_concurrency.lockutils [req-4665cfda-1805-4339-a5db-60c7762fb8b2 req-13154414-d49a-4ef9-bd1c-bc31561164c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.986 2 DEBUG oslo_concurrency.lockutils [req-4665cfda-1805-4339-a5db-60c7762fb8b2 req-13154414-d49a-4ef9-bd1c-bc31561164c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.986 2 DEBUG nova.compute.manager [req-4665cfda-1805-4339-a5db-60c7762fb8b2 req-13154414-d49a-4ef9-bd1c-bc31561164c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] No waiting events found dispatching network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:05:23 np0005466031 nova_compute[235803]: 2025-10-02 13:05:23.986 2 WARNING nova.compute.manager [req-4665cfda-1805-4339-a5db-60c7762fb8b2 req-13154414-d49a-4ef9-bd1c-bc31561164c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received unexpected event network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 for instance with vm_state suspended and task_state resuming.
Oct  2 09:05:24 np0005466031 podman[317147]: 2025-10-02 13:05:24.162465074 +0000 UTC m=+0.063980755 container create ed330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 09:05:24 np0005466031 systemd[1]: Started libpod-conmon-ed330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf.scope.
Oct  2 09:05:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:24.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:24 np0005466031 podman[317147]: 2025-10-02 13:05:24.119790094 +0000 UTC m=+0.021305795 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:05:24 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:05:24 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf030704fcb38102009fcedc07816c16e5e0f50a8ffee5c2925bb0fea6f6fd5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:05:24 np0005466031 podman[317147]: 2025-10-02 13:05:24.252445296 +0000 UTC m=+0.153961017 container init ed330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 09:05:24 np0005466031 podman[317147]: 2025-10-02 13:05:24.259227941 +0000 UTC m=+0.160743622 container start ed330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 09:05:24 np0005466031 nova_compute[235803]: 2025-10-02 13:05:24.269 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:05:24 np0005466031 nova_compute[235803]: 2025-10-02 13:05:24.270 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:05:24 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[317162]: [NOTICE]   (317166) : New worker (317168) forked
Oct  2 09:05:24 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[317162]: [NOTICE]   (317166) : Loading success.
Oct  2 09:05:24 np0005466031 nova_compute[235803]: 2025-10-02 13:05:24.291 2 DEBUG nova.compute.manager [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:05:24 np0005466031 nova_compute[235803]: 2025-10-02 13:05:24.359 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:05:24 np0005466031 nova_compute[235803]: 2025-10-02 13:05:24.360 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:05:24 np0005466031 nova_compute[235803]: 2025-10-02 13:05:24.372 2 DEBUG nova.virt.hardware [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:05:24 np0005466031 nova_compute[235803]: 2025-10-02 13:05:24.372 2 INFO nova.compute.claims [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Claim successful on node compute-2.ctlplane.example.com
Oct  2 09:05:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:24.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:24 np0005466031 nova_compute[235803]: 2025-10-02 13:05:24.506 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:05:24 np0005466031 nova_compute[235803]: 2025-10-02 13:05:24.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:05:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:24 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3473546659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:24 np0005466031 nova_compute[235803]: 2025-10-02 13:05:24.951 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:05:24 np0005466031 nova_compute[235803]: 2025-10-02 13:05:24.957 2 DEBUG nova.compute.provider_tree [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:05:24 np0005466031 nova_compute[235803]: 2025-10-02 13:05:24.982 2 DEBUG nova.scheduler.client.report [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.008 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.008 2 DEBUG nova.compute.manager [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.065 2 DEBUG nova.compute.manager [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.065 2 DEBUG nova.network.neutron [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.083 2 INFO nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.102 2 DEBUG nova.compute.manager [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.130 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for e32508e3-e3b9-4e61-a988-8a3e0ada7848 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.130 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410325.1297088, e32508e3-e3b9-4e61-a988-8a3e0ada7848 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.130 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] VM Started (Lifecycle Event)
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.166 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.169 2 DEBUG nova.compute.manager [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.169 2 DEBUG nova.objects.instance [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e32508e3-e3b9-4e61-a988-8a3e0ada7848 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.174 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.239 2 INFO nova.virt.libvirt.driver [-] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Instance running successfully.
Oct  2 09:05:25 np0005466031 virtqemud[235323]: argument unsupported: QEMU guest agent is not configured
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.241 2 DEBUG nova.virt.libvirt.guest [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.242 2 DEBUG nova.compute.manager [None req-502134bb-e5e8-4568-a1ce-b9051dca2d07 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.252 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] During sync_power_state the instance has a pending task (resuming). Skip.
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.253 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410325.1329834, e32508e3-e3b9-4e61-a988-8a3e0ada7848 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.253 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] VM Resumed (Lifecycle Event)
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.284 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.286 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.302 2 DEBUG nova.compute.manager [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.304 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.304 2 INFO nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Creating image(s)
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.343 2 DEBUG nova.storage.rbd_utils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.375 2 DEBUG nova.storage.rbd_utils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.401 2 DEBUG nova.storage.rbd_utils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.404 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.442 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] During sync_power_state the instance has a pending task (resuming). Skip.
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.485 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.485 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.486 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.486 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.512 2 DEBUG nova.storage.rbd_utils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.516 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:05:25 np0005466031 nova_compute[235803]: 2025-10-02 13:05:25.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:05:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:25.873 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:05:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:25.874 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:05:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:25.875 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.055 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.119 2 DEBUG nova.storage.rbd_utils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] resizing rbd image 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.161 2 DEBUG nova.compute.manager [req-6727b3e0-b559-4db3-bd28-1c38c2f0ccf1 req-0ad279fa-c154-4f02-9ba0-a07a49850859 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received event network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.163 2 DEBUG oslo_concurrency.lockutils [req-6727b3e0-b559-4db3-bd28-1c38c2f0ccf1 req-0ad279fa-c154-4f02-9ba0-a07a49850859 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.163 2 DEBUG oslo_concurrency.lockutils [req-6727b3e0-b559-4db3-bd28-1c38c2f0ccf1 req-0ad279fa-c154-4f02-9ba0-a07a49850859 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.164 2 DEBUG oslo_concurrency.lockutils [req-6727b3e0-b559-4db3-bd28-1c38c2f0ccf1 req-0ad279fa-c154-4f02-9ba0-a07a49850859 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.164 2 DEBUG nova.compute.manager [req-6727b3e0-b559-4db3-bd28-1c38c2f0ccf1 req-0ad279fa-c154-4f02-9ba0-a07a49850859 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] No waiting events found dispatching network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.164 2 WARNING nova.compute.manager [req-6727b3e0-b559-4db3-bd28-1c38c2f0ccf1 req-0ad279fa-c154-4f02-9ba0-a07a49850859 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received unexpected event network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 for instance with vm_state active and task_state None.
Oct  2 09:05:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:26.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:26.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.622 2 DEBUG nova.network.neutron [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Successfully created port: 93d37651-cbe2-402a-80a0-36a875bc866a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.632 2 DEBUG nova.objects.instance [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lazy-loading 'migration_context' on Instance uuid 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.652 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.653 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Ensure instance console log exists: /var/lib/nova/instances/6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.653 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.653 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:26 np0005466031 nova_compute[235803]: 2025-10-02 13:05:26.654 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:27 np0005466031 nova_compute[235803]: 2025-10-02 13:05:27.116 2 INFO nova.compute.manager [None req-a3ce0b9c-e165-4adb-8b5d-852323f22a75 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Get console output#033[00m
Oct  2 09:05:27 np0005466031 nova_compute[235803]: 2025-10-02 13:05:27.126 18228 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:05:27 np0005466031 nova_compute[235803]: 2025-10-02 13:05:27.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005466031 nova_compute[235803]: 2025-10-02 13:05:27.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.134 2 DEBUG nova.network.neutron [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Successfully updated port: 93d37651-cbe2-402a-80a0-36a875bc866a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.172 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "refresh_cache-6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.172 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquired lock "refresh_cache-6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.172 2 DEBUG nova.network.neutron [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:05:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:28.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.347 2 DEBUG nova.compute.manager [req-04fe2aca-9078-47e6-b700-a48e8aa35d31 req-a2e2c143-6549-4b6b-a7ed-39f0d2958fa5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Received event network-changed-93d37651-cbe2-402a-80a0-36a875bc866a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.347 2 DEBUG nova.compute.manager [req-04fe2aca-9078-47e6-b700-a48e8aa35d31 req-a2e2c143-6549-4b6b-a7ed-39f0d2958fa5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Refreshing instance network info cache due to event network-changed-93d37651-cbe2-402a-80a0-36a875bc866a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.348 2 DEBUG oslo_concurrency.lockutils [req-04fe2aca-9078-47e6-b700-a48e8aa35d31 req-a2e2c143-6549-4b6b-a7ed-39f0d2958fa5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:28.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.424 2 DEBUG nova.network.neutron [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.818 2 DEBUG oslo_concurrency.lockutils [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.818 2 DEBUG oslo_concurrency.lockutils [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.819 2 DEBUG oslo_concurrency.lockutils [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.819 2 DEBUG oslo_concurrency.lockutils [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.819 2 DEBUG oslo_concurrency.lockutils [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.820 2 INFO nova.compute.manager [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Terminating instance#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.821 2 DEBUG nova.compute.manager [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:05:28 np0005466031 kernel: tap01cd52b4-b3 (unregistering): left promiscuous mode
Oct  2 09:05:28 np0005466031 NetworkManager[44907]: <info>  [1759410328.8720] device (tap01cd52b4-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:05:28 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:28Z|00749|binding|INFO|Releasing lport 01cd52b4-b38a-475e-81eb-435d4253cc58 from this chassis (sb_readonly=0)
Oct  2 09:05:28 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:28Z|00750|binding|INFO|Setting lport 01cd52b4-b38a-475e-81eb-435d4253cc58 down in Southbound
Oct  2 09:05:28 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:28Z|00751|binding|INFO|Removing iface tap01cd52b4-b3 ovn-installed in OVS
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:28.887 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:d1:b6 10.100.0.3'], port_security=['fa:16:3e:ab:d1:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e32508e3-e3b9-4e61-a988-8a3e0ada7848', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2f43cb28-dfdc-48d9-8ac7-f9abc25cb786', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a7d4888-1040-4020-a961-84a13219efad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=01cd52b4-b38a-475e-81eb-435d4253cc58) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:28.888 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 01cd52b4-b38a-475e-81eb-435d4253cc58 in datapath 2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 unbound from our chassis#033[00m
Oct  2 09:05:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:28.890 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:05:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:28.891 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a69d30ba-662a-481b-a67e-5ca8b96f79cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:28 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:28.892 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 namespace which is not needed anymore#033[00m
Oct  2 09:05:28 np0005466031 nova_compute[235803]: 2025-10-02 13:05:28.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:28 np0005466031 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b9.scope: Deactivated successfully.
Oct  2 09:05:28 np0005466031 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b9.scope: Consumed 1.681s CPU time.
Oct  2 09:05:28 np0005466031 systemd-machined[192227]: Machine qemu-85-instance-000000b9 terminated.
Oct  2 09:05:29 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[317162]: [NOTICE]   (317166) : haproxy version is 2.8.14-c23fe91
Oct  2 09:05:29 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[317162]: [NOTICE]   (317166) : path to executable is /usr/sbin/haproxy
Oct  2 09:05:29 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[317162]: [WARNING]  (317166) : Exiting Master process...
Oct  2 09:05:29 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[317162]: [ALERT]    (317166) : Current worker (317168) exited with code 143 (Terminated)
Oct  2 09:05:29 np0005466031 neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970[317162]: [WARNING]  (317166) : All workers exited. Exiting... (0)
Oct  2 09:05:29 np0005466031 systemd[1]: libpod-ed330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf.scope: Deactivated successfully.
Oct  2 09:05:29 np0005466031 podman[317433]: 2025-10-02 13:05:29.035161693 +0000 UTC m=+0.053231634 container died ed330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.055 2 INFO nova.virt.libvirt.driver [-] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Instance destroyed successfully.#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.056 2 DEBUG nova.objects.instance [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid e32508e3-e3b9-4e61-a988-8a3e0ada7848 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:29 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf-userdata-shm.mount: Deactivated successfully.
Oct  2 09:05:29 np0005466031 systemd[1]: var-lib-containers-storage-overlay-eaf030704fcb38102009fcedc07816c16e5e0f50a8ffee5c2925bb0fea6f6fd5-merged.mount: Deactivated successfully.
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.086 2 DEBUG nova.virt.libvirt.vif [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-772464044',display_name='tempest-TestNetworkAdvancedServerOps-server-772464044',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-772464044',id=185,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNqMbsXEb0nlmiCBUDGaQ25+lSABGmHZAahFbd9qUbZ5D41UhMPo5l72icdDUbZWbwyUm0mAcIVROdlXx9RsskHHInH4DzmIdBmWSx+8TV711AVo6/b7Vb80vU0lzfQ5fg==',key_name='tempest-TestNetworkAdvancedServerOps-1650343482',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:04:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-0kdz48mq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:05:25Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=e32508e3-e3b9-4e61-a988-8a3e0ada7848,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.087 2 DEBUG nova.network.os_vif_util [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.088 2 DEBUG nova.network.os_vif_util [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:d1:b6,bridge_name='br-int',has_traffic_filtering=True,id=01cd52b4-b38a-475e-81eb-435d4253cc58,network=Network(2ebd38c9-b7bf-496a-bf5d-d126c7d6c970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd52b4-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.088 2 DEBUG os_vif [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:d1:b6,bridge_name='br-int',has_traffic_filtering=True,id=01cd52b4-b38a-475e-81eb-435d4253cc58,network=Network(2ebd38c9-b7bf-496a-bf5d-d126c7d6c970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd52b4-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01cd52b4-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:29 np0005466031 podman[317433]: 2025-10-02 13:05:29.092366902 +0000 UTC m=+0.110436843 container cleanup ed330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.098 2 INFO os_vif [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:d1:b6,bridge_name='br-int',has_traffic_filtering=True,id=01cd52b4-b38a-475e-81eb-435d4253cc58,network=Network(2ebd38c9-b7bf-496a-bf5d-d126c7d6c970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd52b4-b3')#033[00m
Oct  2 09:05:29 np0005466031 systemd[1]: libpod-conmon-ed330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf.scope: Deactivated successfully.
Oct  2 09:05:29 np0005466031 podman[317472]: 2025-10-02 13:05:29.166369874 +0000 UTC m=+0.048327164 container remove ed330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:05:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:29.172 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ef569b3e-dac4-411b-a4bb-b89756f85f0f]: (4, ('Thu Oct  2 01:05:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 (ed330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf)\ned330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf\nThu Oct  2 01:05:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 (ed330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf)\ned330a656677b0c3342e314482930cac8d2573d592db9a6bc0008462f1c145bf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:29.173 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[800501af-c22c-44c6-9e70-ce4539818d33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:29.174 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ebd38c9-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:29 np0005466031 kernel: tap2ebd38c9-b0: left promiscuous mode
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:29.194 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5146261a-e8c8-4bf9-b87d-68fc5a0d57ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:29.231 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[801fcafd-27b4-4b90-85a5-db4af7e3c80c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:29.232 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6e48b54e-1517-4921-b721-302ed45d455b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:29.249 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5da9d11c-c68a-464a-bdf7-51a3984d2aee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817913, 'reachable_time': 16168, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317505, 'error': None, 'target': 'ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:29 np0005466031 systemd[1]: run-netns-ovnmeta\x2d2ebd38c9\x2db7bf\x2d496a\x2dbf5d\x2dd126c7d6c970.mount: Deactivated successfully.
Oct  2 09:05:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:29.253 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2ebd38c9-b7bf-496a-bf5d-d126c7d6c970 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:05:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:29.253 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[79e548e7-00f8-4615-866c-55a0dfcdcf6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.890 2 INFO nova.virt.libvirt.driver [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Deleting instance files /var/lib/nova/instances/e32508e3-e3b9-4e61-a988-8a3e0ada7848_del#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.890 2 INFO nova.virt.libvirt.driver [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Deletion of /var/lib/nova/instances/e32508e3-e3b9-4e61-a988-8a3e0ada7848_del complete#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.934 2 DEBUG nova.compute.manager [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received event network-changed-01cd52b4-b38a-475e-81eb-435d4253cc58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.935 2 DEBUG nova.compute.manager [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Refreshing instance network info cache due to event network-changed-01cd52b4-b38a-475e-81eb-435d4253cc58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.935 2 DEBUG oslo_concurrency.lockutils [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.936 2 DEBUG oslo_concurrency.lockutils [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.936 2 DEBUG nova.network.neutron [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Refreshing network info cache for port 01cd52b4-b38a-475e-81eb-435d4253cc58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.957 2 DEBUG nova.network.neutron [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Updating instance_info_cache with network_info: [{"id": "93d37651-cbe2-402a-80a0-36a875bc866a", "address": "fa:16:3e:38:86:94", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93d37651-cb", "ovs_interfaceid": "93d37651-cbe2-402a-80a0-36a875bc866a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.997 2 INFO nova.compute.manager [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Took 1.18 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.998 2 DEBUG oslo.service.loopingcall [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.998 2 DEBUG nova.compute.manager [-] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:05:29 np0005466031 nova_compute[235803]: 2025-10-02 13:05:29.998 2 DEBUG nova.network.neutron [-] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.012 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Releasing lock "refresh_cache-6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.013 2 DEBUG nova.compute.manager [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Instance network_info: |[{"id": "93d37651-cbe2-402a-80a0-36a875bc866a", "address": "fa:16:3e:38:86:94", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93d37651-cb", "ovs_interfaceid": "93d37651-cbe2-402a-80a0-36a875bc866a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.013 2 DEBUG oslo_concurrency.lockutils [req-04fe2aca-9078-47e6-b700-a48e8aa35d31 req-a2e2c143-6549-4b6b-a7ed-39f0d2958fa5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.014 2 DEBUG nova.network.neutron [req-04fe2aca-9078-47e6-b700-a48e8aa35d31 req-a2e2c143-6549-4b6b-a7ed-39f0d2958fa5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Refreshing network info cache for port 93d37651-cbe2-402a-80a0-36a875bc866a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.016 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Start _get_guest_xml network_info=[{"id": "93d37651-cbe2-402a-80a0-36a875bc866a", "address": "fa:16:3e:38:86:94", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93d37651-cb", "ovs_interfaceid": "93d37651-cbe2-402a-80a0-36a875bc866a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.020 2 WARNING nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.031 2 DEBUG nova.virt.libvirt.host [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.031 2 DEBUG nova.virt.libvirt.host [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.036 2 DEBUG nova.virt.libvirt.host [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.036 2 DEBUG nova.virt.libvirt.host [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.037 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.038 2 DEBUG nova.virt.hardware [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.038 2 DEBUG nova.virt.hardware [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.038 2 DEBUG nova.virt.hardware [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.039 2 DEBUG nova.virt.hardware [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.039 2 DEBUG nova.virt.hardware [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.039 2 DEBUG nova.virt.hardware [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.040 2 DEBUG nova.virt.hardware [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.040 2 DEBUG nova.virt.hardware [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.040 2 DEBUG nova.virt.hardware [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.040 2 DEBUG nova.virt.hardware [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.041 2 DEBUG nova.virt.hardware [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.043 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:30.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:30.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:05:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1962952617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.464 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.486 2 DEBUG nova.storage.rbd_utils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.490 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.715 2 DEBUG nova.compute.manager [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received event network-vif-unplugged-01cd52b4-b38a-475e-81eb-435d4253cc58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.715 2 DEBUG oslo_concurrency.lockutils [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.715 2 DEBUG oslo_concurrency.lockutils [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.716 2 DEBUG oslo_concurrency.lockutils [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.716 2 DEBUG nova.compute.manager [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] No waiting events found dispatching network-vif-unplugged-01cd52b4-b38a-475e-81eb-435d4253cc58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.716 2 DEBUG nova.compute.manager [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received event network-vif-unplugged-01cd52b4-b38a-475e-81eb-435d4253cc58 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.716 2 DEBUG nova.compute.manager [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received event network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.716 2 DEBUG oslo_concurrency.lockutils [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.717 2 DEBUG oslo_concurrency.lockutils [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.717 2 DEBUG oslo_concurrency.lockutils [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.717 2 DEBUG nova.compute.manager [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] No waiting events found dispatching network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.717 2 WARNING nova.compute.manager [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received unexpected event network-vif-plugged-01cd52b4-b38a-475e-81eb-435d4253cc58 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:05:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1386256853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:05:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:30.900 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.913 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.915 2 DEBUG nova.virt.libvirt.vif [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-463662906',display_name='tempest-TestServerMultinode-server-463662906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-463662906',id=189,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19365f54974d4109ae80bc13ac9ba55a',ramdisk_id='',reservation_id='r-gnjgw18t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2060715482',owner_user_name='tempest-TestServerMultinode-2060715482-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:05:25Z,user_data=None,user_id='7fb7e45069d34870bc5f4fa70bd8c6de',uuid=6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93d37651-cbe2-402a-80a0-36a875bc866a", "address": "fa:16:3e:38:86:94", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93d37651-cb", "ovs_interfaceid": "93d37651-cbe2-402a-80a0-36a875bc866a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.915 2 DEBUG nova.network.os_vif_util [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converting VIF {"id": "93d37651-cbe2-402a-80a0-36a875bc866a", "address": "fa:16:3e:38:86:94", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93d37651-cb", "ovs_interfaceid": "93d37651-cbe2-402a-80a0-36a875bc866a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.916 2 DEBUG nova.network.os_vif_util [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:86:94,bridge_name='br-int',has_traffic_filtering=True,id=93d37651-cbe2-402a-80a0-36a875bc866a,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93d37651-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.917 2 DEBUG nova.objects.instance [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lazy-loading 'pci_devices' on Instance uuid 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.935 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  <uuid>6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e</uuid>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  <name>instance-000000bd</name>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestServerMultinode-server-463662906</nova:name>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:05:30</nova:creationTime>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <nova:user uuid="7fb7e45069d34870bc5f4fa70bd8c6de">tempest-TestServerMultinode-2060715482-project-admin</nova:user>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <nova:project uuid="19365f54974d4109ae80bc13ac9ba55a">tempest-TestServerMultinode-2060715482</nova:project>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <nova:port uuid="93d37651-cbe2-402a-80a0-36a875bc866a">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <entry name="serial">6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e</entry>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <entry name="uuid">6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e</entry>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk.config">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:38:86:94"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <target dev="tap93d37651-cb"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e/console.log" append="off"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:05:30 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:05:30 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:05:30 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:05:30 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.937 2 DEBUG nova.compute.manager [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Preparing to wait for external event network-vif-plugged-93d37651-cbe2-402a-80a0-36a875bc866a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.937 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.938 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.938 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.939 2 DEBUG nova.virt.libvirt.vif [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-463662906',display_name='tempest-TestServerMultinode-server-463662906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-463662906',id=189,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19365f54974d4109ae80bc13ac9ba55a',ramdisk_id='',reservation_id='r-gnjgw18t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2060715482',owner_user_name='tempest-TestServerMultinode-2060715482-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:05:25Z,user_data=None,user_id='7fb7e45069d34870bc5f4fa70bd8c6de',uuid=6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93d37651-cbe2-402a-80a0-36a875bc866a", "address": "fa:16:3e:38:86:94", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93d37651-cb", "ovs_interfaceid": "93d37651-cbe2-402a-80a0-36a875bc866a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.939 2 DEBUG nova.network.os_vif_util [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converting VIF {"id": "93d37651-cbe2-402a-80a0-36a875bc866a", "address": "fa:16:3e:38:86:94", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93d37651-cb", "ovs_interfaceid": "93d37651-cbe2-402a-80a0-36a875bc866a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.940 2 DEBUG nova.network.os_vif_util [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:86:94,bridge_name='br-int',has_traffic_filtering=True,id=93d37651-cbe2-402a-80a0-36a875bc866a,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93d37651-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.941 2 DEBUG os_vif [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:86:94,bridge_name='br-int',has_traffic_filtering=True,id=93d37651-cbe2-402a-80a0-36a875bc866a,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93d37651-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.942 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.942 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.947 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93d37651-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.947 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap93d37651-cb, col_values=(('external_ids', {'iface-id': '93d37651-cbe2-402a-80a0-36a875bc866a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:86:94', 'vm-uuid': '6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:30 np0005466031 NetworkManager[44907]: <info>  [1759410330.9502] manager: (tap93d37651-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:30 np0005466031 nova_compute[235803]: 2025-10-02 13:05:30.956 2 INFO os_vif [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:86:94,bridge_name='br-int',has_traffic_filtering=True,id=93d37651-cbe2-402a-80a0-36a875bc866a,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93d37651-cb')#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.016 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.017 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.018 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] No VIF found with MAC fa:16:3e:38:86:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.019 2 INFO nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Using config drive#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.057 2 DEBUG nova.storage.rbd_utils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e354 e354: 3 total, 3 up, 3 in
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.344 2 DEBUG nova.network.neutron [-] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.408 2 INFO nova.compute.manager [-] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Took 1.41 seconds to deallocate network for instance.#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.463 2 DEBUG oslo_concurrency.lockutils [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.464 2 DEBUG oslo_concurrency.lockutils [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.582 2 DEBUG oslo_concurrency.processutils [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.719 2 INFO nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Creating config drive at /var/lib/nova/instances/6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e/disk.config#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.724 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2w1z2wjo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.861 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2w1z2wjo" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.901 2 DEBUG nova.storage.rbd_utils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:31 np0005466031 nova_compute[235803]: 2025-10-02 13:05:31.904 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e/disk.config 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1531540515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.022 2 DEBUG oslo_concurrency.processutils [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.027 2 DEBUG nova.compute.provider_tree [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.049 2 DEBUG nova.scheduler.client.report [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.079 2 DEBUG oslo_concurrency.lockutils [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.109 2 INFO nova.scheduler.client.report [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Deleted allocations for instance e32508e3-e3b9-4e61-a988-8a3e0ada7848#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.117 2 DEBUG oslo_concurrency.processutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e/disk.config 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.118 2 INFO nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Deleting local config drive /var/lib/nova/instances/6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e/disk.config because it was imported into RBD.#033[00m
Oct  2 09:05:32 np0005466031 kernel: tap93d37651-cb: entered promiscuous mode
Oct  2 09:05:32 np0005466031 NetworkManager[44907]: <info>  [1759410332.1803] manager: (tap93d37651-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/341)
Oct  2 09:05:32 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:32Z|00752|binding|INFO|Claiming lport 93d37651-cbe2-402a-80a0-36a875bc866a for this chassis.
Oct  2 09:05:32 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:32Z|00753|binding|INFO|93d37651-cbe2-402a-80a0-36a875bc866a: Claiming fa:16:3e:38:86:94 10.100.0.3
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.189 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:86:94 10.100.0.3'], port_security=['fa:16:3e:38:86:94 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962339a8-ad45-401e-ae58-50cd40858566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19365f54974d4109ae80bc13ac9ba55a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67af92b3-63f6-4f5d-8022-4679fd3c3d0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91847efc-0e01-4780-b433-994cc6662f15, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=93d37651-cbe2-402a-80a0-36a875bc866a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.190 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 93d37651-cbe2-402a-80a0-36a875bc866a in datapath 962339a8-ad45-401e-ae58-50cd40858566 bound to our chassis#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.192 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 962339a8-ad45-401e-ae58-50cd40858566#033[00m
Oct  2 09:05:32 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:32Z|00754|binding|INFO|Setting lport 93d37651-cbe2-402a-80a0-36a875bc866a ovn-installed in OVS
Oct  2 09:05:32 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:32Z|00755|binding|INFO|Setting lport 93d37651-cbe2-402a-80a0-36a875bc866a up in Southbound
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.204 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd8c9a1-27cf-4a8d-bc45-1b91c107d769]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.205 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap962339a8-a1 in ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.207 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap962339a8-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.208 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6429f9d4-6fae-493d-bee0-a1396642da04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.208 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[33bac185-ae1e-4865-aa5f-b07248b42bd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:32.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.220 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[90909ce6-6c42-45c9-a124-b9e372dde555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 systemd-machined[192227]: New machine qemu-86-instance-000000bd.
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.227 2 DEBUG oslo_concurrency.lockutils [None req-01e2d679-7ac8-40ff-a88c-98e2fe814f67 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "e32508e3-e3b9-4e61-a988-8a3e0ada7848" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.234 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d8cd0565-fd4d-4102-8aff-87b21e062250]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 systemd[1]: Started Virtual Machine qemu-86-instance-000000bd.
Oct  2 09:05:32 np0005466031 systemd-udevd[317722]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:05:32 np0005466031 NetworkManager[44907]: <info>  [1759410332.2655] device (tap93d37651-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:05:32 np0005466031 NetworkManager[44907]: <info>  [1759410332.2664] device (tap93d37651-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.266 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[3a09c536-4f4e-4e12-b90e-080c52b72d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 NetworkManager[44907]: <info>  [1759410332.2725] manager: (tap962339a8-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/342)
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.272 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5525dd-da32-4efa-9a97-12d21d488c0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.304 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[bb9faf8e-a69d-4184-bb7a-8438f5d33fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.308 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[3b35e3b0-888f-4083-af7a-004f41d1ac72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e355 e355: 3 total, 3 up, 3 in
Oct  2 09:05:32 np0005466031 NetworkManager[44907]: <info>  [1759410332.3333] device (tap962339a8-a0): carrier: link connected
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.342 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b99408d7-6ceb-454e-b11e-264a88892466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.360 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d807d2-893f-4483-96de-0df4a5310c8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap962339a8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:f8:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 818792, 'reachable_time': 28532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317750, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.378 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c80ccba4-ad66-4d0b-a416-39cd1b2ec4ae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:f8da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 818792, 'tstamp': 818792}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317751, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.398 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5df7bbe6-c809-4ab1-ba27-809e63ab6226]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap962339a8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:f8:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 818792, 'reachable_time': 28532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317752, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:32.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.430 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6d952f04-f1b6-4412-9708-38d2e164be85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.493 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1baf7ae8-ac86-4eaf-8abc-3f69ea7a460e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.494 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap962339a8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.495 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.495 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap962339a8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:32 np0005466031 NetworkManager[44907]: <info>  [1759410332.4976] manager: (tap962339a8-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.569 2 DEBUG nova.network.neutron [req-04fe2aca-9078-47e6-b700-a48e8aa35d31 req-a2e2c143-6549-4b6b-a7ed-39f0d2958fa5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Updated VIF entry in instance network info cache for port 93d37651-cbe2-402a-80a0-36a875bc866a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.570 2 DEBUG nova.network.neutron [req-04fe2aca-9078-47e6-b700-a48e8aa35d31 req-a2e2c143-6549-4b6b-a7ed-39f0d2958fa5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Updating instance_info_cache with network_info: [{"id": "93d37651-cbe2-402a-80a0-36a875bc866a", "address": "fa:16:3e:38:86:94", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93d37651-cb", "ovs_interfaceid": "93d37651-cbe2-402a-80a0-36a875bc866a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.609 2 DEBUG nova.compute.manager [req-d66a1697-0892-41b7-9777-3aba1efea559 req-a21b5498-35a2-4561-830e-d5174ecae45a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Received event network-vif-plugged-93d37651-cbe2-402a-80a0-36a875bc866a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.610 2 DEBUG oslo_concurrency.lockutils [req-d66a1697-0892-41b7-9777-3aba1efea559 req-a21b5498-35a2-4561-830e-d5174ecae45a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.610 2 DEBUG oslo_concurrency.lockutils [req-d66a1697-0892-41b7-9777-3aba1efea559 req-a21b5498-35a2-4561-830e-d5174ecae45a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.610 2 DEBUG oslo_concurrency.lockutils [req-d66a1697-0892-41b7-9777-3aba1efea559 req-a21b5498-35a2-4561-830e-d5174ecae45a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.610 2 DEBUG nova.compute.manager [req-d66a1697-0892-41b7-9777-3aba1efea559 req-a21b5498-35a2-4561-830e-d5174ecae45a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Processing event network-vif-plugged-93d37651-cbe2-402a-80a0-36a875bc866a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.615 2 DEBUG oslo_concurrency.lockutils [req-04fe2aca-9078-47e6-b700-a48e8aa35d31 req-a2e2c143-6549-4b6b-a7ed-39f0d2958fa5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:32 np0005466031 kernel: tap962339a8-a0: entered promiscuous mode
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.687 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap962339a8-a0, col_values=(('external_ids', {'iface-id': '95f6c57c-e568-4ed7-aa6a-02671a012e41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.691 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/962339a8-ad45-401e-ae58-50cd40858566.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/962339a8-ad45-401e-ae58-50cd40858566.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.692 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ff52ccf0-30c2-4de2-9c91-9679dd9e1cdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.693 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-962339a8-ad45-401e-ae58-50cd40858566
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/962339a8-ad45-401e-ae58-50cd40858566.pid.haproxy
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 962339a8-ad45-401e-ae58-50cd40858566
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:05:32 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:32.694 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'env', 'PROCESS_TAG=haproxy-962339a8-ad45-401e-ae58-50cd40858566', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/962339a8-ad45-401e-ae58-50cd40858566.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:05:32 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:32Z|00756|binding|INFO|Releasing lport 95f6c57c-e568-4ed7-aa6a-02671a012e41 from this chassis (sb_readonly=0)
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.763 2 DEBUG nova.network.neutron [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Updated VIF entry in instance network info cache for port 01cd52b4-b38a-475e-81eb-435d4253cc58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.764 2 DEBUG nova.network.neutron [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Updating instance_info_cache with network_info: [{"id": "01cd52b4-b38a-475e-81eb-435d4253cc58", "address": "fa:16:3e:ab:d1:b6", "network": {"id": "2ebd38c9-b7bf-496a-bf5d-d126c7d6c970", "bridge": "br-int", "label": "tempest-network-smoke--1381151343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd52b4-b3", "ovs_interfaceid": "01cd52b4-b38a-475e-81eb-435d4253cc58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.804 2 DEBUG oslo_concurrency.lockutils [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e32508e3-e3b9-4e61-a988-8a3e0ada7848" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:32 np0005466031 nova_compute[235803]: 2025-10-02 13:05:32.896 2 DEBUG nova.compute.manager [req-7dfb025f-6ec8-4b0d-8524-0a329c82240d req-d57193b2-94d4-49d4-ab66-89fabfae83f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Received event network-vif-deleted-01cd52b4-b38a-475e-81eb-435d4253cc58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:33 np0005466031 podman[317826]: 2025-10-02 13:05:33.062127157 +0000 UTC m=+0.056117037 container create 4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.083 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410333.082447, 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.084 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] VM Started (Lifecycle Event)#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.087 2 DEBUG nova.compute.manager [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.090 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.096 2 INFO nova.virt.libvirt.driver [-] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Instance spawned successfully.#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.097 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:05:33 np0005466031 systemd[1]: Started libpod-conmon-4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf.scope.
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.123 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:33 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:05:33 np0005466031 podman[317826]: 2025-10-02 13:05:33.031214507 +0000 UTC m=+0.025204407 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.129 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.130 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.131 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.131 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.132 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.132 2 DEBUG nova.virt.libvirt.driver [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:33 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9584ea48ea9afea42ea4aa02d965a955a45b70747d2c1766553400b7542c4998/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.137 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:05:33 np0005466031 podman[317826]: 2025-10-02 13:05:33.145922022 +0000 UTC m=+0.139911932 container init 4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:05:33 np0005466031 podman[317826]: 2025-10-02 13:05:33.152921203 +0000 UTC m=+0.146911083 container start 4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.170 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.171 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410333.083494, 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.171 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:05:33 np0005466031 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[317841]: [NOTICE]   (317845) : New worker (317847) forked
Oct  2 09:05:33 np0005466031 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[317841]: [NOTICE]   (317845) : Loading success.
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.195 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.198 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410333.089241, 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.199 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.212 2 INFO nova.compute.manager [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Took 7.91 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.212 2 DEBUG nova.compute.manager [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.225 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.229 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.253 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.279 2 INFO nova.compute.manager [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Took 8.94 seconds to build instance.#033[00m
Oct  2 09:05:33 np0005466031 nova_compute[235803]: 2025-10-02 13:05:33.300 2 DEBUG oslo_concurrency.lockutils [None req-1b85b588-c7f8-4b93-aef8-93c7103553fa 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e356 e356: 3 total, 3 up, 3 in
Oct  2 09:05:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:34.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:34.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:34 np0005466031 nova_compute[235803]: 2025-10-02 13:05:34.744 2 DEBUG nova.compute.manager [req-c9229e21-97c0-4617-a82d-0b13d4b2acce req-c0dc281a-37d3-419d-838d-99f810cd7167 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Received event network-vif-plugged-93d37651-cbe2-402a-80a0-36a875bc866a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:34 np0005466031 nova_compute[235803]: 2025-10-02 13:05:34.745 2 DEBUG oslo_concurrency.lockutils [req-c9229e21-97c0-4617-a82d-0b13d4b2acce req-c0dc281a-37d3-419d-838d-99f810cd7167 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:34 np0005466031 nova_compute[235803]: 2025-10-02 13:05:34.745 2 DEBUG oslo_concurrency.lockutils [req-c9229e21-97c0-4617-a82d-0b13d4b2acce req-c0dc281a-37d3-419d-838d-99f810cd7167 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:34 np0005466031 nova_compute[235803]: 2025-10-02 13:05:34.745 2 DEBUG oslo_concurrency.lockutils [req-c9229e21-97c0-4617-a82d-0b13d4b2acce req-c0dc281a-37d3-419d-838d-99f810cd7167 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:34 np0005466031 nova_compute[235803]: 2025-10-02 13:05:34.746 2 DEBUG nova.compute.manager [req-c9229e21-97c0-4617-a82d-0b13d4b2acce req-c0dc281a-37d3-419d-838d-99f810cd7167 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] No waiting events found dispatching network-vif-plugged-93d37651-cbe2-402a-80a0-36a875bc866a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:34 np0005466031 nova_compute[235803]: 2025-10-02 13:05:34.746 2 WARNING nova.compute.manager [req-c9229e21-97c0-4617-a82d-0b13d4b2acce req-c0dc281a-37d3-419d-838d-99f810cd7167 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Received unexpected event network-vif-plugged-93d37651-cbe2-402a-80a0-36a875bc866a for instance with vm_state active and task_state None.#033[00m
Oct  2 09:05:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:35 np0005466031 nova_compute[235803]: 2025-10-02 13:05:35.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:35 np0005466031 nova_compute[235803]: 2025-10-02 13:05:35.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:36.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:36.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:37 np0005466031 podman[317858]: 2025-10-02 13:05:37.633276341 +0000 UTC m=+0.060797443 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:05:37 np0005466031 podman[317859]: 2025-10-02 13:05:37.663560163 +0000 UTC m=+0.091287941 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:05:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:38.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:38.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e357 e357: 3 total, 3 up, 3 in
Oct  2 09:05:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.186 2 DEBUG oslo_concurrency.lockutils [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.188 2 DEBUG oslo_concurrency.lockutils [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.188 2 DEBUG oslo_concurrency.lockutils [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.188 2 DEBUG oslo_concurrency.lockutils [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.189 2 DEBUG oslo_concurrency.lockutils [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.190 2 INFO nova.compute.manager [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Terminating instance#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.191 2 DEBUG nova.compute.manager [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:05:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:40.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:40 np0005466031 kernel: tap93d37651-cb (unregistering): left promiscuous mode
Oct  2 09:05:40 np0005466031 NetworkManager[44907]: <info>  [1759410340.2782] device (tap93d37651-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:05:40 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:40Z|00757|binding|INFO|Releasing lport 93d37651-cbe2-402a-80a0-36a875bc866a from this chassis (sb_readonly=0)
Oct  2 09:05:40 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:40Z|00758|binding|INFO|Setting lport 93d37651-cbe2-402a-80a0-36a875bc866a down in Southbound
Oct  2 09:05:40 np0005466031 ovn_controller[132413]: 2025-10-02T13:05:40Z|00759|binding|INFO|Removing iface tap93d37651-cb ovn-installed in OVS
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.305 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:86:94 10.100.0.3'], port_security=['fa:16:3e:38:86:94 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962339a8-ad45-401e-ae58-50cd40858566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19365f54974d4109ae80bc13ac9ba55a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67af92b3-63f6-4f5d-8022-4679fd3c3d0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91847efc-0e01-4780-b433-994cc6662f15, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=93d37651-cbe2-402a-80a0-36a875bc866a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.306 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 93d37651-cbe2-402a-80a0-36a875bc866a in datapath 962339a8-ad45-401e-ae58-50cd40858566 unbound from our chassis#033[00m
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.308 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 962339a8-ad45-401e-ae58-50cd40858566, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.309 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b0920c11-4a21-4d12-8c5d-8c9b67371487]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.310 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 namespace which is not needed anymore#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:40 np0005466031 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000bd.scope: Deactivated successfully.
Oct  2 09:05:40 np0005466031 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000bd.scope: Consumed 8.032s CPU time.
Oct  2 09:05:40 np0005466031 systemd-machined[192227]: Machine qemu-86-instance-000000bd terminated.
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.426 2 INFO nova.virt.libvirt.driver [-] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Instance destroyed successfully.#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.427 2 DEBUG nova.objects.instance [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lazy-loading 'resources' on Instance uuid 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:40 np0005466031 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[317841]: [NOTICE]   (317845) : haproxy version is 2.8.14-c23fe91
Oct  2 09:05:40 np0005466031 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[317841]: [NOTICE]   (317845) : path to executable is /usr/sbin/haproxy
Oct  2 09:05:40 np0005466031 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[317841]: [WARNING]  (317845) : Exiting Master process...
Oct  2 09:05:40 np0005466031 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[317841]: [WARNING]  (317845) : Exiting Master process...
Oct  2 09:05:40 np0005466031 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[317841]: [ALERT]    (317845) : Current worker (317847) exited with code 143 (Terminated)
Oct  2 09:05:40 np0005466031 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[317841]: [WARNING]  (317845) : All workers exited. Exiting... (0)
Oct  2 09:05:40 np0005466031 systemd[1]: libpod-4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf.scope: Deactivated successfully.
Oct  2 09:05:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:40.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.443 2 DEBUG nova.virt.libvirt.vif [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-463662906',display_name='tempest-TestServerMultinode-server-463662906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-463662906',id=189,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:05:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='19365f54974d4109ae80bc13ac9ba55a',ramdisk_id='',reservation_id='r-gnjgw18t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-2060715482',owner_user_name='tempest-TestServerMultinode-2060715482-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:05:33Z,user_data=None,user_id='7fb7e45069d34870bc5f4fa70bd8c6de',uuid=6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "93d37651-cbe2-402a-80a0-36a875bc866a", "address": "fa:16:3e:38:86:94", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93d37651-cb", "ovs_interfaceid": "93d37651-cbe2-402a-80a0-36a875bc866a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:05:40 np0005466031 podman[317926]: 2025-10-02 13:05:40.444108376 +0000 UTC m=+0.046427549 container died 4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.444 2 DEBUG nova.network.os_vif_util [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converting VIF {"id": "93d37651-cbe2-402a-80a0-36a875bc866a", "address": "fa:16:3e:38:86:94", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93d37651-cb", "ovs_interfaceid": "93d37651-cbe2-402a-80a0-36a875bc866a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.445 2 DEBUG nova.network.os_vif_util [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:86:94,bridge_name='br-int',has_traffic_filtering=True,id=93d37651-cbe2-402a-80a0-36a875bc866a,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93d37651-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.445 2 DEBUG os_vif [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:86:94,bridge_name='br-int',has_traffic_filtering=True,id=93d37651-cbe2-402a-80a0-36a875bc866a,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93d37651-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.447 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93d37651-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.469 2 INFO os_vif [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:86:94,bridge_name='br-int',has_traffic_filtering=True,id=93d37651-cbe2-402a-80a0-36a875bc866a,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93d37651-cb')#033[00m
Oct  2 09:05:40 np0005466031 systemd[1]: var-lib-containers-storage-overlay-9584ea48ea9afea42ea4aa02d965a955a45b70747d2c1766553400b7542c4998-merged.mount: Deactivated successfully.
Oct  2 09:05:40 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf-userdata-shm.mount: Deactivated successfully.
Oct  2 09:05:40 np0005466031 podman[317926]: 2025-10-02 13:05:40.519465997 +0000 UTC m=+0.121785170 container cleanup 4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:05:40 np0005466031 systemd[1]: libpod-conmon-4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf.scope: Deactivated successfully.
Oct  2 09:05:40 np0005466031 podman[317984]: 2025-10-02 13:05:40.59072651 +0000 UTC m=+0.051827444 container remove 4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.595 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2f3b40-7372-4cb0-ad2d-74218c428eb0]: (4, ('Thu Oct  2 01:05:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 (4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf)\n4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf\nThu Oct  2 01:05:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 (4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf)\n4b527f73b40831d887fb0242ab9cc7d759c42bf24b7487d3448bb027b4c3e6cf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.597 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e10412ae-94c2-46df-ad63-c1ea365f9d92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.598 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap962339a8-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:40 np0005466031 kernel: tap962339a8-a0: left promiscuous mode
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.617 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4569407f-a2d3-4d85-8e2b-4a53056ad8f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.639 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[80f0c6e6-abab-4499-af15-a6644ff4501f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.640 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2301de-3adc-4091-bc43-d673061deb40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.653 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[71e1be17-766d-490e-88ad-a872a7977a64]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 818785, 'reachable_time': 27972, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317999, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.655 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:05:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:05:40.655 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[9ceea219-5813-4e10-a49f-ab9c5c0d6a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:40 np0005466031 systemd[1]: run-netns-ovnmeta\x2d962339a8\x2dad45\x2d401e\x2dae58\x2d50cd40858566.mount: Deactivated successfully.
Oct  2 09:05:40 np0005466031 nova_compute[235803]: 2025-10-02 13:05:40.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:41 np0005466031 nova_compute[235803]: 2025-10-02 13:05:41.652 2 DEBUG nova.compute.manager [req-10e6870c-d9e9-4504-9daa-f109488a8781 req-8dc94fc3-04ea-4c2c-836a-d5cb89e86fcc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Received event network-vif-unplugged-93d37651-cbe2-402a-80a0-36a875bc866a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:41 np0005466031 nova_compute[235803]: 2025-10-02 13:05:41.653 2 DEBUG oslo_concurrency.lockutils [req-10e6870c-d9e9-4504-9daa-f109488a8781 req-8dc94fc3-04ea-4c2c-836a-d5cb89e86fcc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:41 np0005466031 nova_compute[235803]: 2025-10-02 13:05:41.653 2 DEBUG oslo_concurrency.lockutils [req-10e6870c-d9e9-4504-9daa-f109488a8781 req-8dc94fc3-04ea-4c2c-836a-d5cb89e86fcc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:41 np0005466031 nova_compute[235803]: 2025-10-02 13:05:41.653 2 DEBUG oslo_concurrency.lockutils [req-10e6870c-d9e9-4504-9daa-f109488a8781 req-8dc94fc3-04ea-4c2c-836a-d5cb89e86fcc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:41 np0005466031 nova_compute[235803]: 2025-10-02 13:05:41.653 2 DEBUG nova.compute.manager [req-10e6870c-d9e9-4504-9daa-f109488a8781 req-8dc94fc3-04ea-4c2c-836a-d5cb89e86fcc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] No waiting events found dispatching network-vif-unplugged-93d37651-cbe2-402a-80a0-36a875bc866a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:41 np0005466031 nova_compute[235803]: 2025-10-02 13:05:41.653 2 DEBUG nova.compute.manager [req-10e6870c-d9e9-4504-9daa-f109488a8781 req-8dc94fc3-04ea-4c2c-836a-d5cb89e86fcc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Received event network-vif-unplugged-93d37651-cbe2-402a-80a0-36a875bc866a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:05:42 np0005466031 nova_compute[235803]: 2025-10-02 13:05:42.008 2 INFO nova.virt.libvirt.driver [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Deleting instance files /var/lib/nova/instances/6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_del#033[00m
Oct  2 09:05:42 np0005466031 nova_compute[235803]: 2025-10-02 13:05:42.009 2 INFO nova.virt.libvirt.driver [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Deletion of /var/lib/nova/instances/6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e_del complete#033[00m
Oct  2 09:05:42 np0005466031 nova_compute[235803]: 2025-10-02 13:05:42.105 2 INFO nova.compute.manager [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Took 1.91 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:05:42 np0005466031 nova_compute[235803]: 2025-10-02 13:05:42.106 2 DEBUG oslo.service.loopingcall [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:05:42 np0005466031 nova_compute[235803]: 2025-10-02 13:05:42.106 2 DEBUG nova.compute.manager [-] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:05:42 np0005466031 nova_compute[235803]: 2025-10-02 13:05:42.107 2 DEBUG nova.network.neutron [-] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:05:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:42.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:42.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:43 np0005466031 nova_compute[235803]: 2025-10-02 13:05:43.809 2 DEBUG nova.network.neutron [-] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:43 np0005466031 nova_compute[235803]: 2025-10-02 13:05:43.878 2 INFO nova.compute.manager [-] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Took 1.77 seconds to deallocate network for instance.#033[00m
Oct  2 09:05:43 np0005466031 nova_compute[235803]: 2025-10-02 13:05:43.949 2 DEBUG oslo_concurrency.lockutils [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:43 np0005466031 nova_compute[235803]: 2025-10-02 13:05:43.950 2 DEBUG oslo_concurrency.lockutils [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.039 2 DEBUG nova.compute.manager [req-fc088444-3fa8-4589-b863-a7c68ed8daa3 req-1f6c4638-4a6a-4764-8e41-3b7b7f3ec2fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Received event network-vif-plugged-93d37651-cbe2-402a-80a0-36a875bc866a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.040 2 DEBUG oslo_concurrency.lockutils [req-fc088444-3fa8-4589-b863-a7c68ed8daa3 req-1f6c4638-4a6a-4764-8e41-3b7b7f3ec2fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.040 2 DEBUG oslo_concurrency.lockutils [req-fc088444-3fa8-4589-b863-a7c68ed8daa3 req-1f6c4638-4a6a-4764-8e41-3b7b7f3ec2fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.040 2 DEBUG oslo_concurrency.lockutils [req-fc088444-3fa8-4589-b863-a7c68ed8daa3 req-1f6c4638-4a6a-4764-8e41-3b7b7f3ec2fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.040 2 DEBUG nova.compute.manager [req-fc088444-3fa8-4589-b863-a7c68ed8daa3 req-1f6c4638-4a6a-4764-8e41-3b7b7f3ec2fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] No waiting events found dispatching network-vif-plugged-93d37651-cbe2-402a-80a0-36a875bc866a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.040 2 WARNING nova.compute.manager [req-fc088444-3fa8-4589-b863-a7c68ed8daa3 req-1f6c4638-4a6a-4764-8e41-3b7b7f3ec2fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Received unexpected event network-vif-plugged-93d37651-cbe2-402a-80a0-36a875bc866a for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.052 2 DEBUG oslo_concurrency.processutils [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.097 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410329.0523517, e32508e3-e3b9-4e61-a988-8a3e0ada7848 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.098 2 INFO nova.compute.manager [-] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.115 2 DEBUG nova.compute.manager [req-b62a1404-72b1-40e0-81b0-0a2c662c03b8 req-138e4c27-e78b-42bc-af5f-c01a64cb504a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Received event network-vif-deleted-93d37651-cbe2-402a-80a0-36a875bc866a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.133 2 DEBUG nova.compute.manager [None req-69e58b2a-2c65-4c05-8c03-7efc8232eac4 - - - - - -] [instance: e32508e3-e3b9-4e61-a988-8a3e0ada7848] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:44.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:44.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:44 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4013527471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.536 2 DEBUG oslo_concurrency.processutils [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.542 2 DEBUG nova.compute.provider_tree [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.557 2 DEBUG nova.scheduler.client.report [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.577 2 DEBUG oslo_concurrency.lockutils [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.613 2 INFO nova.scheduler.client.report [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Deleted allocations for instance 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e#033[00m
Oct  2 09:05:44 np0005466031 podman[318025]: 2025-10-02 13:05:44.635978931 +0000 UTC m=+0.056973732 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:05:44 np0005466031 podman[318026]: 2025-10-02 13:05:44.666515271 +0000 UTC m=+0.086359999 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:05:44 np0005466031 nova_compute[235803]: 2025-10-02 13:05:44.716 2 DEBUG oslo_concurrency.lockutils [None req-1ed4e373-0329-440b-9935-32ffc74319af 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:45 np0005466031 nova_compute[235803]: 2025-10-02 13:05:45.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:45 np0005466031 nova_compute[235803]: 2025-10-02 13:05:45.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:46.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:46.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:05:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:48.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:05:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:48.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:50.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:50.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:50 np0005466031 nova_compute[235803]: 2025-10-02 13:05:50.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:50 np0005466031 nova_compute[235803]: 2025-10-02 13:05:50.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:52.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:52.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:54.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:54.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:55 np0005466031 nova_compute[235803]: 2025-10-02 13:05:55.425 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410340.4239388, 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:55 np0005466031 nova_compute[235803]: 2025-10-02 13:05:55.426 2 INFO nova.compute.manager [-] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:05:55 np0005466031 nova_compute[235803]: 2025-10-02 13:05:55.468 2 DEBUG nova.compute.manager [None req-57241e95-f1a3-4c07-a954-b98456521024 - - - - - -] [instance: 6dcd1b6d-1ff3-4e24-b6c7-7c5fe8b3650e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:55 np0005466031 nova_compute[235803]: 2025-10-02 13:05:55.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:55 np0005466031 nova_compute[235803]: 2025-10-02 13:05:55.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:56.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:56.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:05:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:58.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:05:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:05:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:05:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:58.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:05:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:06:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:00.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:06:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:00.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:00 np0005466031 nova_compute[235803]: 2025-10-02 13:06:00.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:00 np0005466031 nova_compute[235803]: 2025-10-02 13:06:00.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:06:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:02.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:06:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:06:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:02.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:06:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:04.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:04 np0005466031 nova_compute[235803]: 2025-10-02 13:06:04.381 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:04 np0005466031 nova_compute[235803]: 2025-10-02 13:06:04.382 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:04 np0005466031 nova_compute[235803]: 2025-10-02 13:06:04.421 2 DEBUG nova.compute.manager [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:06:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:06:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:04.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:06:04 np0005466031 nova_compute[235803]: 2025-10-02 13:06:04.535 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:04 np0005466031 nova_compute[235803]: 2025-10-02 13:06:04.536 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:04 np0005466031 nova_compute[235803]: 2025-10-02 13:06:04.541 2 DEBUG nova.virt.hardware [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:06:04 np0005466031 nova_compute[235803]: 2025-10-02 13:06:04.542 2 INFO nova.compute.claims [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:06:04 np0005466031 nova_compute[235803]: 2025-10-02 13:06:04.695 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:04 np0005466031 nova_compute[235803]: 2025-10-02 13:06:04.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:06:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3781703479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.211 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.218 2 DEBUG nova.compute.provider_tree [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.240 2 DEBUG nova.scheduler.client.report [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.267 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.267 2 DEBUG nova.compute.manager [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:06:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:06:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1235638161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:06:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:06:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1235638161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.311 2 DEBUG nova.compute.manager [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.312 2 DEBUG nova.network.neutron [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.338 2 INFO nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.374 2 DEBUG nova.compute.manager [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.482 2 DEBUG nova.compute.manager [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.484 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.484 2 INFO nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Creating image(s)#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.511 2 DEBUG nova.storage.rbd_utils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] rbd image fa8f170f-4839-4548-bdf9-d4a307880023_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.540 2 DEBUG nova.storage.rbd_utils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] rbd image fa8f170f-4839-4548-bdf9-d4a307880023_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:06:05 np0005466031 podman[318316]: 2025-10-02 13:06:05.555551798 +0000 UTC m=+0.071222063 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.578 2 DEBUG nova.storage.rbd_utils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] rbd image fa8f170f-4839-4548-bdf9-d4a307880023_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.583 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.641 2 DEBUG nova.policy [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7c99b382e3ea4a03bbcf5bd8e2322243', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '68aecf9157774d368c016e89768f535f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:06:05 np0005466031 podman[318316]: 2025-10-02 13:06:05.655044245 +0000 UTC m=+0.170714490 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.683 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.683 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.684 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.684 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.711 2 DEBUG nova.storage.rbd_utils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] rbd image fa8f170f-4839-4548-bdf9-d4a307880023_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.717 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 fa8f170f-4839-4548-bdf9-d4a307880023_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:05 np0005466031 nova_compute[235803]: 2025-10-02 13:06:05.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:06 np0005466031 nova_compute[235803]: 2025-10-02 13:06:06.034 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 fa8f170f-4839-4548-bdf9-d4a307880023_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:06 np0005466031 nova_compute[235803]: 2025-10-02 13:06:06.126 2 DEBUG nova.storage.rbd_utils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] resizing rbd image fa8f170f-4839-4548-bdf9-d4a307880023_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:06:06 np0005466031 podman[318599]: 2025-10-02 13:06:06.21750378 +0000 UTC m=+0.063801709 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 09:06:06 np0005466031 podman[318599]: 2025-10-02 13:06:06.229110015 +0000 UTC m=+0.075407944 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 09:06:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:06.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:06 np0005466031 nova_compute[235803]: 2025-10-02 13:06:06.257 2 DEBUG nova.objects.instance [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lazy-loading 'migration_context' on Instance uuid fa8f170f-4839-4548-bdf9-d4a307880023 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:06:06 np0005466031 nova_compute[235803]: 2025-10-02 13:06:06.282 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:06:06 np0005466031 nova_compute[235803]: 2025-10-02 13:06:06.282 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Ensure instance console log exists: /var/lib/nova/instances/fa8f170f-4839-4548-bdf9-d4a307880023/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:06:06 np0005466031 nova_compute[235803]: 2025-10-02 13:06:06.283 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:06 np0005466031 nova_compute[235803]: 2025-10-02 13:06:06.283 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:06 np0005466031 nova_compute[235803]: 2025-10-02 13:06:06.283 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:06:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:06.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:06:06 np0005466031 podman[318685]: 2025-10-02 13:06:06.581044295 +0000 UTC m=+0.198083839 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, distribution-scope=public, name=keepalived, release=1793, version=2.2.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, architecture=x86_64, vendor=Red Hat, Inc.)
Oct  2 09:06:06 np0005466031 podman[318685]: 2025-10-02 13:06:06.599957109 +0000 UTC m=+0.216996653 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, release=1793, io.buildah.version=1.28.2, io.openshift.expose-services=, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct  2 09:06:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:06:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:06:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:06:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:06:07 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:06:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:06:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:08.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:06:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:08.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:06:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 13K writes, 67K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1600 writes, 8193 keys, 1600 commit groups, 1.0 writes per commit group, ingest: 15.82 MB, 0.03 MB/s#012Interval WAL: 1600 writes, 1600 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     66.1      1.24              0.23        42    0.029       0      0       0.0       0.0#012  L6      1/0   10.21 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0    115.5     98.3      4.17              1.15        41    0.102    279K    22K       0.0       0.0#012 Sum      1/0   10.21 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     89.1     91.0      5.40              1.39        83    0.065    279K    22K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7     89.2     89.2      1.02              0.24        14    0.073     64K   3656       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    115.5     98.3      4.17              1.15        41    0.102    279K    22K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     66.2      1.23              0.23        41    0.030       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.080, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.48 GB write, 0.10 MB/s write, 0.47 GB read, 0.10 MB/s read, 5.4 seconds#012Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5570d4fad1f0#2 capacity: 304.00 MB usage: 52.23 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000262 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3023,50.17 MB,16.5017%) FilterBlock(83,783.11 KB,0.251564%) IndexBlock(83,1.30 MB,0.427567%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 09:06:08 np0005466031 podman[318869]: 2025-10-02 13:06:08.622359618 +0000 UTC m=+0.051850835 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 09:06:08 np0005466031 podman[318870]: 2025-10-02 13:06:08.653074843 +0000 UTC m=+0.082226280 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct  2 09:06:09 np0005466031 nova_compute[235803]: 2025-10-02 13:06:09.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:09 np0005466031 nova_compute[235803]: 2025-10-02 13:06:09.803 2 DEBUG nova.network.neutron [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Successfully created port: 725263f1-e117-427b-90e3-9e3c70306cba _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:06:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:10.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:10 np0005466031 nova_compute[235803]: 2025-10-02 13:06:10.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:10.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:10 np0005466031 nova_compute[235803]: 2025-10-02 13:06:10.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:10 np0005466031 nova_compute[235803]: 2025-10-02 13:06:10.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:06:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:12.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:06:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:12.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:12 np0005466031 nova_compute[235803]: 2025-10-02 13:06:12.748 2 DEBUG nova.network.neutron [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Successfully updated port: 725263f1-e117-427b-90e3-9e3c70306cba _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:06:12 np0005466031 nova_compute[235803]: 2025-10-02 13:06:12.804 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquiring lock "refresh_cache-fa8f170f-4839-4548-bdf9-d4a307880023" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:06:12 np0005466031 nova_compute[235803]: 2025-10-02 13:06:12.804 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquired lock "refresh_cache-fa8f170f-4839-4548-bdf9-d4a307880023" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:06:12 np0005466031 nova_compute[235803]: 2025-10-02 13:06:12.804 2 DEBUG nova.network.neutron [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:06:12 np0005466031 nova_compute[235803]: 2025-10-02 13:06:12.910 2 DEBUG nova.compute.manager [req-8e02a276-1483-4862-a294-0d87443491ee req-50e3811d-4ef7-4c12-8c12-3d795d51bad9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-changed-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:06:12 np0005466031 nova_compute[235803]: 2025-10-02 13:06:12.911 2 DEBUG nova.compute.manager [req-8e02a276-1483-4862-a294-0d87443491ee req-50e3811d-4ef7-4c12-8c12-3d795d51bad9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Refreshing instance network info cache due to event network-changed-725263f1-e117-427b-90e3-9e3c70306cba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:06:12 np0005466031 nova_compute[235803]: 2025-10-02 13:06:12.912 2 DEBUG oslo_concurrency.lockutils [req-8e02a276-1483-4862-a294-0d87443491ee req-50e3811d-4ef7-4c12-8c12-3d795d51bad9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-fa8f170f-4839-4548-bdf9-d4a307880023" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:06:13 np0005466031 nova_compute[235803]: 2025-10-02 13:06:13.051 2 DEBUG nova.network.neutron [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:06:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e358 e358: 3 total, 3 up, 3 in
Oct  2 09:06:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:14.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:14.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:15 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:06:15 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:15 np0005466031 podman[319015]: 2025-10-02 13:06:15.625722848 +0000 UTC m=+0.054535782 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 09:06:15 np0005466031 podman[319016]: 2025-10-02 13:06:15.626240193 +0000 UTC m=+0.053270146 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.687 2 DEBUG nova.network.neutron [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Updating instance_info_cache with network_info: [{"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.874 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Releasing lock "refresh_cache-fa8f170f-4839-4548-bdf9-d4a307880023" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.874 2 DEBUG nova.compute.manager [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Instance network_info: |[{"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.874 2 DEBUG oslo_concurrency.lockutils [req-8e02a276-1483-4862-a294-0d87443491ee req-50e3811d-4ef7-4c12-8c12-3d795d51bad9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-fa8f170f-4839-4548-bdf9-d4a307880023" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.875 2 DEBUG nova.network.neutron [req-8e02a276-1483-4862-a294-0d87443491ee req-50e3811d-4ef7-4c12-8c12-3d795d51bad9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Refreshing network info cache for port 725263f1-e117-427b-90e3-9e3c70306cba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.877 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Start _get_guest_xml network_info=[{"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.881 2 WARNING nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.886 2 DEBUG nova.virt.libvirt.host [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.886 2 DEBUG nova.virt.libvirt.host [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.891 2 DEBUG nova.virt.libvirt.host [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.891 2 DEBUG nova.virt.libvirt.host [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.892 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.893 2 DEBUG nova.virt.hardware [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.893 2 DEBUG nova.virt.hardware [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.893 2 DEBUG nova.virt.hardware [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.893 2 DEBUG nova.virt.hardware [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.893 2 DEBUG nova.virt.hardware [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.893 2 DEBUG nova.virt.hardware [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.894 2 DEBUG nova.virt.hardware [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.894 2 DEBUG nova.virt.hardware [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.894 2 DEBUG nova.virt.hardware [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.894 2 DEBUG nova.virt.hardware [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.894 2 DEBUG nova.virt.hardware [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:06:15 np0005466031 nova_compute[235803]: 2025-10-02 13:06:15.896 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:16.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:06:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2607940262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:06:16 np0005466031 nova_compute[235803]: 2025-10-02 13:06:16.490 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:16.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:16 np0005466031 nova_compute[235803]: 2025-10-02 13:06:16.516 2 DEBUG nova.storage.rbd_utils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] rbd image fa8f170f-4839-4548-bdf9-d4a307880023_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:06:16 np0005466031 nova_compute[235803]: 2025-10-02 13:06:16.519 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:06:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3523504306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:06:16 np0005466031 nova_compute[235803]: 2025-10-02 13:06:16.966 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:16 np0005466031 nova_compute[235803]: 2025-10-02 13:06:16.967 2 DEBUG nova.virt.libvirt.vif [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:06:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1635210967',display_name='tempest-TestServerAdvancedOps-server-1635210967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1635210967',id=190,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='68aecf9157774d368c016e89768f535f',ramdisk_id='',reservation_id='r-o4c27txu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-2117170196',owner_user_name='tempest-TestServerAdvancedOps-2
117170196-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:06:05Z,user_data=None,user_id='7c99b382e3ea4a03bbcf5bd8e2322243',uuid=fa8f170f-4839-4548-bdf9-d4a307880023,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:06:16 np0005466031 nova_compute[235803]: 2025-10-02 13:06:16.968 2 DEBUG nova.network.os_vif_util [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Converting VIF {"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:06:16 np0005466031 nova_compute[235803]: 2025-10-02 13:06:16.969 2 DEBUG nova.network.os_vif_util [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:ea:ae,bridge_name='br-int',has_traffic_filtering=True,id=725263f1-e117-427b-90e3-9e3c70306cba,network=Network(0f85e173-ba03-413e-9a20-267dffdab135),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap725263f1-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:06:16 np0005466031 nova_compute[235803]: 2025-10-02 13:06:16.971 2 DEBUG nova.objects.instance [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lazy-loading 'pci_devices' on Instance uuid fa8f170f-4839-4548-bdf9-d4a307880023 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.509 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  <uuid>fa8f170f-4839-4548-bdf9-d4a307880023</uuid>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  <name>instance-000000be</name>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestServerAdvancedOps-server-1635210967</nova:name>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:06:15</nova:creationTime>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <nova:user uuid="7c99b382e3ea4a03bbcf5bd8e2322243">tempest-TestServerAdvancedOps-2117170196-project-member</nova:user>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <nova:project uuid="68aecf9157774d368c016e89768f535f">tempest-TestServerAdvancedOps-2117170196</nova:project>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <nova:port uuid="725263f1-e117-427b-90e3-9e3c70306cba">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <entry name="serial">fa8f170f-4839-4548-bdf9-d4a307880023</entry>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <entry name="uuid">fa8f170f-4839-4548-bdf9-d4a307880023</entry>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/fa8f170f-4839-4548-bdf9-d4a307880023_disk">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/fa8f170f-4839-4548-bdf9-d4a307880023_disk.config">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:de:ea:ae"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <target dev="tap725263f1-e1"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/fa8f170f-4839-4548-bdf9-d4a307880023/console.log" append="off"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:06:17 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:06:17 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:06:17 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:06:17 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.510 2 DEBUG nova.compute.manager [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Preparing to wait for external event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.511 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.511 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.512 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.512 2 DEBUG nova.virt.libvirt.vif [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:06:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1635210967',display_name='tempest-TestServerAdvancedOps-server-1635210967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1635210967',id=190,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='68aecf9157774d368c016e89768f535f',ramdisk_id='',reservation_id='r-o4c27txu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-2117170196',owner_user_name='tempest-TestServerAdv
ancedOps-2117170196-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:06:05Z,user_data=None,user_id='7c99b382e3ea4a03bbcf5bd8e2322243',uuid=fa8f170f-4839-4548-bdf9-d4a307880023,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.512 2 DEBUG nova.network.os_vif_util [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Converting VIF {"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.513 2 DEBUG nova.network.os_vif_util [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:ea:ae,bridge_name='br-int',has_traffic_filtering=True,id=725263f1-e117-427b-90e3-9e3c70306cba,network=Network(0f85e173-ba03-413e-9a20-267dffdab135),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap725263f1-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.513 2 DEBUG os_vif [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:ea:ae,bridge_name='br-int',has_traffic_filtering=True,id=725263f1-e117-427b-90e3-9e3c70306cba,network=Network(0f85e173-ba03-413e-9a20-267dffdab135),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap725263f1-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.515 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap725263f1-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap725263f1-e1, col_values=(('external_ids', {'iface-id': '725263f1-e117-427b-90e3-9e3c70306cba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:ea:ae', 'vm-uuid': 'fa8f170f-4839-4548-bdf9-d4a307880023'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:17 np0005466031 NetworkManager[44907]: <info>  [1759410377.5211] manager: (tap725263f1-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.528 2 INFO os_vif [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:ea:ae,bridge_name='br-int',has_traffic_filtering=True,id=725263f1-e117-427b-90e3-9e3c70306cba,network=Network(0f85e173-ba03-413e-9a20-267dffdab135),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap725263f1-e1')#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.889 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.889 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.889 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] No VIF found with MAC fa:16:3e:de:ea:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.890 2 INFO nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Using config drive#033[00m
Oct  2 09:06:17 np0005466031 nova_compute[235803]: 2025-10-02 13:06:17.910 2 DEBUG nova.storage.rbd_utils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] rbd image fa8f170f-4839-4548-bdf9-d4a307880023_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:06:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:06:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:18.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:06:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:18.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:18 np0005466031 nova_compute[235803]: 2025-10-02 13:06:18.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:06:19 np0005466031 nova_compute[235803]: 2025-10-02 13:06:19.037 2 DEBUG nova.network.neutron [req-8e02a276-1483-4862-a294-0d87443491ee req-50e3811d-4ef7-4c12-8c12-3d795d51bad9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Updated VIF entry in instance network info cache for port 725263f1-e117-427b-90e3-9e3c70306cba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 09:06:19 np0005466031 nova_compute[235803]: 2025-10-02 13:06:19.037 2 DEBUG nova.network.neutron [req-8e02a276-1483-4862-a294-0d87443491ee req-50e3811d-4ef7-4c12-8c12-3d795d51bad9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Updating instance_info_cache with network_info: [{"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:06:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 e359: 3 total, 3 up, 3 in
Oct  2 09:06:19 np0005466031 nova_compute[235803]: 2025-10-02 13:06:19.698 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:06:19 np0005466031 nova_compute[235803]: 2025-10-02 13:06:19.699 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 09:06:19 np0005466031 nova_compute[235803]: 2025-10-02 13:06:19.699 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 09:06:19 np0005466031 nova_compute[235803]: 2025-10-02 13:06:19.746 2 DEBUG oslo_concurrency.lockutils [req-8e02a276-1483-4862-a294-0d87443491ee req-50e3811d-4ef7-4c12-8c12-3d795d51bad9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-fa8f170f-4839-4548-bdf9-d4a307880023" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:06:19 np0005466031 nova_compute[235803]: 2025-10-02 13:06:19.811 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct  2 09:06:19 np0005466031 nova_compute[235803]: 2025-10-02 13:06:19.812 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  2 09:06:19 np0005466031 nova_compute[235803]: 2025-10-02 13:06:19.812 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:06:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:19 np0005466031 nova_compute[235803]: 2025-10-02 13:06:19.956 2 INFO nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Creating config drive at /var/lib/nova/instances/fa8f170f-4839-4548-bdf9-d4a307880023/disk.config
Oct  2 09:06:19 np0005466031 nova_compute[235803]: 2025-10-02 13:06:19.961 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fa8f170f-4839-4548-bdf9-d4a307880023/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphqilcqjc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.095 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fa8f170f-4839-4548-bdf9-d4a307880023/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphqilcqjc" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.122 2 DEBUG nova.storage.rbd_utils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] rbd image fa8f170f-4839-4548-bdf9-d4a307880023_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.126 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fa8f170f-4839-4548-bdf9-d4a307880023/disk.config fa8f170f-4839-4548-bdf9-d4a307880023_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.158 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.159 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.159 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.159 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.160 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:06:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:20.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.373 2 DEBUG oslo_concurrency.processutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fa8f170f-4839-4548-bdf9-d4a307880023/disk.config fa8f170f-4839-4548-bdf9-d4a307880023_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.374 2 INFO nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Deleting local config drive /var/lib/nova/instances/fa8f170f-4839-4548-bdf9-d4a307880023/disk.config because it was imported into RBD.
Oct  2 09:06:20 np0005466031 kernel: tap725263f1-e1: entered promiscuous mode
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:20 np0005466031 NetworkManager[44907]: <info>  [1759410380.4328] manager: (tap725263f1-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/345)
Oct  2 09:06:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:20Z|00760|binding|INFO|Claiming lport 725263f1-e117-427b-90e3-9e3c70306cba for this chassis.
Oct  2 09:06:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:20Z|00761|binding|INFO|725263f1-e117-427b-90e3-9e3c70306cba: Claiming fa:16:3e:de:ea:ae 10.100.0.4
Oct  2 09:06:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:20.453 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:ea:ae 10.100.0.4'], port_security=['fa:16:3e:de:ea:ae 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fa8f170f-4839-4548-bdf9-d4a307880023', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f85e173-ba03-413e-9a20-267dffdab135', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68aecf9157774d368c016e89768f535f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '800abc01-9f04-4b1d-9c7f-1217b9fedcb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=660c8205-21a8-4011-813f-b928006abd43, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=725263f1-e117-427b-90e3-9e3c70306cba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 09:06:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:20.454 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 725263f1-e117-427b-90e3-9e3c70306cba in datapath 0f85e173-ba03-413e-9a20-267dffdab135 bound to our chassis
Oct  2 09:06:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:20.455 141898 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0f85e173-ba03-413e-9a20-267dffdab135 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct  2 09:06:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:20.456 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a074fc8e-78df-4b13-865f-c5162298ed9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:06:20 np0005466031 systemd-udevd[319209]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:06:20 np0005466031 NetworkManager[44907]: <info>  [1759410380.4792] device (tap725263f1-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:06:20 np0005466031 NetworkManager[44907]: <info>  [1759410380.4803] device (tap725263f1-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:06:20 np0005466031 systemd-machined[192227]: New machine qemu-87-instance-000000be.
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:20Z|00762|binding|INFO|Setting lport 725263f1-e117-427b-90e3-9e3c70306cba ovn-installed in OVS
Oct  2 09:06:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:20Z|00763|binding|INFO|Setting lport 725263f1-e117-427b-90e3-9e3c70306cba up in Southbound
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:20 np0005466031 systemd[1]: Started Virtual Machine qemu-87-instance-000000be.
Oct  2 09:06:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:20.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:06:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1954721145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.594 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.693 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.694 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.836 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.838 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4158MB free_disk=20.907779693603516GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.838 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.839 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:06:20 np0005466031 nova_compute[235803]: 2025-10-02 13:06:20.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:21 np0005466031 nova_compute[235803]: 2025-10-02 13:06:21.421 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410381.4212925, fa8f170f-4839-4548-bdf9-d4a307880023 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:06:21 np0005466031 nova_compute[235803]: 2025-10-02 13:06:21.422 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] VM Started (Lifecycle Event)
Oct  2 09:06:21 np0005466031 nova_compute[235803]: 2025-10-02 13:06:21.601 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:06:21 np0005466031 nova_compute[235803]: 2025-10-02 13:06:21.604 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410381.421467, fa8f170f-4839-4548-bdf9-d4a307880023 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:06:21 np0005466031 nova_compute[235803]: 2025-10-02 13:06:21.604 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] VM Paused (Lifecycle Event)
Oct  2 09:06:21 np0005466031 nova_compute[235803]: 2025-10-02 13:06:21.648 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:06:21 np0005466031 nova_compute[235803]: 2025-10-02 13:06:21.651 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:06:21 np0005466031 nova_compute[235803]: 2025-10-02 13:06:21.669 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance fa8f170f-4839-4548-bdf9-d4a307880023 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 09:06:21 np0005466031 nova_compute[235803]: 2025-10-02 13:06:21.670 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 09:06:21 np0005466031 nova_compute[235803]: 2025-10-02 13:06:21.670 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 09:06:21 np0005466031 nova_compute[235803]: 2025-10-02 13:06:21.683 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 09:06:21 np0005466031 nova_compute[235803]: 2025-10-02 13:06:21.720 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:06:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:06:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/822308584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.164 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.170 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.199 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.245 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.246 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:06:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:22.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:22.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.664 2 DEBUG nova.compute.manager [req-59c35ecb-4819-49ea-8260-c3743ac6e585 req-1d4d86fe-99c2-45f9-a19e-76fa04afe94d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.665 2 DEBUG oslo_concurrency.lockutils [req-59c35ecb-4819-49ea-8260-c3743ac6e585 req-1d4d86fe-99c2-45f9-a19e-76fa04afe94d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.665 2 DEBUG oslo_concurrency.lockutils [req-59c35ecb-4819-49ea-8260-c3743ac6e585 req-1d4d86fe-99c2-45f9-a19e-76fa04afe94d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.665 2 DEBUG oslo_concurrency.lockutils [req-59c35ecb-4819-49ea-8260-c3743ac6e585 req-1d4d86fe-99c2-45f9-a19e-76fa04afe94d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.666 2 DEBUG nova.compute.manager [req-59c35ecb-4819-49ea-8260-c3743ac6e585 req-1d4d86fe-99c2-45f9-a19e-76fa04afe94d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Processing event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.666 2 DEBUG nova.compute.manager [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.670 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410382.6703475, fa8f170f-4839-4548-bdf9-d4a307880023 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.671 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] VM Resumed (Lifecycle Event)
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.674 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.677 2 INFO nova.virt.libvirt.driver [-] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Instance spawned successfully.
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.678 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.742 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.746 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.759 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.759 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.760 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.760 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.761 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.761 2 DEBUG nova.virt.libvirt.driver [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.808 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.983 2 INFO nova.compute.manager [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Took 17.50 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:06:22 np0005466031 nova_compute[235803]: 2025-10-02 13:06:22.985 2 DEBUG nova.compute.manager [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:06:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:23.074 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:06:23 np0005466031 nova_compute[235803]: 2025-10-02 13:06:23.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:23.076 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:06:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:23.076 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:23 np0005466031 nova_compute[235803]: 2025-10-02 13:06:23.146 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:23 np0005466031 nova_compute[235803]: 2025-10-02 13:06:23.148 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:23 np0005466031 nova_compute[235803]: 2025-10-02 13:06:23.148 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:23 np0005466031 nova_compute[235803]: 2025-10-02 13:06:23.243 2 INFO nova.compute.manager [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Took 18.74 seconds to build instance.#033[00m
Oct  2 09:06:23 np0005466031 nova_compute[235803]: 2025-10-02 13:06:23.315 2 DEBUG oslo_concurrency.lockutils [None req-1d70debd-cbe8-4fc5-9a76-1c424816e075 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:24.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:24.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:25 np0005466031 nova_compute[235803]: 2025-10-02 13:06:25.324 2 DEBUG nova.compute.manager [req-d0c8fbf1-f9f6-4c56-baa0-1ed5f6078e46 req-471a7229-2431-47b1-a6a8-8134c5be260f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:06:25 np0005466031 nova_compute[235803]: 2025-10-02 13:06:25.325 2 DEBUG oslo_concurrency.lockutils [req-d0c8fbf1-f9f6-4c56-baa0-1ed5f6078e46 req-471a7229-2431-47b1-a6a8-8134c5be260f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:06:25 np0005466031 nova_compute[235803]: 2025-10-02 13:06:25.325 2 DEBUG oslo_concurrency.lockutils [req-d0c8fbf1-f9f6-4c56-baa0-1ed5f6078e46 req-471a7229-2431-47b1-a6a8-8134c5be260f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:06:25 np0005466031 nova_compute[235803]: 2025-10-02 13:06:25.325 2 DEBUG oslo_concurrency.lockutils [req-d0c8fbf1-f9f6-4c56-baa0-1ed5f6078e46 req-471a7229-2431-47b1-a6a8-8134c5be260f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:06:25 np0005466031 nova_compute[235803]: 2025-10-02 13:06:25.325 2 DEBUG nova.compute.manager [req-d0c8fbf1-f9f6-4c56-baa0-1ed5f6078e46 req-471a7229-2431-47b1-a6a8-8134c5be260f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] No waiting events found dispatching network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:06:25 np0005466031 nova_compute[235803]: 2025-10-02 13:06:25.325 2 WARNING nova.compute.manager [req-d0c8fbf1-f9f6-4c56-baa0-1ed5f6078e46 req-471a7229-2431-47b1-a6a8-8134c5be260f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received unexpected event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba for instance with vm_state active and task_state None.
Oct  2 09:06:25 np0005466031 nova_compute[235803]: 2025-10-02 13:06:25.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:25.874 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:06:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:25.875 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:06:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:25.875 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:06:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:06:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:26.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:06:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:26.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:27 np0005466031 nova_compute[235803]: 2025-10-02 13:06:27.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:27 np0005466031 nova_compute[235803]: 2025-10-02 13:06:27.728 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:06:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:28.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:28.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:29 np0005466031 nova_compute[235803]: 2025-10-02 13:06:29.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:06:29 np0005466031 nova_compute[235803]: 2025-10-02 13:06:29.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 09:06:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:30 np0005466031 nova_compute[235803]: 2025-10-02 13:06:30.040 2 DEBUG nova.objects.instance [None req-aaa65d34-5133-4d6c-9d67-37954ab42f32 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lazy-loading 'pci_devices' on Instance uuid fa8f170f-4839-4548-bdf9-d4a307880023 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:06:30 np0005466031 nova_compute[235803]: 2025-10-02 13:06:30.115 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410390.1157494, fa8f170f-4839-4548-bdf9-d4a307880023 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:06:30 np0005466031 nova_compute[235803]: 2025-10-02 13:06:30.116 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] VM Paused (Lifecycle Event)
Oct  2 09:06:30 np0005466031 nova_compute[235803]: 2025-10-02 13:06:30.173 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:06:30 np0005466031 nova_compute[235803]: 2025-10-02 13:06:30.177 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:06:30 np0005466031 nova_compute[235803]: 2025-10-02 13:06:30.237 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] During sync_power_state the instance has a pending task (suspending). Skip.
Oct  2 09:06:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:30.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:30.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:30 np0005466031 nova_compute[235803]: 2025-10-02 13:06:30.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:31 np0005466031 kernel: tap725263f1-e1 (unregistering): left promiscuous mode
Oct  2 09:06:31 np0005466031 NetworkManager[44907]: <info>  [1759410391.3760] device (tap725263f1-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:06:31 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:31Z|00764|binding|INFO|Releasing lport 725263f1-e117-427b-90e3-9e3c70306cba from this chassis (sb_readonly=0)
Oct  2 09:06:31 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:31Z|00765|binding|INFO|Setting lport 725263f1-e117-427b-90e3-9e3c70306cba down in Southbound
Oct  2 09:06:31 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:31Z|00766|binding|INFO|Removing iface tap725263f1-e1 ovn-installed in OVS
Oct  2 09:06:31 np0005466031 nova_compute[235803]: 2025-10-02 13:06:31.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:31 np0005466031 nova_compute[235803]: 2025-10-02 13:06:31.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:31 np0005466031 nova_compute[235803]: 2025-10-02 13:06:31.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:31.402 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:ea:ae 10.100.0.4'], port_security=['fa:16:3e:de:ea:ae 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fa8f170f-4839-4548-bdf9-d4a307880023', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f85e173-ba03-413e-9a20-267dffdab135', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68aecf9157774d368c016e89768f535f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '800abc01-9f04-4b1d-9c7f-1217b9fedcb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=660c8205-21a8-4011-813f-b928006abd43, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=725263f1-e117-427b-90e3-9e3c70306cba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 09:06:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:31.404 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 725263f1-e117-427b-90e3-9e3c70306cba in datapath 0f85e173-ba03-413e-9a20-267dffdab135 unbound from our chassis
Oct  2 09:06:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:31.404 141898 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0f85e173-ba03-413e-9a20-267dffdab135 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct  2 09:06:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:31.405 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[03d62cdf-6269-42b6-84de-e29f8372fe45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:06:31 np0005466031 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000be.scope: Deactivated successfully.
Oct  2 09:06:31 np0005466031 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000be.scope: Consumed 8.597s CPU time.
Oct  2 09:06:31 np0005466031 systemd-machined[192227]: Machine qemu-87-instance-000000be terminated.
Oct  2 09:06:31 np0005466031 nova_compute[235803]: 2025-10-02 13:06:31.628 2 DEBUG nova.compute.manager [None req-aaa65d34-5133-4d6c-9d67-37954ab42f32 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:06:32 np0005466031 nova_compute[235803]: 2025-10-02 13:06:32.217 2 DEBUG nova.compute.manager [req-e975aeb2-d5db-4052-b3aa-04859f8841c4 req-e1e80d2f-1159-4be7-94e0-bc979ceb8f12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-unplugged-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:06:32 np0005466031 nova_compute[235803]: 2025-10-02 13:06:32.217 2 DEBUG oslo_concurrency.lockutils [req-e975aeb2-d5db-4052-b3aa-04859f8841c4 req-e1e80d2f-1159-4be7-94e0-bc979ceb8f12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:06:32 np0005466031 nova_compute[235803]: 2025-10-02 13:06:32.217 2 DEBUG oslo_concurrency.lockutils [req-e975aeb2-d5db-4052-b3aa-04859f8841c4 req-e1e80d2f-1159-4be7-94e0-bc979ceb8f12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:06:32 np0005466031 nova_compute[235803]: 2025-10-02 13:06:32.218 2 DEBUG oslo_concurrency.lockutils [req-e975aeb2-d5db-4052-b3aa-04859f8841c4 req-e1e80d2f-1159-4be7-94e0-bc979ceb8f12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:06:32 np0005466031 nova_compute[235803]: 2025-10-02 13:06:32.218 2 DEBUG nova.compute.manager [req-e975aeb2-d5db-4052-b3aa-04859f8841c4 req-e1e80d2f-1159-4be7-94e0-bc979ceb8f12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] No waiting events found dispatching network-vif-unplugged-725263f1-e117-427b-90e3-9e3c70306cba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:06:32 np0005466031 nova_compute[235803]: 2025-10-02 13:06:32.218 2 WARNING nova.compute.manager [req-e975aeb2-d5db-4052-b3aa-04859f8841c4 req-e1e80d2f-1159-4be7-94e0-bc979ceb8f12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received unexpected event network-vif-unplugged-725263f1-e117-427b-90e3-9e3c70306cba for instance with vm_state suspended and task_state None.
Oct  2 09:06:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:06:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:32.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:06:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:32.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:32 np0005466031 nova_compute[235803]: 2025-10-02 13:06:32.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:34.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:34.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:35 np0005466031 nova_compute[235803]: 2025-10-02 13:06:35.219 2 DEBUG nova.compute.manager [req-a6c0ef43-f3f5-4814-b3c7-e7f52438e917 req-efa83ffb-45ca-41b4-b738-44c67b8bda31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:06:35 np0005466031 nova_compute[235803]: 2025-10-02 13:06:35.220 2 DEBUG oslo_concurrency.lockutils [req-a6c0ef43-f3f5-4814-b3c7-e7f52438e917 req-efa83ffb-45ca-41b4-b738-44c67b8bda31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:06:35 np0005466031 nova_compute[235803]: 2025-10-02 13:06:35.220 2 DEBUG oslo_concurrency.lockutils [req-a6c0ef43-f3f5-4814-b3c7-e7f52438e917 req-efa83ffb-45ca-41b4-b738-44c67b8bda31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:06:35 np0005466031 nova_compute[235803]: 2025-10-02 13:06:35.220 2 DEBUG oslo_concurrency.lockutils [req-a6c0ef43-f3f5-4814-b3c7-e7f52438e917 req-efa83ffb-45ca-41b4-b738-44c67b8bda31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:06:35 np0005466031 nova_compute[235803]: 2025-10-02 13:06:35.220 2 DEBUG nova.compute.manager [req-a6c0ef43-f3f5-4814-b3c7-e7f52438e917 req-efa83ffb-45ca-41b4-b738-44c67b8bda31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] No waiting events found dispatching network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:06:35 np0005466031 nova_compute[235803]: 2025-10-02 13:06:35.220 2 WARNING nova.compute.manager [req-a6c0ef43-f3f5-4814-b3c7-e7f52438e917 req-efa83ffb-45ca-41b4-b738-44c67b8bda31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received unexpected event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba for instance with vm_state suspended and task_state None.
Oct  2 09:06:35 np0005466031 nova_compute[235803]: 2025-10-02 13:06:35.395 2 INFO nova.compute.manager [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Resuming
Oct  2 09:06:35 np0005466031 nova_compute[235803]: 2025-10-02 13:06:35.396 2 DEBUG nova.objects.instance [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lazy-loading 'flavor' on Instance uuid fa8f170f-4839-4548-bdf9-d4a307880023 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:06:35 np0005466031 nova_compute[235803]: 2025-10-02 13:06:35.477 2 DEBUG oslo_concurrency.lockutils [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquiring lock "refresh_cache-fa8f170f-4839-4548-bdf9-d4a307880023" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:06:35 np0005466031 nova_compute[235803]: 2025-10-02 13:06:35.477 2 DEBUG oslo_concurrency.lockutils [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquired lock "refresh_cache-fa8f170f-4839-4548-bdf9-d4a307880023" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:06:35 np0005466031 nova_compute[235803]: 2025-10-02 13:06:35.477 2 DEBUG nova.network.neutron [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:06:35 np0005466031 nova_compute[235803]: 2025-10-02 13:06:35.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:36.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:36.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:36 np0005466031 nova_compute[235803]: 2025-10-02 13:06:36.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:37 np0005466031 nova_compute[235803]: 2025-10-02 13:06:37.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:06:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:38.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:06:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:38.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.493 2 DEBUG nova.network.neutron [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Updating instance_info_cache with network_info: [{"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.508 2 DEBUG oslo_concurrency.lockutils [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Releasing lock "refresh_cache-fa8f170f-4839-4548-bdf9-d4a307880023" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.514 2 DEBUG nova.virt.libvirt.vif [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:06:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1635210967',display_name='tempest-TestServerAdvancedOps-server-1635210967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1635210967',id=190,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:06:22Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='68aecf9157774d368c016e89768f535f',ramdisk_id='',reservation_id='r-o4c27txu',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-2117170196',owner_user_name='tempest-TestServerAdvancedOps-2117170196-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:06:31Z,user_data=None,user_id='7c99b382e3ea4a03bbcf5bd8e2322243',uuid=fa8f170f-4839-4548-bdf9-d4a307880023,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.515 2 DEBUG nova.network.os_vif_util [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Converting VIF {"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.515 2 DEBUG nova.network.os_vif_util [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:ea:ae,bridge_name='br-int',has_traffic_filtering=True,id=725263f1-e117-427b-90e3-9e3c70306cba,network=Network(0f85e173-ba03-413e-9a20-267dffdab135),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap725263f1-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.516 2 DEBUG os_vif [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:ea:ae,bridge_name='br-int',has_traffic_filtering=True,id=725263f1-e117-427b-90e3-9e3c70306cba,network=Network(0f85e173-ba03-413e-9a20-267dffdab135),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap725263f1-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap725263f1-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap725263f1-e1, col_values=(('external_ids', {'iface-id': '725263f1-e117-427b-90e3-9e3c70306cba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:ea:ae', 'vm-uuid': 'fa8f170f-4839-4548-bdf9-d4a307880023'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.521 2 INFO os_vif [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:ea:ae,bridge_name='br-int',has_traffic_filtering=True,id=725263f1-e117-427b-90e3-9e3c70306cba,network=Network(0f85e173-ba03-413e-9a20-267dffdab135),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap725263f1-e1')#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.599 2 DEBUG nova.objects.instance [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lazy-loading 'numa_topology' on Instance uuid fa8f170f-4839-4548-bdf9-d4a307880023 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:06:39 np0005466031 podman[319368]: 2025-10-02 13:06:39.644483718 +0000 UTC m=+0.086285857 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:06:39 np0005466031 podman[319369]: 2025-10-02 13:06:39.67856289 +0000 UTC m=+0.119885835 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:06:39 np0005466031 kernel: tap725263f1-e1: entered promiscuous mode
Oct  2 09:06:39 np0005466031 NetworkManager[44907]: <info>  [1759410399.6857] manager: (tap725263f1-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/346)
Oct  2 09:06:39 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:39Z|00767|binding|INFO|Claiming lport 725263f1-e117-427b-90e3-9e3c70306cba for this chassis.
Oct  2 09:06:39 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:39Z|00768|binding|INFO|725263f1-e117-427b-90e3-9e3c70306cba: Claiming fa:16:3e:de:ea:ae 10.100.0.4
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:39.712 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:ea:ae 10.100.0.4'], port_security=['fa:16:3e:de:ea:ae 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fa8f170f-4839-4548-bdf9-d4a307880023', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f85e173-ba03-413e-9a20-267dffdab135', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68aecf9157774d368c016e89768f535f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '800abc01-9f04-4b1d-9c7f-1217b9fedcb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=660c8205-21a8-4011-813f-b928006abd43, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=725263f1-e117-427b-90e3-9e3c70306cba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:06:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:39.713 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 725263f1-e117-427b-90e3-9e3c70306cba in datapath 0f85e173-ba03-413e-9a20-267dffdab135 bound to our chassis#033[00m
Oct  2 09:06:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:39.714 141898 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0f85e173-ba03-413e-9a20-267dffdab135 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 09:06:39 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:39Z|00769|binding|INFO|Setting lport 725263f1-e117-427b-90e3-9e3c70306cba ovn-installed in OVS
Oct  2 09:06:39 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:39Z|00770|binding|INFO|Setting lport 725263f1-e117-427b-90e3-9e3c70306cba up in Southbound
Oct  2 09:06:39 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:39.715 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8d024075-91aa-489d-b5e8-b50475a288e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:39 np0005466031 nova_compute[235803]: 2025-10-02 13:06:39.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:39 np0005466031 systemd-udevd[319422]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:06:39 np0005466031 systemd-machined[192227]: New machine qemu-88-instance-000000be.
Oct  2 09:06:39 np0005466031 NetworkManager[44907]: <info>  [1759410399.7410] device (tap725263f1-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:06:39 np0005466031 NetworkManager[44907]: <info>  [1759410399.7421] device (tap725263f1-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:06:39 np0005466031 systemd[1]: Started Virtual Machine qemu-88-instance-000000be.
Oct  2 09:06:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:06:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:40.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:06:40 np0005466031 nova_compute[235803]: 2025-10-02 13:06:40.405 2 DEBUG nova.compute.manager [req-d8f69764-eddd-435b-af62-b9ec11632418 req-e9524111-af27-47c1-8834-a6c6bc03b4ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:06:40 np0005466031 nova_compute[235803]: 2025-10-02 13:06:40.407 2 DEBUG oslo_concurrency.lockutils [req-d8f69764-eddd-435b-af62-b9ec11632418 req-e9524111-af27-47c1-8834-a6c6bc03b4ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:40 np0005466031 nova_compute[235803]: 2025-10-02 13:06:40.408 2 DEBUG oslo_concurrency.lockutils [req-d8f69764-eddd-435b-af62-b9ec11632418 req-e9524111-af27-47c1-8834-a6c6bc03b4ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:40 np0005466031 nova_compute[235803]: 2025-10-02 13:06:40.408 2 DEBUG oslo_concurrency.lockutils [req-d8f69764-eddd-435b-af62-b9ec11632418 req-e9524111-af27-47c1-8834-a6c6bc03b4ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:40 np0005466031 nova_compute[235803]: 2025-10-02 13:06:40.408 2 DEBUG nova.compute.manager [req-d8f69764-eddd-435b-af62-b9ec11632418 req-e9524111-af27-47c1-8834-a6c6bc03b4ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] No waiting events found dispatching network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:06:40 np0005466031 nova_compute[235803]: 2025-10-02 13:06:40.408 2 WARNING nova.compute.manager [req-d8f69764-eddd-435b-af62-b9ec11632418 req-e9524111-af27-47c1-8834-a6c6bc03b4ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received unexpected event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba for instance with vm_state suspended and task_state resuming.#033[00m
Oct  2 09:06:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:40.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:40 np0005466031 nova_compute[235803]: 2025-10-02 13:06:40.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.582 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for fa8f170f-4839-4548-bdf9-d4a307880023 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.583 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410401.5822563, fa8f170f-4839-4548-bdf9-d4a307880023 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.583 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] VM Started (Lifecycle Event)#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.599 2 DEBUG nova.compute.manager [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.599 2 DEBUG nova.objects.instance [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lazy-loading 'pci_devices' on Instance uuid fa8f170f-4839-4548-bdf9-d4a307880023 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.620 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.623 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.628 2 INFO nova.virt.libvirt.driver [-] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Instance running successfully.#033[00m
Oct  2 09:06:41 np0005466031 virtqemud[235323]: argument unsupported: QEMU guest agent is not configured
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.630 2 DEBUG nova.virt.libvirt.guest [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.630 2 DEBUG nova.compute.manager [None req-4a40d309-1fe6-4a4c-949d-8fe0bb14a79c 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.667 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.668 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410401.5856988, fa8f170f-4839-4548-bdf9-d4a307880023 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.668 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.731 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:06:41 np0005466031 nova_compute[235803]: 2025-10-02 13:06:41.733 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:06:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:06:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:42.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:06:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:06:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 64K writes, 263K keys, 64K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.06 MB/s#012Cumulative WAL: 64K writes, 23K syncs, 2.75 writes per sync, written: 0.27 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 52.86 MB, 0.09 MB/s#012Interval WAL: 10K writes, 3712 syncs, 2.77 writes per sync, written: 0.05 GB, 0.09 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:06:42 np0005466031 nova_compute[235803]: 2025-10-02 13:06:42.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:06:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:42.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:06:42 np0005466031 nova_compute[235803]: 2025-10-02 13:06:42.625 2 DEBUG nova.compute.manager [req-fc801d8c-7619-4dce-9ac9-81e3e405fd69 req-700f8c86-439c-4423-963d-cc488b62cd37 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:06:42 np0005466031 nova_compute[235803]: 2025-10-02 13:06:42.625 2 DEBUG oslo_concurrency.lockutils [req-fc801d8c-7619-4dce-9ac9-81e3e405fd69 req-700f8c86-439c-4423-963d-cc488b62cd37 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:42 np0005466031 nova_compute[235803]: 2025-10-02 13:06:42.625 2 DEBUG oslo_concurrency.lockutils [req-fc801d8c-7619-4dce-9ac9-81e3e405fd69 req-700f8c86-439c-4423-963d-cc488b62cd37 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:42 np0005466031 nova_compute[235803]: 2025-10-02 13:06:42.626 2 DEBUG oslo_concurrency.lockutils [req-fc801d8c-7619-4dce-9ac9-81e3e405fd69 req-700f8c86-439c-4423-963d-cc488b62cd37 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:42 np0005466031 nova_compute[235803]: 2025-10-02 13:06:42.626 2 DEBUG nova.compute.manager [req-fc801d8c-7619-4dce-9ac9-81e3e405fd69 req-700f8c86-439c-4423-963d-cc488b62cd37 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] No waiting events found dispatching network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:06:42 np0005466031 nova_compute[235803]: 2025-10-02 13:06:42.626 2 WARNING nova.compute.manager [req-fc801d8c-7619-4dce-9ac9-81e3e405fd69 req-700f8c86-439c-4423-963d-cc488b62cd37 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received unexpected event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba for instance with vm_state active and task_state None.#033[00m
Oct  2 09:06:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:44.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:44.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:45 np0005466031 nova_compute[235803]: 2025-10-02 13:06:45.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:46.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:46.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:46 np0005466031 podman[319479]: 2025-10-02 13:06:46.632462884 +0000 UTC m=+0.056965103 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:06:46 np0005466031 podman[319478]: 2025-10-02 13:06:46.635308896 +0000 UTC m=+0.060738731 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:06:46 np0005466031 nova_compute[235803]: 2025-10-02 13:06:46.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:46 np0005466031 nova_compute[235803]: 2025-10-02 13:06:46.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:06:46 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #57. Immutable memtables: 13.
Oct  2 09:06:47 np0005466031 nova_compute[235803]: 2025-10-02 13:06:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:47 np0005466031 nova_compute[235803]: 2025-10-02 13:06:47.653 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:47 np0005466031 nova_compute[235803]: 2025-10-02 13:06:47.653 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:06:47 np0005466031 nova_compute[235803]: 2025-10-02 13:06:47.678 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:06:47 np0005466031 nova_compute[235803]: 2025-10-02 13:06:47.751 2 DEBUG nova.objects.instance [None req-05c0f00a-8d4b-4264-8d22-3dd62d04fe1b 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lazy-loading 'pci_devices' on Instance uuid fa8f170f-4839-4548-bdf9-d4a307880023 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:06:47 np0005466031 nova_compute[235803]: 2025-10-02 13:06:47.865 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410407.8649213, fa8f170f-4839-4548-bdf9-d4a307880023 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:06:47 np0005466031 nova_compute[235803]: 2025-10-02 13:06:47.865 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:06:47 np0005466031 nova_compute[235803]: 2025-10-02 13:06:47.884 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:06:47 np0005466031 nova_compute[235803]: 2025-10-02 13:06:47.888 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:06:47 np0005466031 nova_compute[235803]: 2025-10-02 13:06:47.909 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  2 09:06:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:48.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:06:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:48.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:06:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:06:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:50.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:06:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:50.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:50 np0005466031 nova_compute[235803]: 2025-10-02 13:06:50.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:51 np0005466031 kernel: tap725263f1-e1 (unregistering): left promiscuous mode
Oct  2 09:06:51 np0005466031 NetworkManager[44907]: <info>  [1759410411.0149] device (tap725263f1-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:06:51 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:51Z|00771|binding|INFO|Releasing lport 725263f1-e117-427b-90e3-9e3c70306cba from this chassis (sb_readonly=0)
Oct  2 09:06:51 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:51Z|00772|binding|INFO|Setting lport 725263f1-e117-427b-90e3-9e3c70306cba down in Southbound
Oct  2 09:06:51 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:51Z|00773|binding|INFO|Removing iface tap725263f1-e1 ovn-installed in OVS
Oct  2 09:06:51 np0005466031 nova_compute[235803]: 2025-10-02 13:06:51.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:51 np0005466031 nova_compute[235803]: 2025-10-02 13:06:51.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:51.047 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:ea:ae 10.100.0.4'], port_security=['fa:16:3e:de:ea:ae 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fa8f170f-4839-4548-bdf9-d4a307880023', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f85e173-ba03-413e-9a20-267dffdab135', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68aecf9157774d368c016e89768f535f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '800abc01-9f04-4b1d-9c7f-1217b9fedcb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=660c8205-21a8-4011-813f-b928006abd43, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=725263f1-e117-427b-90e3-9e3c70306cba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:06:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:51.048 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 725263f1-e117-427b-90e3-9e3c70306cba in datapath 0f85e173-ba03-413e-9a20-267dffdab135 unbound from our chassis#033[00m
Oct  2 09:06:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:51.049 141898 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0f85e173-ba03-413e-9a20-267dffdab135 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 09:06:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:51.050 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[16fdc3d1-0289-4118-a05e-bc1be70d40ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:51 np0005466031 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000be.scope: Deactivated successfully.
Oct  2 09:06:51 np0005466031 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000be.scope: Consumed 5.544s CPU time.
Oct  2 09:06:51 np0005466031 systemd-machined[192227]: Machine qemu-88-instance-000000be terminated.
Oct  2 09:06:51 np0005466031 nova_compute[235803]: 2025-10-02 13:06:51.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:51 np0005466031 nova_compute[235803]: 2025-10-02 13:06:51.207 2 DEBUG nova.compute.manager [None req-05c0f00a-8d4b-4264-8d22-3dd62d04fe1b 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:06:52 np0005466031 nova_compute[235803]: 2025-10-02 13:06:52.259 2 DEBUG nova.compute.manager [req-7bdab0d8-bf9b-477d-a202-e53bd1c65316 req-0a9de704-38d9-48fb-a21a-e77204be27c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-unplugged-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:06:52 np0005466031 nova_compute[235803]: 2025-10-02 13:06:52.260 2 DEBUG oslo_concurrency.lockutils [req-7bdab0d8-bf9b-477d-a202-e53bd1c65316 req-0a9de704-38d9-48fb-a21a-e77204be27c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:52 np0005466031 nova_compute[235803]: 2025-10-02 13:06:52.260 2 DEBUG oslo_concurrency.lockutils [req-7bdab0d8-bf9b-477d-a202-e53bd1c65316 req-0a9de704-38d9-48fb-a21a-e77204be27c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:52 np0005466031 nova_compute[235803]: 2025-10-02 13:06:52.260 2 DEBUG oslo_concurrency.lockutils [req-7bdab0d8-bf9b-477d-a202-e53bd1c65316 req-0a9de704-38d9-48fb-a21a-e77204be27c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:52 np0005466031 nova_compute[235803]: 2025-10-02 13:06:52.260 2 DEBUG nova.compute.manager [req-7bdab0d8-bf9b-477d-a202-e53bd1c65316 req-0a9de704-38d9-48fb-a21a-e77204be27c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] No waiting events found dispatching network-vif-unplugged-725263f1-e117-427b-90e3-9e3c70306cba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:06:52 np0005466031 nova_compute[235803]: 2025-10-02 13:06:52.260 2 WARNING nova.compute.manager [req-7bdab0d8-bf9b-477d-a202-e53bd1c65316 req-0a9de704-38d9-48fb-a21a-e77204be27c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received unexpected event network-vif-unplugged-725263f1-e117-427b-90e3-9e3c70306cba for instance with vm_state suspended and task_state None.#033[00m
Oct  2 09:06:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:06:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:52.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:06:52 np0005466031 nova_compute[235803]: 2025-10-02 13:06:52.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:52.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:54.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:54.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:54 np0005466031 nova_compute[235803]: 2025-10-02 13:06:54.621 2 DEBUG nova.compute.manager [req-d89cb28c-6f4d-48f1-a3ac-f968dc2eaa65 req-3646b9b5-743a-489d-a605-12a44a07cbc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:06:54 np0005466031 nova_compute[235803]: 2025-10-02 13:06:54.621 2 DEBUG oslo_concurrency.lockutils [req-d89cb28c-6f4d-48f1-a3ac-f968dc2eaa65 req-3646b9b5-743a-489d-a605-12a44a07cbc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:54 np0005466031 nova_compute[235803]: 2025-10-02 13:06:54.621 2 DEBUG oslo_concurrency.lockutils [req-d89cb28c-6f4d-48f1-a3ac-f968dc2eaa65 req-3646b9b5-743a-489d-a605-12a44a07cbc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:54 np0005466031 nova_compute[235803]: 2025-10-02 13:06:54.622 2 DEBUG oslo_concurrency.lockutils [req-d89cb28c-6f4d-48f1-a3ac-f968dc2eaa65 req-3646b9b5-743a-489d-a605-12a44a07cbc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:54 np0005466031 nova_compute[235803]: 2025-10-02 13:06:54.622 2 DEBUG nova.compute.manager [req-d89cb28c-6f4d-48f1-a3ac-f968dc2eaa65 req-3646b9b5-743a-489d-a605-12a44a07cbc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] No waiting events found dispatching network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:06:54 np0005466031 nova_compute[235803]: 2025-10-02 13:06:54.622 2 WARNING nova.compute.manager [req-d89cb28c-6f4d-48f1-a3ac-f968dc2eaa65 req-3646b9b5-743a-489d-a605-12a44a07cbc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received unexpected event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba for instance with vm_state suspended and task_state None.#033[00m
Oct  2 09:06:55 np0005466031 nova_compute[235803]: 2025-10-02 13:06:55.345 2 INFO nova.compute.manager [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Resuming#033[00m
Oct  2 09:06:55 np0005466031 nova_compute[235803]: 2025-10-02 13:06:55.346 2 DEBUG nova.objects.instance [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lazy-loading 'flavor' on Instance uuid fa8f170f-4839-4548-bdf9-d4a307880023 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:06:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:55 np0005466031 nova_compute[235803]: 2025-10-02 13:06:55.692 2 DEBUG oslo_concurrency.lockutils [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquiring lock "refresh_cache-fa8f170f-4839-4548-bdf9-d4a307880023" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:06:55 np0005466031 nova_compute[235803]: 2025-10-02 13:06:55.692 2 DEBUG oslo_concurrency.lockutils [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquired lock "refresh_cache-fa8f170f-4839-4548-bdf9-d4a307880023" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:06:55 np0005466031 nova_compute[235803]: 2025-10-02 13:06:55.693 2 DEBUG nova.network.neutron [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:06:55 np0005466031 nova_compute[235803]: 2025-10-02 13:06:55.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:56.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:56.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:57 np0005466031 nova_compute[235803]: 2025-10-02 13:06:57.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:57 np0005466031 nova_compute[235803]: 2025-10-02 13:06:57.996 2 DEBUG nova.network.neutron [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Updating instance_info_cache with network_info: [{"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.026 2 DEBUG oslo_concurrency.lockutils [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Releasing lock "refresh_cache-fa8f170f-4839-4548-bdf9-d4a307880023" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.030 2 DEBUG nova.virt.libvirt.vif [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:06:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1635210967',display_name='tempest-TestServerAdvancedOps-server-1635210967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1635210967',id=190,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:06:22Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='68aecf9157774d368c016e89768f535f',ramdisk_id='',reservation_id='r-o4c27txu',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-2117170196',owner_user_name='tempest-TestServerAdvancedOps-2117170196-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:06:51Z,user_data=None,user_id='7c99b382e3ea4a03bbcf5bd8e2322243',uuid=fa8f170f-4839-4548-bdf9-d4a307880023,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.030 2 DEBUG nova.network.os_vif_util [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Converting VIF {"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.031 2 DEBUG nova.network.os_vif_util [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:ea:ae,bridge_name='br-int',has_traffic_filtering=True,id=725263f1-e117-427b-90e3-9e3c70306cba,network=Network(0f85e173-ba03-413e-9a20-267dffdab135),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap725263f1-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.031 2 DEBUG os_vif [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:ea:ae,bridge_name='br-int',has_traffic_filtering=True,id=725263f1-e117-427b-90e3-9e3c70306cba,network=Network(0f85e173-ba03-413e-9a20-267dffdab135),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap725263f1-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.033 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap725263f1-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.036 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap725263f1-e1, col_values=(('external_ids', {'iface-id': '725263f1-e117-427b-90e3-9e3c70306cba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:ea:ae', 'vm-uuid': 'fa8f170f-4839-4548-bdf9-d4a307880023'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.036 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.036 2 INFO os_vif [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:ea:ae,bridge_name='br-int',has_traffic_filtering=True,id=725263f1-e117-427b-90e3-9e3c70306cba,network=Network(0f85e173-ba03-413e-9a20-267dffdab135),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap725263f1-e1')#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.061 2 DEBUG nova.objects.instance [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lazy-loading 'numa_topology' on Instance uuid fa8f170f-4839-4548-bdf9-d4a307880023 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:06:58 np0005466031 kernel: tap725263f1-e1: entered promiscuous mode
Oct  2 09:06:58 np0005466031 NetworkManager[44907]: <info>  [1759410418.1273] manager: (tap725263f1-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/347)
Oct  2 09:06:58 np0005466031 systemd-udevd[319603]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:06:58 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:58Z|00774|binding|INFO|Claiming lport 725263f1-e117-427b-90e3-9e3c70306cba for this chassis.
Oct  2 09:06:58 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:58Z|00775|binding|INFO|725263f1-e117-427b-90e3-9e3c70306cba: Claiming fa:16:3e:de:ea:ae 10.100.0.4
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:58 np0005466031 NetworkManager[44907]: <info>  [1759410418.1838] device (tap725263f1-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:06:58 np0005466031 NetworkManager[44907]: <info>  [1759410418.1847] device (tap725263f1-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:06:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:58.185 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:ea:ae 10.100.0.4'], port_security=['fa:16:3e:de:ea:ae 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fa8f170f-4839-4548-bdf9-d4a307880023', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f85e173-ba03-413e-9a20-267dffdab135', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68aecf9157774d368c016e89768f535f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '800abc01-9f04-4b1d-9c7f-1217b9fedcb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=660c8205-21a8-4011-813f-b928006abd43, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=725263f1-e117-427b-90e3-9e3c70306cba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:06:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:58.187 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 725263f1-e117-427b-90e3-9e3c70306cba in datapath 0f85e173-ba03-413e-9a20-267dffdab135 bound to our chassis#033[00m
Oct  2 09:06:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:58.187 141898 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0f85e173-ba03-413e-9a20-267dffdab135 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 09:06:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:06:58.188 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[54373ab8-40c8-43a9-832c-57be09edfc5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:58 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:58Z|00776|binding|INFO|Setting lport 725263f1-e117-427b-90e3-9e3c70306cba ovn-installed in OVS
Oct  2 09:06:58 np0005466031 ovn_controller[132413]: 2025-10-02T13:06:58Z|00777|binding|INFO|Setting lport 725263f1-e117-427b-90e3-9e3c70306cba up in Southbound
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:58 np0005466031 nova_compute[235803]: 2025-10-02 13:06:58.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:58 np0005466031 systemd-machined[192227]: New machine qemu-89-instance-000000be.
Oct  2 09:06:58 np0005466031 systemd[1]: Started Virtual Machine qemu-89-instance-000000be.
Oct  2 09:06:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:58.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:06:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:58.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.294 2 DEBUG nova.compute.manager [req-ba72d970-e31c-4c8d-b850-03502b50aea6 req-49761615-c8a6-42c4-8a45-3c776370cd79 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.294 2 DEBUG oslo_concurrency.lockutils [req-ba72d970-e31c-4c8d-b850-03502b50aea6 req-49761615-c8a6-42c4-8a45-3c776370cd79 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.294 2 DEBUG oslo_concurrency.lockutils [req-ba72d970-e31c-4c8d-b850-03502b50aea6 req-49761615-c8a6-42c4-8a45-3c776370cd79 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.294 2 DEBUG oslo_concurrency.lockutils [req-ba72d970-e31c-4c8d-b850-03502b50aea6 req-49761615-c8a6-42c4-8a45-3c776370cd79 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.294 2 DEBUG nova.compute.manager [req-ba72d970-e31c-4c8d-b850-03502b50aea6 req-49761615-c8a6-42c4-8a45-3c776370cd79 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] No waiting events found dispatching network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.295 2 WARNING nova.compute.manager [req-ba72d970-e31c-4c8d-b850-03502b50aea6 req-49761615-c8a6-42c4-8a45-3c776370cd79 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received unexpected event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba for instance with vm_state suspended and task_state resuming.#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.313 2 DEBUG nova.virt.libvirt.host [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Removed pending event for fa8f170f-4839-4548-bdf9-d4a307880023 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.313 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410419.3131576, fa8f170f-4839-4548-bdf9-d4a307880023 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.313 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] VM Started (Lifecycle Event)#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.336 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.354 2 DEBUG nova.compute.manager [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.355 2 DEBUG nova.objects.instance [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lazy-loading 'pci_devices' on Instance uuid fa8f170f-4839-4548-bdf9-d4a307880023 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.358 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.379 2 INFO nova.virt.libvirt.driver [-] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Instance running successfully.#033[00m
Oct  2 09:06:59 np0005466031 virtqemud[235323]: argument unsupported: QEMU guest agent is not configured
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.382 2 DEBUG nova.virt.libvirt.guest [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.382 2 DEBUG nova.compute.manager [None req-df987489-41c1-42d3-a189-9de856b683fa 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.387 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.387 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410419.3174253, fa8f170f-4839-4548-bdf9-d4a307880023 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.387 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.442 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:06:59 np0005466031 nova_compute[235803]: 2025-10-02 13:06:59.445 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:07:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:00.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:00.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:00 np0005466031 nova_compute[235803]: 2025-10-02 13:07:00.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:02 np0005466031 nova_compute[235803]: 2025-10-02 13:07:02.210 2 DEBUG nova.compute.manager [req-6866df53-ca14-4f6f-89b7-581fd32777b8 req-e291a28c-aa3b-4aa2-9e27-0169deaf79a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:02 np0005466031 nova_compute[235803]: 2025-10-02 13:07:02.210 2 DEBUG oslo_concurrency.lockutils [req-6866df53-ca14-4f6f-89b7-581fd32777b8 req-e291a28c-aa3b-4aa2-9e27-0169deaf79a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:02 np0005466031 nova_compute[235803]: 2025-10-02 13:07:02.211 2 DEBUG oslo_concurrency.lockutils [req-6866df53-ca14-4f6f-89b7-581fd32777b8 req-e291a28c-aa3b-4aa2-9e27-0169deaf79a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:02 np0005466031 nova_compute[235803]: 2025-10-02 13:07:02.211 2 DEBUG oslo_concurrency.lockutils [req-6866df53-ca14-4f6f-89b7-581fd32777b8 req-e291a28c-aa3b-4aa2-9e27-0169deaf79a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:02 np0005466031 nova_compute[235803]: 2025-10-02 13:07:02.211 2 DEBUG nova.compute.manager [req-6866df53-ca14-4f6f-89b7-581fd32777b8 req-e291a28c-aa3b-4aa2-9e27-0169deaf79a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] No waiting events found dispatching network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:07:02 np0005466031 nova_compute[235803]: 2025-10-02 13:07:02.211 2 WARNING nova.compute.manager [req-6866df53-ca14-4f6f-89b7-581fd32777b8 req-e291a28c-aa3b-4aa2-9e27-0169deaf79a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received unexpected event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba for instance with vm_state active and task_state None.#033[00m
Oct  2 09:07:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:02.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:02 np0005466031 nova_compute[235803]: 2025-10-02 13:07:02.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:02.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:04.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.316 2 DEBUG oslo_concurrency.lockutils [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.316 2 DEBUG oslo_concurrency.lockutils [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.317 2 DEBUG oslo_concurrency.lockutils [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.317 2 DEBUG oslo_concurrency.lockutils [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.317 2 DEBUG oslo_concurrency.lockutils [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.318 2 INFO nova.compute.manager [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Terminating instance#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.320 2 DEBUG nova.compute.manager [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:07:04 np0005466031 kernel: tap725263f1-e1 (unregistering): left promiscuous mode
Oct  2 09:07:04 np0005466031 NetworkManager[44907]: <info>  [1759410424.3899] device (tap725263f1-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:07:04 np0005466031 ovn_controller[132413]: 2025-10-02T13:07:04Z|00778|binding|INFO|Releasing lport 725263f1-e117-427b-90e3-9e3c70306cba from this chassis (sb_readonly=0)
Oct  2 09:07:04 np0005466031 ovn_controller[132413]: 2025-10-02T13:07:04Z|00779|binding|INFO|Setting lport 725263f1-e117-427b-90e3-9e3c70306cba down in Southbound
Oct  2 09:07:04 np0005466031 ovn_controller[132413]: 2025-10-02T13:07:04Z|00780|binding|INFO|Removing iface tap725263f1-e1 ovn-installed in OVS
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:07:04.402 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:ea:ae 10.100.0.4'], port_security=['fa:16:3e:de:ea:ae 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fa8f170f-4839-4548-bdf9-d4a307880023', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f85e173-ba03-413e-9a20-267dffdab135', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68aecf9157774d368c016e89768f535f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '800abc01-9f04-4b1d-9c7f-1217b9fedcb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=660c8205-21a8-4011-813f-b928006abd43, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=725263f1-e117-427b-90e3-9e3c70306cba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:07:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:07:04.403 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 725263f1-e117-427b-90e3-9e3c70306cba in datapath 0f85e173-ba03-413e-9a20-267dffdab135 unbound from our chassis#033[00m
Oct  2 09:07:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:07:04.404 141898 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0f85e173-ba03-413e-9a20-267dffdab135 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 09:07:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:07:04.406 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b0cb0685-a86e-40aa-a465-4f8a28426f6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:04 np0005466031 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000be.scope: Deactivated successfully.
Oct  2 09:07:04 np0005466031 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000be.scope: Consumed 2.095s CPU time.
Oct  2 09:07:04 np0005466031 systemd-machined[192227]: Machine qemu-89-instance-000000be terminated.
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.562 2 INFO nova.virt.libvirt.driver [-] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Instance destroyed successfully.#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.562 2 DEBUG nova.objects.instance [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lazy-loading 'resources' on Instance uuid fa8f170f-4839-4548-bdf9-d4a307880023 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:07:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:04.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.590 2 DEBUG nova.virt.libvirt.vif [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:06:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1635210967',display_name='tempest-TestServerAdvancedOps-server-1635210967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1635210967',id=190,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:06:22Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='68aecf9157774d368c016e89768f535f',ramdisk_id='',reservation_id='r-o4c27txu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-2117170196',owner_user_name='tempest-TestServerAdvancedOps-2117170196-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:06:59Z,user_data=None,user_id='7c99b382e3ea4a03bbcf5bd8e2322243',uuid=fa8f170f-4839-4548-bdf9-d4a307880023,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.590 2 DEBUG nova.network.os_vif_util [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Converting VIF {"id": "725263f1-e117-427b-90e3-9e3c70306cba", "address": "fa:16:3e:de:ea:ae", "network": {"id": "0f85e173-ba03-413e-9a20-267dffdab135", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1508803742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "68aecf9157774d368c016e89768f535f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap725263f1-e1", "ovs_interfaceid": "725263f1-e117-427b-90e3-9e3c70306cba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.591 2 DEBUG nova.network.os_vif_util [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:ea:ae,bridge_name='br-int',has_traffic_filtering=True,id=725263f1-e117-427b-90e3-9e3c70306cba,network=Network(0f85e173-ba03-413e-9a20-267dffdab135),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap725263f1-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.591 2 DEBUG os_vif [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:ea:ae,bridge_name='br-int',has_traffic_filtering=True,id=725263f1-e117-427b-90e3-9e3c70306cba,network=Network(0f85e173-ba03-413e-9a20-267dffdab135),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap725263f1-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.592 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap725263f1-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.599 2 INFO os_vif [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:ea:ae,bridge_name='br-int',has_traffic_filtering=True,id=725263f1-e117-427b-90e3-9e3c70306cba,network=Network(0f85e173-ba03-413e-9a20-267dffdab135),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap725263f1-e1')#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.815 2 DEBUG nova.compute.manager [req-56fc55bc-7b97-404f-9124-9b736e2c56a9 req-78bdb025-4188-40fa-8ce1-4a2fe9892f0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-unplugged-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.816 2 DEBUG oslo_concurrency.lockutils [req-56fc55bc-7b97-404f-9124-9b736e2c56a9 req-78bdb025-4188-40fa-8ce1-4a2fe9892f0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.816 2 DEBUG oslo_concurrency.lockutils [req-56fc55bc-7b97-404f-9124-9b736e2c56a9 req-78bdb025-4188-40fa-8ce1-4a2fe9892f0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.816 2 DEBUG oslo_concurrency.lockutils [req-56fc55bc-7b97-404f-9124-9b736e2c56a9 req-78bdb025-4188-40fa-8ce1-4a2fe9892f0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.816 2 DEBUG nova.compute.manager [req-56fc55bc-7b97-404f-9124-9b736e2c56a9 req-78bdb025-4188-40fa-8ce1-4a2fe9892f0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] No waiting events found dispatching network-vif-unplugged-725263f1-e117-427b-90e3-9e3c70306cba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:07:04 np0005466031 nova_compute[235803]: 2025-10-02 13:07:04.817 2 DEBUG nova.compute.manager [req-56fc55bc-7b97-404f-9124-9b736e2c56a9 req-78bdb025-4188-40fa-8ce1-4a2fe9892f0b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-unplugged-725263f1-e117-427b-90e3-9e3c70306cba for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:07:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:05 np0005466031 nova_compute[235803]: 2025-10-02 13:07:05.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:06 np0005466031 nova_compute[235803]: 2025-10-02 13:07:06.113 2 INFO nova.virt.libvirt.driver [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Deleting instance files /var/lib/nova/instances/fa8f170f-4839-4548-bdf9-d4a307880023_del#033[00m
Oct  2 09:07:06 np0005466031 nova_compute[235803]: 2025-10-02 13:07:06.114 2 INFO nova.virt.libvirt.driver [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Deletion of /var/lib/nova/instances/fa8f170f-4839-4548-bdf9-d4a307880023_del complete#033[00m
Oct  2 09:07:06 np0005466031 nova_compute[235803]: 2025-10-02 13:07:06.191 2 INFO nova.compute.manager [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Took 1.87 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:07:06 np0005466031 nova_compute[235803]: 2025-10-02 13:07:06.191 2 DEBUG oslo.service.loopingcall [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:07:06 np0005466031 nova_compute[235803]: 2025-10-02 13:07:06.192 2 DEBUG nova.compute.manager [-] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:07:06 np0005466031 nova_compute[235803]: 2025-10-02 13:07:06.192 2 DEBUG nova.network.neutron [-] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:07:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:07:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:06.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:07:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:07:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:06.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:07:07 np0005466031 nova_compute[235803]: 2025-10-02 13:07:07.200 2 DEBUG nova.compute.manager [req-cbf49a58-126c-45dc-9748-2c59edc938d0 req-756d3593-3a3f-428f-9ce1-9725d4e2be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:07 np0005466031 nova_compute[235803]: 2025-10-02 13:07:07.200 2 DEBUG oslo_concurrency.lockutils [req-cbf49a58-126c-45dc-9748-2c59edc938d0 req-756d3593-3a3f-428f-9ce1-9725d4e2be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:07 np0005466031 nova_compute[235803]: 2025-10-02 13:07:07.200 2 DEBUG oslo_concurrency.lockutils [req-cbf49a58-126c-45dc-9748-2c59edc938d0 req-756d3593-3a3f-428f-9ce1-9725d4e2be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:07 np0005466031 nova_compute[235803]: 2025-10-02 13:07:07.201 2 DEBUG oslo_concurrency.lockutils [req-cbf49a58-126c-45dc-9748-2c59edc938d0 req-756d3593-3a3f-428f-9ce1-9725d4e2be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:07 np0005466031 nova_compute[235803]: 2025-10-02 13:07:07.201 2 DEBUG nova.compute.manager [req-cbf49a58-126c-45dc-9748-2c59edc938d0 req-756d3593-3a3f-428f-9ce1-9725d4e2be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] No waiting events found dispatching network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:07:07 np0005466031 nova_compute[235803]: 2025-10-02 13:07:07.201 2 WARNING nova.compute.manager [req-cbf49a58-126c-45dc-9748-2c59edc938d0 req-756d3593-3a3f-428f-9ce1-9725d4e2be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received unexpected event network-vif-plugged-725263f1-e117-427b-90e3-9e3c70306cba for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:07:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:08.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:08.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:08 np0005466031 nova_compute[235803]: 2025-10-02 13:07:08.629 2 DEBUG nova.network.neutron [-] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:07:08 np0005466031 nova_compute[235803]: 2025-10-02 13:07:08.664 2 INFO nova.compute.manager [-] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Took 2.47 seconds to deallocate network for instance.#033[00m
Oct  2 09:07:08 np0005466031 nova_compute[235803]: 2025-10-02 13:07:08.723 2 DEBUG oslo_concurrency.lockutils [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:08 np0005466031 nova_compute[235803]: 2025-10-02 13:07:08.723 2 DEBUG oslo_concurrency.lockutils [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:08 np0005466031 nova_compute[235803]: 2025-10-02 13:07:08.757 2 DEBUG nova.compute.manager [req-2e2195db-8c58-44ba-98bf-eb26ab3d2748 req-87e3717b-ec1f-48fa-98a8-270077be0980 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Received event network-vif-deleted-725263f1-e117-427b-90e3-9e3c70306cba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:08 np0005466031 nova_compute[235803]: 2025-10-02 13:07:08.791 2 DEBUG oslo_concurrency.processutils [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:07:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:07:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2568518845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:07:09 np0005466031 nova_compute[235803]: 2025-10-02 13:07:09.253 2 DEBUG oslo_concurrency.processutils [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:07:09 np0005466031 nova_compute[235803]: 2025-10-02 13:07:09.259 2 DEBUG nova.compute.provider_tree [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:07:09 np0005466031 nova_compute[235803]: 2025-10-02 13:07:09.273 2 DEBUG nova.scheduler.client.report [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:07:09 np0005466031 nova_compute[235803]: 2025-10-02 13:07:09.300 2 DEBUG oslo_concurrency.lockutils [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:09 np0005466031 nova_compute[235803]: 2025-10-02 13:07:09.357 2 INFO nova.scheduler.client.report [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Deleted allocations for instance fa8f170f-4839-4548-bdf9-d4a307880023#033[00m
Oct  2 09:07:09 np0005466031 nova_compute[235803]: 2025-10-02 13:07:09.584 2 DEBUG oslo_concurrency.lockutils [None req-7a6f78c1-2630-40b6-a554-9d4752d98232 7c99b382e3ea4a03bbcf5bd8e2322243 68aecf9157774d368c016e89768f535f - - default default] Lock "fa8f170f-4839-4548-bdf9-d4a307880023" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:09 np0005466031 nova_compute[235803]: 2025-10-02 13:07:09.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:10.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:10.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:10 np0005466031 nova_compute[235803]: 2025-10-02 13:07:10.662 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:10 np0005466031 podman[319723]: 2025-10-02 13:07:10.675118184 +0000 UTC m=+0.090676704 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:07:10 np0005466031 podman[319724]: 2025-10-02 13:07:10.707729643 +0000 UTC m=+0.123023835 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:07:10 np0005466031 nova_compute[235803]: 2025-10-02 13:07:10.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:11 np0005466031 nova_compute[235803]: 2025-10-02 13:07:11.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:12.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:12.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:12 np0005466031 nova_compute[235803]: 2025-10-02 13:07:12.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.279427) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433279494, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2414, "num_deletes": 253, "total_data_size": 5716916, "memory_usage": 5788400, "flush_reason": "Manual Compaction"}
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433298959, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 3749607, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66245, "largest_seqno": 68654, "table_properties": {"data_size": 3739781, "index_size": 6191, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20826, "raw_average_key_size": 20, "raw_value_size": 3719986, "raw_average_value_size": 3697, "num_data_blocks": 269, "num_entries": 1006, "num_filter_entries": 1006, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410224, "oldest_key_time": 1759410224, "file_creation_time": 1759410433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 19629 microseconds, and 9956 cpu microseconds.
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.299061) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 3749607 bytes OK
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.299089) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.300636) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.300649) EVENT_LOG_v1 {"time_micros": 1759410433300644, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.300665) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 5706233, prev total WAL file size 5706497, number of live WAL files 2.
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.301789) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(3661KB)], [135(10MB)]
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433301922, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 14457379, "oldest_snapshot_seqno": -1}
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 9082 keys, 12539334 bytes, temperature: kUnknown
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433377667, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 12539334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12479597, "index_size": 35944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 238661, "raw_average_key_size": 26, "raw_value_size": 12319186, "raw_average_value_size": 1356, "num_data_blocks": 1377, "num_entries": 9082, "num_filter_entries": 9082, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759410433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.377978) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12539334 bytes
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.379299) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.7 rd, 165.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.2 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 9607, records dropped: 525 output_compression: NoCompression
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.379321) EVENT_LOG_v1 {"time_micros": 1759410433379312, "job": 86, "event": "compaction_finished", "compaction_time_micros": 75818, "compaction_time_cpu_micros": 36750, "output_level": 6, "num_output_files": 1, "total_output_size": 12539334, "num_input_records": 9607, "num_output_records": 9082, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433380691, "job": 86, "event": "table_file_deletion", "file_number": 137}
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433384067, "job": 86, "event": "table_file_deletion", "file_number": 135}
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.301655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.384187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.384190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.384192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.384193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:07:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:07:13.384195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:07:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:14.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:14.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:14 np0005466031 nova_compute[235803]: 2025-10-02 13:07:14.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:15 np0005466031 nova_compute[235803]: 2025-10-02 13:07:15.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:15 np0005466031 nova_compute[235803]: 2025-10-02 13:07:15.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:16.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:16.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:07:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:07:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:07:17 np0005466031 podman[319955]: 2025-10-02 13:07:17.629976314 +0000 UTC m=+0.054368107 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Oct  2 09:07:17 np0005466031 podman[319954]: 2025-10-02 13:07:17.633387693 +0000 UTC m=+0.058825196 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:07:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:07:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:18.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:07:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:07:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:18.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:07:19 np0005466031 nova_compute[235803]: 2025-10-02 13:07:19.560 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410424.5592217, fa8f170f-4839-4548-bdf9-d4a307880023 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:07:19 np0005466031 nova_compute[235803]: 2025-10-02 13:07:19.560 2 INFO nova.compute.manager [-] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:07:19 np0005466031 nova_compute[235803]: 2025-10-02 13:07:19.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005466031 nova_compute[235803]: 2025-10-02 13:07:19.690 2 DEBUG nova.compute.manager [None req-99f3f53f-62b7-48c4-b094-f26a3856e664 - - - - - -] [instance: fa8f170f-4839-4548-bdf9-d4a307880023] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:07:20 np0005466031 nova_compute[235803]: 2025-10-02 13:07:20.094 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:20.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:07:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:20.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:07:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:20 np0005466031 nova_compute[235803]: 2025-10-02 13:07:20.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:21 np0005466031 nova_compute[235803]: 2025-10-02 13:07:21.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:21 np0005466031 nova_compute[235803]: 2025-10-02 13:07:21.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:07:21 np0005466031 nova_compute[235803]: 2025-10-02 13:07:21.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:07:21 np0005466031 nova_compute[235803]: 2025-10-02 13:07:21.806 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:07:21 np0005466031 nova_compute[235803]: 2025-10-02 13:07:21.807 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:21 np0005466031 nova_compute[235803]: 2025-10-02 13:07:21.808 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:21 np0005466031 nova_compute[235803]: 2025-10-02 13:07:21.869 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:21 np0005466031 nova_compute[235803]: 2025-10-02 13:07:21.869 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:21 np0005466031 nova_compute[235803]: 2025-10-02 13:07:21.870 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:21 np0005466031 nova_compute[235803]: 2025-10-02 13:07:21.870 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:07:21 np0005466031 nova_compute[235803]: 2025-10-02 13:07:21.870 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:07:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:07:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/674363330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:07:22 np0005466031 nova_compute[235803]: 2025-10-02 13:07:22.326 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:07:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:22.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:22 np0005466031 nova_compute[235803]: 2025-10-02 13:07:22.478 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:07:22 np0005466031 nova_compute[235803]: 2025-10-02 13:07:22.479 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4142MB free_disk=20.92196273803711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:07:22 np0005466031 nova_compute[235803]: 2025-10-02 13:07:22.479 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:22 np0005466031 nova_compute[235803]: 2025-10-02 13:07:22.479 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:22.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:22 np0005466031 nova_compute[235803]: 2025-10-02 13:07:22.639 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:07:22 np0005466031 nova_compute[235803]: 2025-10-02 13:07:22.640 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:07:22 np0005466031 nova_compute[235803]: 2025-10-02 13:07:22.657 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:07:22 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:07:22 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:07:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:07:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1430412347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:07:23 np0005466031 nova_compute[235803]: 2025-10-02 13:07:23.111 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:07:23 np0005466031 nova_compute[235803]: 2025-10-02 13:07:23.117 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:07:23 np0005466031 nova_compute[235803]: 2025-10-02 13:07:23.135 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:07:23 np0005466031 nova_compute[235803]: 2025-10-02 13:07:23.170 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:07:23 np0005466031 nova_compute[235803]: 2025-10-02 13:07:23.170 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:23 np0005466031 nova_compute[235803]: 2025-10-02 13:07:23.999 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:07:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:24.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:07:24 np0005466031 nova_compute[235803]: 2025-10-02 13:07:24.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:24.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:07:25.875 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:07:25.876 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:07:25.876 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:25 np0005466031 nova_compute[235803]: 2025-10-02 13:07:25.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:26.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:26.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:07:27.042 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:07:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:07:27.043 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:07:27 np0005466031 nova_compute[235803]: 2025-10-02 13:07:27.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:28.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:28.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:28 np0005466031 nova_compute[235803]: 2025-10-02 13:07:28.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:29 np0005466031 nova_compute[235803]: 2025-10-02 13:07:29.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:29 np0005466031 nova_compute[235803]: 2025-10-02 13:07:29.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:29 np0005466031 nova_compute[235803]: 2025-10-02 13:07:29.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:07:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:07:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:30.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:07:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:30.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:30 np0005466031 nova_compute[235803]: 2025-10-02 13:07:30.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:32.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:32.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:34.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:34 np0005466031 nova_compute[235803]: 2025-10-02 13:07:34.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:34.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:07:35.045 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:35 np0005466031 nova_compute[235803]: 2025-10-02 13:07:35.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:36.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:07:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:36.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:07:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:38.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:38.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:39 np0005466031 nova_compute[235803]: 2025-10-02 13:07:39.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:40.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:40.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:40 np0005466031 nova_compute[235803]: 2025-10-02 13:07:40.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:41 np0005466031 podman[320154]: 2025-10-02 13:07:41.622661966 +0000 UTC m=+0.050675031 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:07:41 np0005466031 podman[320155]: 2025-10-02 13:07:41.651031844 +0000 UTC m=+0.076682151 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:07:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:42.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:42.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:44.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:44 np0005466031 nova_compute[235803]: 2025-10-02 13:07:44.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:44.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:46 np0005466031 nova_compute[235803]: 2025-10-02 13:07:46.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:46.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:46.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:48.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:48 np0005466031 podman[320202]: 2025-10-02 13:07:48.621554576 +0000 UTC m=+0.055301645 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:07:48 np0005466031 podman[320203]: 2025-10-02 13:07:48.627555888 +0000 UTC m=+0.056349654 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct  2 09:07:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:48.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:49 np0005466031 nova_compute[235803]: 2025-10-02 13:07:49.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:50.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:07:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:50.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:07:51 np0005466031 nova_compute[235803]: 2025-10-02 13:07:51.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:52.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:52.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:54.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:54 np0005466031 nova_compute[235803]: 2025-10-02 13:07:54.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:07:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:54.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:07:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:56 np0005466031 nova_compute[235803]: 2025-10-02 13:07:56.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:56.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:56.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:07:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:58.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:07:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:07:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:58.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:59 np0005466031 nova_compute[235803]: 2025-10-02 13:07:59.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:08:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:00.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:08:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:08:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:00.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:08:01 np0005466031 nova_compute[235803]: 2025-10-02 13:08:01.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:02.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:08:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:02.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:08:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:04.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:04 np0005466031 nova_compute[235803]: 2025-10-02 13:08:04.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:04.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:06 np0005466031 nova_compute[235803]: 2025-10-02 13:08:06.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:06.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:06.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:07 np0005466031 ceph-osd[79023]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct  2 09:08:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:08.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:08.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:09 np0005466031 nova_compute[235803]: 2025-10-02 13:08:09.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:10.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:10 np0005466031 nova_compute[235803]: 2025-10-02 13:08:10.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:10.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:11 np0005466031 nova_compute[235803]: 2025-10-02 13:08:11.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.179 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "61bad754-8d82-465b-8545-25d700a6e146" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.180 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.200 2 DEBUG nova.compute.manager [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.276 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.277 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.282 2 DEBUG nova.virt.hardware [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.282 2 INFO nova.compute.claims [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.387 2 DEBUG oslo_concurrency.processutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:12.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:12 np0005466031 podman[320331]: 2025-10-02 13:08:12.460504275 +0000 UTC m=+0.051750572 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:08:12 np0005466031 podman[320333]: 2025-10-02 13:08:12.494512075 +0000 UTC m=+0.083725423 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.630 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:08:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:12.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:08:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:08:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/695871044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.818 2 DEBUG oslo_concurrency.processutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.824 2 DEBUG nova.compute.provider_tree [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.863 2 DEBUG nova.scheduler.client.report [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.910 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:12 np0005466031 nova_compute[235803]: 2025-10-02 13:08:12.911 2 DEBUG nova.compute.manager [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.007 2 DEBUG nova.compute.manager [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.008 2 DEBUG nova.network.neutron [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.060 2 INFO nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.081 2 DEBUG nova.compute.manager [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.167 2 INFO nova.virt.block_device [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Booting with volume 3014eaef-bad8-4a87-9136-52199fbc69cb at /dev/vda#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.620 2 DEBUG os_brick.utils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.622 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.632 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.632 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[106531e0-aed9-4b54-96b8-969c1b0ea661]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.633 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.640 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.641 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[d319d8a0-9bd1-4396-8643-90fb8646558d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.642 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.649 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.649 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[6b38363a-4d1e-4a84-971e-88bee522488b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.650 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b55caf-665a-4e38-8666-aaf8b914da5b]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.650 2 DEBUG oslo_concurrency.processutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.680 2 DEBUG nova.policy [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '156cc6022c70402ab6d194a340b076d5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.684 2 DEBUG oslo_concurrency.processutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.686 2 DEBUG os_brick.initiator.connectors.lightos [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.686 2 DEBUG os_brick.initiator.connectors.lightos [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.686 2 DEBUG os_brick.initiator.connectors.lightos [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.687 2 DEBUG os_brick.utils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:08:13 np0005466031 nova_compute[235803]: 2025-10-02 13:08:13.687 2 DEBUG nova.virt.block_device [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Updating existing volume attachment record: a51a38b2-1628-4796-9233-e72a61cebdb1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:08:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:08:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:14.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:08:14 np0005466031 nova_compute[235803]: 2025-10-02 13:08:14.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:14.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:14 np0005466031 nova_compute[235803]: 2025-10-02 13:08:14.822 2 DEBUG nova.network.neutron [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Successfully created port: 1f77e4ed-6c42-4686-ae66-d3693020fddd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:08:15 np0005466031 nova_compute[235803]: 2025-10-02 13:08:15.255 2 DEBUG nova.compute.manager [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:08:15 np0005466031 nova_compute[235803]: 2025-10-02 13:08:15.259 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:08:15 np0005466031 nova_compute[235803]: 2025-10-02 13:08:15.260 2 INFO nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Creating image(s)#033[00m
Oct  2 09:08:15 np0005466031 nova_compute[235803]: 2025-10-02 13:08:15.260 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 09:08:15 np0005466031 nova_compute[235803]: 2025-10-02 13:08:15.261 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Ensure instance console log exists: /var/lib/nova/instances/61bad754-8d82-465b-8545-25d700a6e146/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:08:15 np0005466031 nova_compute[235803]: 2025-10-02 13:08:15.262 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:15 np0005466031 nova_compute[235803]: 2025-10-02 13:08:15.262 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:15 np0005466031 nova_compute[235803]: 2025-10-02 13:08:15.263 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:16 np0005466031 nova_compute[235803]: 2025-10-02 13:08:16.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:16.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:16 np0005466031 nova_compute[235803]: 2025-10-02 13:08:16.581 2 DEBUG nova.network.neutron [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Successfully updated port: 1f77e4ed-6c42-4686-ae66-d3693020fddd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:08:16 np0005466031 nova_compute[235803]: 2025-10-02 13:08:16.596 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:08:16 np0005466031 nova_compute[235803]: 2025-10-02 13:08:16.597 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:08:16 np0005466031 nova_compute[235803]: 2025-10-02 13:08:16.597 2 DEBUG nova.network.neutron [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:08:16 np0005466031 nova_compute[235803]: 2025-10-02 13:08:16.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:16.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:16 np0005466031 nova_compute[235803]: 2025-10-02 13:08:16.779 2 DEBUG nova.network.neutron [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:08:17 np0005466031 nova_compute[235803]: 2025-10-02 13:08:17.691 2 DEBUG nova.compute.manager [req-464ede65-50a8-4d7f-bd01-c9e4c5f64aca req-3d0b00f4-09b2-46ea-9c7c-8bda1e81e448 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Received event network-changed-1f77e4ed-6c42-4686-ae66-d3693020fddd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:17 np0005466031 nova_compute[235803]: 2025-10-02 13:08:17.691 2 DEBUG nova.compute.manager [req-464ede65-50a8-4d7f-bd01-c9e4c5f64aca req-3d0b00f4-09b2-46ea-9c7c-8bda1e81e448 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Refreshing instance network info cache due to event network-changed-1f77e4ed-6c42-4686-ae66-d3693020fddd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:08:17 np0005466031 nova_compute[235803]: 2025-10-02 13:08:17.692 2 DEBUG oslo_concurrency.lockutils [req-464ede65-50a8-4d7f-bd01-c9e4c5f64aca req-3d0b00f4-09b2-46ea-9c7c-8bda1e81e448 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.245 2 DEBUG nova.network.neutron [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Updating instance_info_cache with network_info: [{"id": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "address": "fa:16:3e:53:a4:0c", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f77e4ed-6c", "ovs_interfaceid": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.273 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.273 2 DEBUG nova.compute.manager [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Instance network_info: |[{"id": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "address": "fa:16:3e:53:a4:0c", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f77e4ed-6c", "ovs_interfaceid": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.274 2 DEBUG oslo_concurrency.lockutils [req-464ede65-50a8-4d7f-bd01-c9e4c5f64aca req-3d0b00f4-09b2-46ea-9c7c-8bda1e81e448 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.274 2 DEBUG nova.network.neutron [req-464ede65-50a8-4d7f-bd01-c9e4c5f64aca req-3d0b00f4-09b2-46ea-9c7c-8bda1e81e448 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Refreshing network info cache for port 1f77e4ed-6c42-4686-ae66-d3693020fddd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.277 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Start _get_guest_xml network_info=[{"id": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "address": "fa:16:3e:53:a4:0c", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f77e4ed-6c", "ovs_interfaceid": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3014eaef-bad8-4a87-9136-52199fbc69cb', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3014eaef-bad8-4a87-9136-52199fbc69cb', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '61bad754-8d82-465b-8545-25d700a6e146', 'attached_at': '', 'detached_at': '', 'volume_id': '3014eaef-bad8-4a87-9136-52199fbc69cb', 'serial': '3014eaef-bad8-4a87-9136-52199fbc69cb', 'multiattach': True}, 'attachment_id': 'a51a38b2-1628-4796-9233-e72a61cebdb1', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.282 2 WARNING nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.288 2 DEBUG nova.virt.libvirt.host [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.289 2 DEBUG nova.virt.libvirt.host [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.300 2 DEBUG nova.virt.libvirt.host [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.301 2 DEBUG nova.virt.libvirt.host [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.302 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.303 2 DEBUG nova.virt.hardware [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.303 2 DEBUG nova.virt.hardware [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.304 2 DEBUG nova.virt.hardware [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.304 2 DEBUG nova.virt.hardware [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.304 2 DEBUG nova.virt.hardware [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.304 2 DEBUG nova.virt.hardware [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.304 2 DEBUG nova.virt.hardware [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.305 2 DEBUG nova.virt.hardware [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.305 2 DEBUG nova.virt.hardware [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.305 2 DEBUG nova.virt.hardware [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.305 2 DEBUG nova.virt.hardware [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.340 2 DEBUG nova.storage.rbd_utils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 61bad754-8d82-465b-8545-25d700a6e146_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.350 2 DEBUG oslo_concurrency.processutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:18.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 09:08:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:18.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 09:08:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:08:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2629945941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.842 2 DEBUG oslo_concurrency.processutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.870 2 DEBUG nova.virt.libvirt.vif [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-851533710',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-851533710',id=194,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-0899y8tr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:13Z,user_data=None,user_id='156cc6022c70402ab6d194a340b076d5',uuid=61bad754-8d82-465b-8545-25d700a6e146,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "address": "fa:16:3e:53:a4:0c", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f77e4ed-6c", "ovs_interfaceid": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.871 2 DEBUG nova.network.os_vif_util [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "address": "fa:16:3e:53:a4:0c", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f77e4ed-6c", "ovs_interfaceid": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.873 2 DEBUG nova.network.os_vif_util [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:a4:0c,bridge_name='br-int',has_traffic_filtering=True,id=1f77e4ed-6c42-4686-ae66-d3693020fddd,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f77e4ed-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.876 2 DEBUG nova.objects.instance [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 61bad754-8d82-465b-8545-25d700a6e146 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.901 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  <uuid>61bad754-8d82-465b-8545-25d700a6e146</uuid>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  <name>instance-000000c2</name>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <nova:name>tempest-AttachVolumeMultiAttachTest-server-851533710</nova:name>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:08:18</nova:creationTime>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <nova:user uuid="156cc6022c70402ab6d194a340b076d5">tempest-AttachVolumeMultiAttachTest-2011266702-project-member</nova:user>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <nova:project uuid="9f85b8f387b146d29eabe946c4fbdee8">tempest-AttachVolumeMultiAttachTest-2011266702</nova:project>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <nova:port uuid="1f77e4ed-6c42-4686-ae66-d3693020fddd">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <entry name="serial">61bad754-8d82-465b-8545-25d700a6e146</entry>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <entry name="uuid">61bad754-8d82-465b-8545-25d700a6e146</entry>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/61bad754-8d82-465b-8545-25d700a6e146_disk.config">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-3014eaef-bad8-4a87-9136-52199fbc69cb">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <serial>3014eaef-bad8-4a87-9136-52199fbc69cb</serial>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <shareable/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:53:a4:0c"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <target dev="tap1f77e4ed-6c"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/61bad754-8d82-465b-8545-25d700a6e146/console.log" append="off"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:08:18 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:08:18 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:08:18 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:08:18 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.904 2 DEBUG nova.compute.manager [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Preparing to wait for external event network-vif-plugged-1f77e4ed-6c42-4686-ae66-d3693020fddd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.905 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "61bad754-8d82-465b-8545-25d700a6e146-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.905 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.905 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.906 2 DEBUG nova.virt.libvirt.vif [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-851533710',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-851533710',id=194,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-0899y8tr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:13Z,user_data=None,user_id='156cc6022c70402ab6d194a340b076d5',uuid=61bad754-8d82-465b-8545-25d700a6e146,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "address": "fa:16:3e:53:a4:0c", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f77e4ed-6c", "ovs_interfaceid": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.906 2 DEBUG nova.network.os_vif_util [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "address": "fa:16:3e:53:a4:0c", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f77e4ed-6c", "ovs_interfaceid": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.907 2 DEBUG nova.network.os_vif_util [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:a4:0c,bridge_name='br-int',has_traffic_filtering=True,id=1f77e4ed-6c42-4686-ae66-d3693020fddd,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f77e4ed-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.907 2 DEBUG os_vif [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:a4:0c,bridge_name='br-int',has_traffic_filtering=True,id=1f77e4ed-6c42-4686-ae66-d3693020fddd,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f77e4ed-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.915 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f77e4ed-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.915 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1f77e4ed-6c, col_values=(('external_ids', {'iface-id': '1f77e4ed-6c42-4686-ae66-d3693020fddd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:a4:0c', 'vm-uuid': '61bad754-8d82-465b-8545-25d700a6e146'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:18 np0005466031 NetworkManager[44907]: <info>  [1759410498.9690] manager: (tap1f77e4ed-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:18 np0005466031 nova_compute[235803]: 2025-10-02 13:08:18.982 2 INFO os_vif [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:a4:0c,bridge_name='br-int',has_traffic_filtering=True,id=1f77e4ed-6c42-4686-ae66-d3693020fddd,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f77e4ed-6c')#033[00m
Oct  2 09:08:19 np0005466031 nova_compute[235803]: 2025-10-02 13:08:19.085 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:08:19 np0005466031 nova_compute[235803]: 2025-10-02 13:08:19.085 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:08:19 np0005466031 nova_compute[235803]: 2025-10-02 13:08:19.085 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:53:a4:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:08:19 np0005466031 nova_compute[235803]: 2025-10-02 13:08:19.086 2 INFO nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Using config drive#033[00m
Oct  2 09:08:19 np0005466031 nova_compute[235803]: 2025-10-02 13:08:19.163 2 DEBUG nova.storage.rbd_utils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 61bad754-8d82-465b-8545-25d700a6e146_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:19 np0005466031 podman[320495]: 2025-10-02 13:08:19.625716015 +0000 UTC m=+0.054155261 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:08:19 np0005466031 podman[320494]: 2025-10-02 13:08:19.626387415 +0000 UTC m=+0.057539739 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct  2 09:08:19 np0005466031 nova_compute[235803]: 2025-10-02 13:08:19.819 2 INFO nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Creating config drive at /var/lib/nova/instances/61bad754-8d82-465b-8545-25d700a6e146/disk.config#033[00m
Oct  2 09:08:19 np0005466031 nova_compute[235803]: 2025-10-02 13:08:19.824 2 DEBUG oslo_concurrency.processutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/61bad754-8d82-465b-8545-25d700a6e146/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzqpg1soz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:19 np0005466031 nova_compute[235803]: 2025-10-02 13:08:19.963 2 DEBUG oslo_concurrency.processutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/61bad754-8d82-465b-8545-25d700a6e146/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzqpg1soz" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:19 np0005466031 nova_compute[235803]: 2025-10-02 13:08:19.996 2 DEBUG nova.storage.rbd_utils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 61bad754-8d82-465b-8545-25d700a6e146_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:20 np0005466031 nova_compute[235803]: 2025-10-02 13:08:19.999 2 DEBUG oslo_concurrency.processutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/61bad754-8d82-465b-8545-25d700a6e146/disk.config 61bad754-8d82-465b-8545-25d700a6e146_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:20 np0005466031 nova_compute[235803]: 2025-10-02 13:08:20.029 2 DEBUG nova.network.neutron [req-464ede65-50a8-4d7f-bd01-c9e4c5f64aca req-3d0b00f4-09b2-46ea-9c7c-8bda1e81e448 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Updated VIF entry in instance network info cache for port 1f77e4ed-6c42-4686-ae66-d3693020fddd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:08:20 np0005466031 nova_compute[235803]: 2025-10-02 13:08:20.030 2 DEBUG nova.network.neutron [req-464ede65-50a8-4d7f-bd01-c9e4c5f64aca req-3d0b00f4-09b2-46ea-9c7c-8bda1e81e448 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Updating instance_info_cache with network_info: [{"id": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "address": "fa:16:3e:53:a4:0c", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f77e4ed-6c", "ovs_interfaceid": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:08:20 np0005466031 nova_compute[235803]: 2025-10-02 13:08:20.084 2 DEBUG oslo_concurrency.lockutils [req-464ede65-50a8-4d7f-bd01-c9e4c5f64aca req-3d0b00f4-09b2-46ea-9c7c-8bda1e81e448 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:08:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:20.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:20.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:20 np0005466031 nova_compute[235803]: 2025-10-02 13:08:20.846 2 DEBUG oslo_concurrency.processutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/61bad754-8d82-465b-8545-25d700a6e146/disk.config 61bad754-8d82-465b-8545-25d700a6e146_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.846s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:20 np0005466031 nova_compute[235803]: 2025-10-02 13:08:20.846 2 INFO nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Deleting local config drive /var/lib/nova/instances/61bad754-8d82-465b-8545-25d700a6e146/disk.config because it was imported into RBD.#033[00m
Oct  2 09:08:20 np0005466031 kernel: tap1f77e4ed-6c: entered promiscuous mode
Oct  2 09:08:20 np0005466031 NetworkManager[44907]: <info>  [1759410500.9090] manager: (tap1f77e4ed-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/349)
Oct  2 09:08:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:08:20Z|00781|binding|INFO|Claiming lport 1f77e4ed-6c42-4686-ae66-d3693020fddd for this chassis.
Oct  2 09:08:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:08:20Z|00782|binding|INFO|1f77e4ed-6c42-4686-ae66-d3693020fddd: Claiming fa:16:3e:53:a4:0c 10.100.0.14
Oct  2 09:08:20 np0005466031 nova_compute[235803]: 2025-10-02 13:08:20.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:20 np0005466031 nova_compute[235803]: 2025-10-02 13:08:20.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:20 np0005466031 nova_compute[235803]: 2025-10-02 13:08:20.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:20.924 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:a4:0c 10.100.0.14'], port_security=['fa:16:3e:53:a4:0c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '61bad754-8d82-465b-8545-25d700a6e146', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e0b78e6-81a7-466c-a6a5-7c1350a20a08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=1f77e4ed-6c42-4686-ae66-d3693020fddd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:08:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:20.925 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 1f77e4ed-6c42-4686-ae66-d3693020fddd in datapath d9001b9c-bca6-4085-a954-1414269e31bc bound to our chassis#033[00m
Oct  2 09:08:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:20.927 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc#033[00m
Oct  2 09:08:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:20.939 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d475d885-626a-4f4f-8934-3744923412f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:20.939 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9001b9c-b1 in ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:08:20 np0005466031 systemd-udevd[320589]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:08:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:20.944 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9001b9c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:08:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:20.944 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b89196b0-9f51-4e8a-83e3-17da21aad66b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:20.945 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e535fb8b-6314-4db6-b94b-078135226f74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:20 np0005466031 systemd-machined[192227]: New machine qemu-90-instance-000000c2.
Oct  2 09:08:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:20.956 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[39bef3a9-c6c7-42b4-9fe2-2e7fd9ebef64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:20 np0005466031 NetworkManager[44907]: <info>  [1759410500.9587] device (tap1f77e4ed-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:08:20 np0005466031 NetworkManager[44907]: <info>  [1759410500.9594] device (tap1f77e4ed-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:08:20 np0005466031 nova_compute[235803]: 2025-10-02 13:08:20.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:08:20Z|00783|binding|INFO|Setting lport 1f77e4ed-6c42-4686-ae66-d3693020fddd ovn-installed in OVS
Oct  2 09:08:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:08:20Z|00784|binding|INFO|Setting lport 1f77e4ed-6c42-4686-ae66-d3693020fddd up in Southbound
Oct  2 09:08:20 np0005466031 nova_compute[235803]: 2025-10-02 13:08:20.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:20.982 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[934e5ce4-5f56-4989-af8f-ecad7e8d4c70]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:20 np0005466031 systemd[1]: Started Virtual Machine qemu-90-instance-000000c2.
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.013 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[71359002-fc8b-47c2-bd92-dc652a939f45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:21 np0005466031 NetworkManager[44907]: <info>  [1759410501.0190] manager: (tapd9001b9c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/350)
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.019 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[71440e39-ef4b-42d8-a2b5-3d0df75f12ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:21 np0005466031 systemd-udevd[320593]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.052 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[fca885d6-bb6a-4480-8179-a5959cecab38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.055 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[6e005144-b918-4967-ae24-e2294fe187f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:21 np0005466031 NetworkManager[44907]: <info>  [1759410501.0770] device (tapd9001b9c-b0): carrier: link connected
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.083 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[db225bdc-df5e-4e93-b2c1-e42b09ed134c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.100 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[521cdfe6-1c6e-42bd-820f-663c4f720389]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835666, 'reachable_time': 44215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320622, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.115 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2dffe7-66de-4101-b8fd-d47abc1048e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:c08b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 835666, 'tstamp': 835666}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320623, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.132 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4f9f53-ffdb-496f-9f8c-59370391ad7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835666, 'reachable_time': 44215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320624, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.166 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[712c075f-d16b-4ba5-ac8d-58b13c5756c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.233 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c2c05c-ed52-4078-a068-9aa7054be963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.234 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.234 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.235 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:21 np0005466031 NetworkManager[44907]: <info>  [1759410501.2371] manager: (tapd9001b9c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Oct  2 09:08:21 np0005466031 kernel: tapd9001b9c-b0: entered promiscuous mode
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.239 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:21 np0005466031 ovn_controller[132413]: 2025-10-02T13:08:21Z|00785|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.256 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.257 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[61ad8e61-4257-4470-a500-4500c7d55e5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.258 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-d9001b9c-bca6-4085-a954-1414269e31bc
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID d9001b9c-bca6-4085-a954-1414269e31bc
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:08:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:21.259 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'env', 'PROCESS_TAG=haproxy-d9001b9c-bca6-4085-a954-1414269e31bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9001b9c-bca6-4085-a954-1414269e31bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:21 np0005466031 podman[320696]: 2025-10-02 13:08:21.643164741 +0000 UTC m=+0.050673811 container create ebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.662 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.662 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.662 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.662 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.663 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:21 np0005466031 systemd[1]: Started libpod-conmon-ebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a.scope.
Oct  2 09:08:21 np0005466031 podman[320696]: 2025-10-02 13:08:21.614948418 +0000 UTC m=+0.022457498 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:08:21 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:08:21 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6d19d9cd68aaea6f8d9a700a0835fe96d7882492418a80e8a9c47c9e1c6484/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:08:21 np0005466031 podman[320696]: 2025-10-02 13:08:21.753746337 +0000 UTC m=+0.161255437 container init ebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 09:08:21 np0005466031 podman[320696]: 2025-10-02 13:08:21.760144971 +0000 UTC m=+0.167654041 container start ebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:08:21 np0005466031 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[320712]: [NOTICE]   (320716) : New worker (320719) forked
Oct  2 09:08:21 np0005466031 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[320712]: [NOTICE]   (320716) : Loading success.
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.791 2 DEBUG nova.compute.manager [req-eb34e3ae-23f8-47c6-803b-6d9df8ec55c8 req-c60ff8f9-6391-40a6-8ec1-fde0de489309 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Received event network-vif-plugged-1f77e4ed-6c42-4686-ae66-d3693020fddd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.792 2 DEBUG oslo_concurrency.lockutils [req-eb34e3ae-23f8-47c6-803b-6d9df8ec55c8 req-c60ff8f9-6391-40a6-8ec1-fde0de489309 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "61bad754-8d82-465b-8545-25d700a6e146-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.792 2 DEBUG oslo_concurrency.lockutils [req-eb34e3ae-23f8-47c6-803b-6d9df8ec55c8 req-c60ff8f9-6391-40a6-8ec1-fde0de489309 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.792 2 DEBUG oslo_concurrency.lockutils [req-eb34e3ae-23f8-47c6-803b-6d9df8ec55c8 req-c60ff8f9-6391-40a6-8ec1-fde0de489309 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.792 2 DEBUG nova.compute.manager [req-eb34e3ae-23f8-47c6-803b-6d9df8ec55c8 req-c60ff8f9-6391-40a6-8ec1-fde0de489309 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Processing event network-vif-plugged-1f77e4ed-6c42-4686-ae66-d3693020fddd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.834 2 DEBUG nova.compute.manager [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.835 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410501.8344536, 61bad754-8d82-465b-8545-25d700a6e146 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.835 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] VM Started (Lifecycle Event)#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.839 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.848 2 INFO nova.virt.libvirt.driver [-] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Instance spawned successfully.#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.848 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.872 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.880 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.887 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.888 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.888 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.889 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.889 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.889 2 DEBUG nova.virt.libvirt.driver [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.938 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.938 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410501.8374612, 61bad754-8d82-465b-8545-25d700a6e146 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.938 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.985 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.988 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410501.838729, 61bad754-8d82-465b-8545-25d700a6e146 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:08:21 np0005466031 nova_compute[235803]: 2025-10-02 13:08:21.988 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.003 2 INFO nova.compute.manager [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Took 6.75 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.004 2 DEBUG nova.compute.manager [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.017 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.019 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.047 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.081 2 INFO nova.compute.manager [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Took 9.84 seconds to build instance.#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.101 2 DEBUG oslo_concurrency.lockutils [None req-e40ce106-197a-49dd-95b6-4ae62a8ea8ca 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:08:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/218103477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.156 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.232 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.232 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:08:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:22.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.403 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.404 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4083MB free_disk=20.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.404 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.405 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.480 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 61bad754-8d82-465b-8545-25d700a6e146 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.481 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.481 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.536 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:22.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:08:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3757053216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:08:22 np0005466031 nova_compute[235803]: 2025-10-02 13:08:22.994 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:23 np0005466031 nova_compute[235803]: 2025-10-02 13:08:23.000 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:08:23 np0005466031 nova_compute[235803]: 2025-10-02 13:08:23.019 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:08:23 np0005466031 nova_compute[235803]: 2025-10-02 13:08:23.043 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:08:23 np0005466031 nova_compute[235803]: 2025-10-02 13:08:23.043 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:08:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:08:23 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:08:23 np0005466031 nova_compute[235803]: 2025-10-02 13:08:23.884 2 DEBUG nova.compute.manager [req-e9babf7a-8017-4b81-a323-76926c436bc7 req-1970fbbc-9dc3-46f1-b9ad-334455c5031c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Received event network-vif-plugged-1f77e4ed-6c42-4686-ae66-d3693020fddd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:23 np0005466031 nova_compute[235803]: 2025-10-02 13:08:23.884 2 DEBUG oslo_concurrency.lockutils [req-e9babf7a-8017-4b81-a323-76926c436bc7 req-1970fbbc-9dc3-46f1-b9ad-334455c5031c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "61bad754-8d82-465b-8545-25d700a6e146-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:23 np0005466031 nova_compute[235803]: 2025-10-02 13:08:23.885 2 DEBUG oslo_concurrency.lockutils [req-e9babf7a-8017-4b81-a323-76926c436bc7 req-1970fbbc-9dc3-46f1-b9ad-334455c5031c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:23 np0005466031 nova_compute[235803]: 2025-10-02 13:08:23.886 2 DEBUG oslo_concurrency.lockutils [req-e9babf7a-8017-4b81-a323-76926c436bc7 req-1970fbbc-9dc3-46f1-b9ad-334455c5031c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:08:23 np0005466031 nova_compute[235803]: 2025-10-02 13:08:23.886 2 DEBUG nova.compute.manager [req-e9babf7a-8017-4b81-a323-76926c436bc7 req-1970fbbc-9dc3-46f1-b9ad-334455c5031c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] No waiting events found dispatching network-vif-plugged-1f77e4ed-6c42-4686-ae66-d3693020fddd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:08:23 np0005466031 nova_compute[235803]: 2025-10-02 13:08:23.887 2 WARNING nova.compute.manager [req-e9babf7a-8017-4b81-a323-76926c436bc7 req-1970fbbc-9dc3-46f1-b9ad-334455c5031c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Received unexpected event network-vif-plugged-1f77e4ed-6c42-4686-ae66-d3693020fddd for instance with vm_state active and task_state None.
Oct  2 09:08:23 np0005466031 nova_compute[235803]: 2025-10-02 13:08:23.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:24 np0005466031 nova_compute[235803]: 2025-10-02 13:08:24.044 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:08:24 np0005466031 nova_compute[235803]: 2025-10-02 13:08:24.044 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 09:08:24 np0005466031 nova_compute[235803]: 2025-10-02 13:08:24.044 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 09:08:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:08:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:24.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:08:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:24.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:25 np0005466031 nova_compute[235803]: 2025-10-02 13:08:25.190 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:08:25 np0005466031 nova_compute[235803]: 2025-10-02 13:08:25.190 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:08:25 np0005466031 nova_compute[235803]: 2025-10-02 13:08:25.190 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 09:08:25 np0005466031 nova_compute[235803]: 2025-10-02 13:08:25.191 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 61bad754-8d82-465b-8545-25d700a6e146 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:08:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:25.876 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:25.877 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:25.877 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:08:26 np0005466031 nova_compute[235803]: 2025-10-02 13:08:26.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:26.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:26.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:08:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:28.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:08:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:08:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:28.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:08:28 np0005466031 nova_compute[235803]: 2025-10-02 13:08:28.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:29 np0005466031 nova_compute[235803]: 2025-10-02 13:08:29.285 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Updating instance_info_cache with network_info: [{"id": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "address": "fa:16:3e:53:a4:0c", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f77e4ed-6c", "ovs_interfaceid": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:08:29 np0005466031 nova_compute[235803]: 2025-10-02 13:08:29.308 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:08:29 np0005466031 nova_compute[235803]: 2025-10-02 13:08:29.308 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 09:08:29 np0005466031 nova_compute[235803]: 2025-10-02 13:08:29.308 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:08:29 np0005466031 nova_compute[235803]: 2025-10-02 13:08:29.309 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:08:29 np0005466031 nova_compute[235803]: 2025-10-02 13:08:29.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:08:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:29.909 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 09:08:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:29.910 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 09:08:29 np0005466031 nova_compute[235803]: 2025-10-02 13:08:29.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:30.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:08:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:30.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:08:31 np0005466031 nova_compute[235803]: 2025-10-02 13:08:31.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:31 np0005466031 nova_compute[235803]: 2025-10-02 13:08:31.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:08:31 np0005466031 nova_compute[235803]: 2025-10-02 13:08:31.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 09:08:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:08:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:32.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:08:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:32.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:33 np0005466031 nova_compute[235803]: 2025-10-02 13:08:33.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:34.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:08:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:34.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:08:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:35 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:35.911 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:08:36 np0005466031 nova_compute[235803]: 2025-10-02 13:08:36.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:08:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:36.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:08:36 np0005466031 nova_compute[235803]: 2025-10-02 13:08:36.604 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:36 np0005466031 nova_compute[235803]: 2025-10-02 13:08:36.605 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:36 np0005466031 nova_compute[235803]: 2025-10-02 13:08:36.624 2 DEBUG nova.compute.manager [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:08:36 np0005466031 nova_compute[235803]: 2025-10-02 13:08:36.698 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:36 np0005466031 nova_compute[235803]: 2025-10-02 13:08:36.698 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:36 np0005466031 nova_compute[235803]: 2025-10-02 13:08:36.707 2 DEBUG nova.virt.hardware [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:08:36 np0005466031 nova_compute[235803]: 2025-10-02 13:08:36.708 2 INFO nova.compute.claims [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Claim successful on node compute-2.ctlplane.example.com
Oct  2 09:08:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:08:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:36.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:08:36 np0005466031 nova_compute[235803]: 2025-10-02 13:08:36.908 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:08:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:08:37 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/769836320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.373 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.378 2 DEBUG nova.compute.provider_tree [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.395 2 DEBUG nova.scheduler.client.report [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.425 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.426 2 DEBUG nova.compute.manager [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.486 2 DEBUG nova.compute.manager [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.487 2 DEBUG nova.network.neutron [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:08:37 np0005466031 ovn_controller[132413]: 2025-10-02T13:08:37Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:a4:0c 10.100.0.14
Oct  2 09:08:37 np0005466031 ovn_controller[132413]: 2025-10-02T13:08:37Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:a4:0c 10.100.0.14
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.520 2 INFO nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.543 2 DEBUG nova.compute.manager [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.646 2 DEBUG nova.compute.manager [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.648 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.648 2 INFO nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Creating image(s)
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.688 2 DEBUG nova.storage.rbd_utils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image f07a4381-2291-4a58-a2ca-b04071e65a0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.718 2 DEBUG nova.storage.rbd_utils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image f07a4381-2291-4a58-a2ca-b04071e65a0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.747 2 DEBUG nova.storage.rbd_utils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image f07a4381-2291-4a58-a2ca-b04071e65a0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.753 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.832 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.833 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.834 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.834 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.865 2 DEBUG nova.storage.rbd_utils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image f07a4381-2291-4a58-a2ca-b04071e65a0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:08:37 np0005466031 nova_compute[235803]: 2025-10-02 13:08:37.870 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f07a4381-2291-4a58-a2ca-b04071e65a0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:08:38 np0005466031 nova_compute[235803]: 2025-10-02 13:08:38.209 2 DEBUG nova.policy [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '156cc6022c70402ab6d194a340b076d5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:08:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:38.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:38.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:38 np0005466031 nova_compute[235803]: 2025-10-02 13:08:38.912 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f07a4381-2291-4a58-a2ca-b04071e65a0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:39 np0005466031 nova_compute[235803]: 2025-10-02 13:08:39.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:39 np0005466031 nova_compute[235803]: 2025-10-02 13:08:39.118 2 DEBUG nova.storage.rbd_utils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] resizing rbd image f07a4381-2291-4a58-a2ca-b04071e65a0a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:08:39 np0005466031 nova_compute[235803]: 2025-10-02 13:08:39.438 2 DEBUG nova.objects.instance [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'migration_context' on Instance uuid f07a4381-2291-4a58-a2ca-b04071e65a0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:08:39 np0005466031 nova_compute[235803]: 2025-10-02 13:08:39.450 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:08:39 np0005466031 nova_compute[235803]: 2025-10-02 13:08:39.451 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Ensure instance console log exists: /var/lib/nova/instances/f07a4381-2291-4a58-a2ca-b04071e65a0a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:08:39 np0005466031 nova_compute[235803]: 2025-10-02 13:08:39.451 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:39 np0005466031 nova_compute[235803]: 2025-10-02 13:08:39.451 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:39 np0005466031 nova_compute[235803]: 2025-10-02 13:08:39.451 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:08:40 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:08:40 np0005466031 nova_compute[235803]: 2025-10-02 13:08:40.181 2 DEBUG nova.network.neutron [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Successfully created port: 8313f187-d7bf-46d7-a7fe-6454eaa6bc87 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:08:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:40.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:40 np0005466031 nova_compute[235803]: 2025-10-02 13:08:40.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:40.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:40 np0005466031 nova_compute[235803]: 2025-10-02 13:08:40.874 2 DEBUG nova.network.neutron [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Successfully updated port: 8313f187-d7bf-46d7-a7fe-6454eaa6bc87 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:08:40 np0005466031 nova_compute[235803]: 2025-10-02 13:08:40.889 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:08:40 np0005466031 nova_compute[235803]: 2025-10-02 13:08:40.889 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:08:40 np0005466031 nova_compute[235803]: 2025-10-02 13:08:40.890 2 DEBUG nova.network.neutron [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:08:41 np0005466031 nova_compute[235803]: 2025-10-02 13:08:41.033 2 DEBUG nova.compute.manager [req-a649cec5-91f6-4ae0-b303-f6f80667f1ab req-0ecc0f1a-95d5-49c3-908a-5331854aa34d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Received event network-changed-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:41 np0005466031 nova_compute[235803]: 2025-10-02 13:08:41.034 2 DEBUG nova.compute.manager [req-a649cec5-91f6-4ae0-b303-f6f80667f1ab req-0ecc0f1a-95d5-49c3-908a-5331854aa34d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Refreshing instance network info cache due to event network-changed-8313f187-d7bf-46d7-a7fe-6454eaa6bc87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:08:41 np0005466031 nova_compute[235803]: 2025-10-02 13:08:41.035 2 DEBUG oslo_concurrency.lockutils [req-a649cec5-91f6-4ae0-b303-f6f80667f1ab req-0ecc0f1a-95d5-49c3-908a-5331854aa34d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:08:41 np0005466031 nova_compute[235803]: 2025-10-02 13:08:41.079 2 DEBUG nova.network.neutron [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:08:41 np0005466031 nova_compute[235803]: 2025-10-02 13:08:41.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:42.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.606 2 DEBUG nova.network.neutron [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Updating instance_info_cache with network_info: [{"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.622 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.623 2 DEBUG nova.compute.manager [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Instance network_info: |[{"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.623 2 DEBUG oslo_concurrency.lockutils [req-a649cec5-91f6-4ae0-b303-f6f80667f1ab req-0ecc0f1a-95d5-49c3-908a-5331854aa34d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.623 2 DEBUG nova.network.neutron [req-a649cec5-91f6-4ae0-b303-f6f80667f1ab req-0ecc0f1a-95d5-49c3-908a-5331854aa34d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Refreshing network info cache for port 8313f187-d7bf-46d7-a7fe-6454eaa6bc87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.625 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Start _get_guest_xml network_info=[{"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.632 2 WARNING nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:08:42 np0005466031 podman[321203]: 2025-10-02 13:08:42.640731437 +0000 UTC m=+0.068939258 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.646 2 DEBUG nova.virt.libvirt.host [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.647 2 DEBUG nova.virt.libvirt.host [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.651 2 DEBUG nova.virt.libvirt.host [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.652 2 DEBUG nova.virt.libvirt.host [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.653 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.653 2 DEBUG nova.virt.hardware [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.654 2 DEBUG nova.virt.hardware [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.654 2 DEBUG nova.virt.hardware [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.655 2 DEBUG nova.virt.hardware [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.655 2 DEBUG nova.virt.hardware [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.655 2 DEBUG nova.virt.hardware [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.655 2 DEBUG nova.virt.hardware [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.656 2 DEBUG nova.virt.hardware [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.656 2 DEBUG nova.virt.hardware [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.656 2 DEBUG nova.virt.hardware [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.656 2 DEBUG nova.virt.hardware [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:08:42 np0005466031 nova_compute[235803]: 2025-10-02 13:08:42.659 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:42 np0005466031 podman[321204]: 2025-10-02 13:08:42.670665669 +0000 UTC m=+0.097434668 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  2 09:08:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:42.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e360 e360: 3 total, 3 up, 3 in
Oct  2 09:08:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:08:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3950454878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.116 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.180 2 DEBUG nova.storage.rbd_utils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image f07a4381-2291-4a58-a2ca-b04071e65a0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.186 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:08:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2802699572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.639 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.641 2 DEBUG nova.virt.libvirt.vif [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=196,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-yct79g8e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=f07a4381-2291-4a58-a2ca-b04071e65a0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.641 2 DEBUG nova.network.os_vif_util [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.642 2 DEBUG nova.network.os_vif_util [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:8e:57,bridge_name='br-int',has_traffic_filtering=True,id=8313f187-d7bf-46d7-a7fe-6454eaa6bc87,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8313f187-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.643 2 DEBUG nova.objects.instance [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'pci_devices' on Instance uuid f07a4381-2291-4a58-a2ca-b04071e65a0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.704 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  <uuid>f07a4381-2291-4a58-a2ca-b04071e65a0a</uuid>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  <name>instance-000000c4</name>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <nova:name>multiattach-server-0</nova:name>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:08:42</nova:creationTime>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <nova:user uuid="156cc6022c70402ab6d194a340b076d5">tempest-AttachVolumeMultiAttachTest-2011266702-project-member</nova:user>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <nova:project uuid="9f85b8f387b146d29eabe946c4fbdee8">tempest-AttachVolumeMultiAttachTest-2011266702</nova:project>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <nova:port uuid="8313f187-d7bf-46d7-a7fe-6454eaa6bc87">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <entry name="serial">f07a4381-2291-4a58-a2ca-b04071e65a0a</entry>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <entry name="uuid">f07a4381-2291-4a58-a2ca-b04071e65a0a</entry>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/f07a4381-2291-4a58-a2ca-b04071e65a0a_disk">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/f07a4381-2291-4a58-a2ca-b04071e65a0a_disk.config">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:ca:8e:57"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <target dev="tap8313f187-d7"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/f07a4381-2291-4a58-a2ca-b04071e65a0a/console.log" append="off"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:08:43 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:08:43 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:08:43 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:08:43 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.705 2 DEBUG nova.compute.manager [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Preparing to wait for external event network-vif-plugged-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.706 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.706 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.706 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.707 2 DEBUG nova.virt.libvirt.vif [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=196,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-yct79g8e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0'
,network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=f07a4381-2291-4a58-a2ca-b04071e65a0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.707 2 DEBUG nova.network.os_vif_util [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.707 2 DEBUG nova.network.os_vif_util [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:8e:57,bridge_name='br-int',has_traffic_filtering=True,id=8313f187-d7bf-46d7-a7fe-6454eaa6bc87,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8313f187-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.708 2 DEBUG os_vif [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:8e:57,bridge_name='br-int',has_traffic_filtering=True,id=8313f187-d7bf-46d7-a7fe-6454eaa6bc87,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8313f187-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8313f187-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8313f187-d7, col_values=(('external_ids', {'iface-id': '8313f187-d7bf-46d7-a7fe-6454eaa6bc87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:8e:57', 'vm-uuid': 'f07a4381-2291-4a58-a2ca-b04071e65a0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:43 np0005466031 NetworkManager[44907]: <info>  [1759410523.7878] manager: (tap8313f187-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.794 2 INFO os_vif [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:8e:57,bridge_name='br-int',has_traffic_filtering=True,id=8313f187-d7bf-46d7-a7fe-6454eaa6bc87,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8313f187-d7')#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.836 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.837 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.837 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:ca:8e:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.837 2 INFO nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Using config drive#033[00m
Oct  2 09:08:43 np0005466031 nova_compute[235803]: 2025-10-02 13:08:43.862 2 DEBUG nova.storage.rbd_utils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image f07a4381-2291-4a58-a2ca-b04071e65a0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:44 np0005466031 nova_compute[235803]: 2025-10-02 13:08:44.361 2 INFO nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Creating config drive at /var/lib/nova/instances/f07a4381-2291-4a58-a2ca-b04071e65a0a/disk.config#033[00m
Oct  2 09:08:44 np0005466031 nova_compute[235803]: 2025-10-02 13:08:44.367 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f07a4381-2291-4a58-a2ca-b04071e65a0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4jhq1z3g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:44.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:44 np0005466031 nova_compute[235803]: 2025-10-02 13:08:44.514 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f07a4381-2291-4a58-a2ca-b04071e65a0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4jhq1z3g" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:44 np0005466031 nova_compute[235803]: 2025-10-02 13:08:44.544 2 DEBUG nova.storage.rbd_utils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image f07a4381-2291-4a58-a2ca-b04071e65a0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:44 np0005466031 nova_compute[235803]: 2025-10-02 13:08:44.549 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f07a4381-2291-4a58-a2ca-b04071e65a0a/disk.config f07a4381-2291-4a58-a2ca-b04071e65a0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:44.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:45 np0005466031 nova_compute[235803]: 2025-10-02 13:08:45.197 2 DEBUG nova.network.neutron [req-a649cec5-91f6-4ae0-b303-f6f80667f1ab req-0ecc0f1a-95d5-49c3-908a-5331854aa34d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Updated VIF entry in instance network info cache for port 8313f187-d7bf-46d7-a7fe-6454eaa6bc87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:08:45 np0005466031 nova_compute[235803]: 2025-10-02 13:08:45.197 2 DEBUG nova.network.neutron [req-a649cec5-91f6-4ae0-b303-f6f80667f1ab req-0ecc0f1a-95d5-49c3-908a-5331854aa34d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Updating instance_info_cache with network_info: [{"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:08:45 np0005466031 nova_compute[235803]: 2025-10-02 13:08:45.214 2 DEBUG oslo_concurrency.lockutils [req-a649cec5-91f6-4ae0-b303-f6f80667f1ab req-0ecc0f1a-95d5-49c3-908a-5331854aa34d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:08:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:46 np0005466031 nova_compute[235803]: 2025-10-02 13:08:46.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:46.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:46.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:47 np0005466031 nova_compute[235803]: 2025-10-02 13:08:47.410 2 DEBUG oslo_concurrency.processutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f07a4381-2291-4a58-a2ca-b04071e65a0a/disk.config f07a4381-2291-4a58-a2ca-b04071e65a0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.861s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:47 np0005466031 nova_compute[235803]: 2025-10-02 13:08:47.411 2 INFO nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Deleting local config drive /var/lib/nova/instances/f07a4381-2291-4a58-a2ca-b04071e65a0a/disk.config because it was imported into RBD.#033[00m
Oct  2 09:08:47 np0005466031 kernel: tap8313f187-d7: entered promiscuous mode
Oct  2 09:08:47 np0005466031 NetworkManager[44907]: <info>  [1759410527.4603] manager: (tap8313f187-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/353)
Oct  2 09:08:47 np0005466031 ovn_controller[132413]: 2025-10-02T13:08:47Z|00786|binding|INFO|Claiming lport 8313f187-d7bf-46d7-a7fe-6454eaa6bc87 for this chassis.
Oct  2 09:08:47 np0005466031 ovn_controller[132413]: 2025-10-02T13:08:47Z|00787|binding|INFO|8313f187-d7bf-46d7-a7fe-6454eaa6bc87: Claiming fa:16:3e:ca:8e:57 10.100.0.6
Oct  2 09:08:47 np0005466031 nova_compute[235803]: 2025-10-02 13:08:47.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.472 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:8e:57 10.100.0.6'], port_security=['fa:16:3e:ca:8e:57 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f07a4381-2291-4a58-a2ca-b04071e65a0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=8313f187-d7bf-46d7-a7fe-6454eaa6bc87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.473 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 8313f187-d7bf-46d7-a7fe-6454eaa6bc87 in datapath d9001b9c-bca6-4085-a954-1414269e31bc bound to our chassis#033[00m
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.475 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc#033[00m
Oct  2 09:08:47 np0005466031 ovn_controller[132413]: 2025-10-02T13:08:47Z|00788|binding|INFO|Setting lport 8313f187-d7bf-46d7-a7fe-6454eaa6bc87 ovn-installed in OVS
Oct  2 09:08:47 np0005466031 ovn_controller[132413]: 2025-10-02T13:08:47Z|00789|binding|INFO|Setting lport 8313f187-d7bf-46d7-a7fe-6454eaa6bc87 up in Southbound
Oct  2 09:08:47 np0005466031 nova_compute[235803]: 2025-10-02 13:08:47.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:47 np0005466031 nova_compute[235803]: 2025-10-02 13:08:47.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.490 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[43a1b699-980c-4729-ac65-b2d3606fa7b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:47 np0005466031 systemd-udevd[321387]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:08:47 np0005466031 systemd-machined[192227]: New machine qemu-91-instance-000000c4.
Oct  2 09:08:47 np0005466031 systemd[1]: Started Virtual Machine qemu-91-instance-000000c4.
Oct  2 09:08:47 np0005466031 NetworkManager[44907]: <info>  [1759410527.5066] device (tap8313f187-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:08:47 np0005466031 NetworkManager[44907]: <info>  [1759410527.5082] device (tap8313f187-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.522 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0a35f813-f2f4-43ed-a428-3ede07883bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.526 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[cdbc0792-6f12-4a27-9609-36f3bd557ee0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.556 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[93434972-fde8-489b-9e34-d98e3a8bc4e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.572 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[02d1cfe8-e0f6-4461-ac6f-17c5087ad319]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835666, 'reachable_time': 44215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321399, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.587 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6d05e7-37e7-4977-9996-0fd8d401ac12]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 835678, 'tstamp': 835678}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321401, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 835681, 'tstamp': 835681}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321401, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.589 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:47 np0005466031 nova_compute[235803]: 2025-10-02 13:08:47.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:47 np0005466031 nova_compute[235803]: 2025-10-02 13:08:47.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.591 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.592 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.592 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:08:47.592 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.289 2 DEBUG nova.compute.manager [req-8ecdec27-d872-4b12-951c-925526145899 req-4cdb6b99-7391-4dc3-8974-43dd5785f4c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Received event network-vif-plugged-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.290 2 DEBUG oslo_concurrency.lockutils [req-8ecdec27-d872-4b12-951c-925526145899 req-4cdb6b99-7391-4dc3-8974-43dd5785f4c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.290 2 DEBUG oslo_concurrency.lockutils [req-8ecdec27-d872-4b12-951c-925526145899 req-4cdb6b99-7391-4dc3-8974-43dd5785f4c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.290 2 DEBUG oslo_concurrency.lockutils [req-8ecdec27-d872-4b12-951c-925526145899 req-4cdb6b99-7391-4dc3-8974-43dd5785f4c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.291 2 DEBUG nova.compute.manager [req-8ecdec27-d872-4b12-951c-925526145899 req-4cdb6b99-7391-4dc3-8974-43dd5785f4c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Processing event network-vif-plugged-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:08:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:48.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e361 e361: 3 total, 3 up, 3 in
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.733 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410528.7334247, f07a4381-2291-4a58-a2ca-b04071e65a0a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.734 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] VM Started (Lifecycle Event)#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.737 2 DEBUG nova.compute.manager [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.740 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.743 2 INFO nova.virt.libvirt.driver [-] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Instance spawned successfully.#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.743 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:08:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.759 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:48.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.764 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.767 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.768 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.768 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.769 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.770 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.770 2 DEBUG nova.virt.libvirt.driver [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.799 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.799 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410528.7336574, f07a4381-2291-4a58-a2ca-b04071e65a0a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.800 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.835 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.839 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410528.7392986, f07a4381-2291-4a58-a2ca-b04071e65a0a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.839 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] VM Resumed (Lifecycle Event)
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.870 2 INFO nova.compute.manager [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Took 11.22 seconds to spawn the instance on the hypervisor.
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.870 2 DEBUG nova.compute.manager [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.872 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.878 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.964 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 09:08:48 np0005466031 nova_compute[235803]: 2025-10-02 13:08:48.982 2 INFO nova.compute.manager [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Took 12.31 seconds to build instance.
Oct  2 09:08:49 np0005466031 nova_compute[235803]: 2025-10-02 13:08:49.000 2 DEBUG oslo_concurrency.lockutils [None req-5e3717d8-2613-4d50-8ee8-5ed4a030bb2e 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:08:50 np0005466031 nova_compute[235803]: 2025-10-02 13:08:50.367 2 DEBUG nova.compute.manager [req-add3533a-b0d4-4d9d-868f-e3cada1b81d9 req-994cb715-4507-40d6-a746-f1201b449eb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Received event network-vif-plugged-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:08:50 np0005466031 nova_compute[235803]: 2025-10-02 13:08:50.367 2 DEBUG oslo_concurrency.lockutils [req-add3533a-b0d4-4d9d-868f-e3cada1b81d9 req-994cb715-4507-40d6-a746-f1201b449eb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:50 np0005466031 nova_compute[235803]: 2025-10-02 13:08:50.367 2 DEBUG oslo_concurrency.lockutils [req-add3533a-b0d4-4d9d-868f-e3cada1b81d9 req-994cb715-4507-40d6-a746-f1201b449eb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:50 np0005466031 nova_compute[235803]: 2025-10-02 13:08:50.368 2 DEBUG oslo_concurrency.lockutils [req-add3533a-b0d4-4d9d-868f-e3cada1b81d9 req-994cb715-4507-40d6-a746-f1201b449eb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:08:50 np0005466031 nova_compute[235803]: 2025-10-02 13:08:50.368 2 DEBUG nova.compute.manager [req-add3533a-b0d4-4d9d-868f-e3cada1b81d9 req-994cb715-4507-40d6-a746-f1201b449eb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] No waiting events found dispatching network-vif-plugged-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:08:50 np0005466031 nova_compute[235803]: 2025-10-02 13:08:50.368 2 WARNING nova.compute.manager [req-add3533a-b0d4-4d9d-868f-e3cada1b81d9 req-994cb715-4507-40d6-a746-f1201b449eb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Received unexpected event network-vif-plugged-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 for instance with vm_state active and task_state None.
Oct  2 09:08:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:50.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e362 e362: 3 total, 3 up, 3 in
Oct  2 09:08:50 np0005466031 podman[321447]: 2025-10-02 13:08:50.649235366 +0000 UTC m=+0.063322346 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:08:50 np0005466031 podman[321446]: 2025-10-02 13:08:50.649320068 +0000 UTC m=+0.071459630 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:50.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.889013) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410530889046, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1265, "num_deletes": 250, "total_data_size": 2640640, "memory_usage": 2683136, "flush_reason": "Manual Compaction"}
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410530895773, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 1088282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68660, "largest_seqno": 69919, "table_properties": {"data_size": 1083901, "index_size": 1840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11847, "raw_average_key_size": 20, "raw_value_size": 1074322, "raw_average_value_size": 1901, "num_data_blocks": 82, "num_entries": 565, "num_filter_entries": 565, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410433, "oldest_key_time": 1759410433, "file_creation_time": 1759410530, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 6799 microseconds, and 3310 cpu microseconds.
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.895810) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 1088282 bytes OK
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.895828) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.896836) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.896847) EVENT_LOG_v1 {"time_micros": 1759410530896844, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.896861) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 2634607, prev total WAL file size 2634607, number of live WAL files 2.
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.897893) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(1062KB)], [138(11MB)]
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410530897965, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 13627616, "oldest_snapshot_seqno": -1}
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 9172 keys, 10418017 bytes, temperature: kUnknown
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410530950965, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 10418017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10361111, "index_size": 32857, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 240812, "raw_average_key_size": 26, "raw_value_size": 10202637, "raw_average_value_size": 1112, "num_data_blocks": 1250, "num_entries": 9172, "num_filter_entries": 9172, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759410530, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.951223) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10418017 bytes
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.952625) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 256.8 rd, 196.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.0 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(22.1) write-amplify(9.6) OK, records in: 9647, records dropped: 475 output_compression: NoCompression
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.952647) EVENT_LOG_v1 {"time_micros": 1759410530952637, "job": 88, "event": "compaction_finished", "compaction_time_micros": 53073, "compaction_time_cpu_micros": 25388, "output_level": 6, "num_output_files": 1, "total_output_size": 10418017, "num_input_records": 9647, "num_output_records": 9172, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410530952992, "job": 88, "event": "table_file_deletion", "file_number": 140}
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410530955450, "job": 88, "event": "table_file_deletion", "file_number": 138}
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.897739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.955629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.955639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.955642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.955645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:50 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:08:50.955648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:51 np0005466031 nova_compute[235803]: 2025-10-02 13:08:51.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:51 np0005466031 NetworkManager[44907]: <info>  [1759410531.6602] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Oct  2 09:08:51 np0005466031 NetworkManager[44907]: <info>  [1759410531.6611] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Oct  2 09:08:51 np0005466031 nova_compute[235803]: 2025-10-02 13:08:51.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:51 np0005466031 nova_compute[235803]: 2025-10-02 13:08:51.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:51 np0005466031 ovn_controller[132413]: 2025-10-02T13:08:51Z|00790|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct  2 09:08:51 np0005466031 nova_compute[235803]: 2025-10-02 13:08:51.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:51 np0005466031 nova_compute[235803]: 2025-10-02 13:08:51.975 2 DEBUG nova.compute.manager [req-4868878b-6685-4c94-a89b-fdb9eaed5401 req-2025de6a-cb06-4af3-a94b-af463182a2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Received event network-changed-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:08:51 np0005466031 nova_compute[235803]: 2025-10-02 13:08:51.975 2 DEBUG nova.compute.manager [req-4868878b-6685-4c94-a89b-fdb9eaed5401 req-2025de6a-cb06-4af3-a94b-af463182a2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Refreshing instance network info cache due to event network-changed-8313f187-d7bf-46d7-a7fe-6454eaa6bc87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:08:51 np0005466031 nova_compute[235803]: 2025-10-02 13:08:51.976 2 DEBUG oslo_concurrency.lockutils [req-4868878b-6685-4c94-a89b-fdb9eaed5401 req-2025de6a-cb06-4af3-a94b-af463182a2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:08:51 np0005466031 nova_compute[235803]: 2025-10-02 13:08:51.976 2 DEBUG oslo_concurrency.lockutils [req-4868878b-6685-4c94-a89b-fdb9eaed5401 req-2025de6a-cb06-4af3-a94b-af463182a2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:08:51 np0005466031 nova_compute[235803]: 2025-10-02 13:08:51.976 2 DEBUG nova.network.neutron [req-4868878b-6685-4c94-a89b-fdb9eaed5401 req-2025de6a-cb06-4af3-a94b-af463182a2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Refreshing network info cache for port 8313f187-d7bf-46d7-a7fe-6454eaa6bc87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 09:08:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:08:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:52.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:08:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:52.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:53 np0005466031 nova_compute[235803]: 2025-10-02 13:08:53.651 2 DEBUG nova.network.neutron [req-4868878b-6685-4c94-a89b-fdb9eaed5401 req-2025de6a-cb06-4af3-a94b-af463182a2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Updated VIF entry in instance network info cache for port 8313f187-d7bf-46d7-a7fe-6454eaa6bc87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 09:08:53 np0005466031 nova_compute[235803]: 2025-10-02 13:08:53.653 2 DEBUG nova.network.neutron [req-4868878b-6685-4c94-a89b-fdb9eaed5401 req-2025de6a-cb06-4af3-a94b-af463182a2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Updating instance_info_cache with network_info: [{"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:08:53 np0005466031 nova_compute[235803]: 2025-10-02 13:08:53.674 2 DEBUG oslo_concurrency.lockutils [req-4868878b-6685-4c94-a89b-fdb9eaed5401 req-2025de6a-cb06-4af3-a94b-af463182a2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:08:53 np0005466031 nova_compute[235803]: 2025-10-02 13:08:53.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:08:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:54.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:08:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:08:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:54.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:08:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e363 e363: 3 total, 3 up, 3 in
Oct  2 09:08:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:56 np0005466031 nova_compute[235803]: 2025-10-02 13:08:56.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:56.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:56.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:08:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:58.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:08:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:08:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:58.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:58 np0005466031 nova_compute[235803]: 2025-10-02 13:08:58.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:09:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:00.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:00.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:01 np0005466031 nova_compute[235803]: 2025-10-02 13:09:01.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:09:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:02.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:02 np0005466031 ovn_controller[132413]: 2025-10-02T13:09:02Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ca:8e:57 10.100.0.6
Oct  2 09:09:02 np0005466031 ovn_controller[132413]: 2025-10-02T13:09:02Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ca:8e:57 10.100.0.6
Oct  2 09:09:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:02.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:03 np0005466031 nova_compute[235803]: 2025-10-02 13:09:03.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:04.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:04.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:09:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4208459991' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:09:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:09:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4208459991' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:09:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:06 np0005466031 nova_compute[235803]: 2025-10-02 13:09:06.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:09:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:06.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:09:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:06.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:08.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:08.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:08 np0005466031 nova_compute[235803]: 2025-10-02 13:09:08.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:10.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:10.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:11 np0005466031 nova_compute[235803]: 2025-10-02 13:09:11.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:11 np0005466031 nova_compute[235803]: 2025-10-02 13:09:11.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:12.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:12.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:12 np0005466031 podman[321574]: 2025-10-02 13:09:12.924770093 +0000 UTC m=+0.058677661 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 09:09:12 np0005466031 podman[321575]: 2025-10-02 13:09:12.961259215 +0000 UTC m=+0.088776509 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:09:13 np0005466031 nova_compute[235803]: 2025-10-02 13:09:13.621 2 DEBUG nova.compute.manager [req-c141df69-3672-4db3-a130-50022742fe0f req-e2c132a7-a32a-4b49-915b-ff99b3355dbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Received event network-changed-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:13 np0005466031 nova_compute[235803]: 2025-10-02 13:09:13.621 2 DEBUG nova.compute.manager [req-c141df69-3672-4db3-a130-50022742fe0f req-e2c132a7-a32a-4b49-915b-ff99b3355dbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Refreshing instance network info cache due to event network-changed-8313f187-d7bf-46d7-a7fe-6454eaa6bc87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:09:13 np0005466031 nova_compute[235803]: 2025-10-02 13:09:13.621 2 DEBUG oslo_concurrency.lockutils [req-c141df69-3672-4db3-a130-50022742fe0f req-e2c132a7-a32a-4b49-915b-ff99b3355dbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:13 np0005466031 nova_compute[235803]: 2025-10-02 13:09:13.621 2 DEBUG oslo_concurrency.lockutils [req-c141df69-3672-4db3-a130-50022742fe0f req-e2c132a7-a32a-4b49-915b-ff99b3355dbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:13 np0005466031 nova_compute[235803]: 2025-10-02 13:09:13.622 2 DEBUG nova.network.neutron [req-c141df69-3672-4db3-a130-50022742fe0f req-e2c132a7-a32a-4b49-915b-ff99b3355dbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Refreshing network info cache for port 8313f187-d7bf-46d7-a7fe-6454eaa6bc87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:09:13 np0005466031 nova_compute[235803]: 2025-10-02 13:09:13.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e364 e364: 3 total, 3 up, 3 in
Oct  2 09:09:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:14.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:14 np0005466031 nova_compute[235803]: 2025-10-02 13:09:14.630 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:14.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:15 np0005466031 nova_compute[235803]: 2025-10-02 13:09:15.875 2 DEBUG nova.network.neutron [req-c141df69-3672-4db3-a130-50022742fe0f req-e2c132a7-a32a-4b49-915b-ff99b3355dbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Updated VIF entry in instance network info cache for port 8313f187-d7bf-46d7-a7fe-6454eaa6bc87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:09:15 np0005466031 nova_compute[235803]: 2025-10-02 13:09:15.876 2 DEBUG nova.network.neutron [req-c141df69-3672-4db3-a130-50022742fe0f req-e2c132a7-a32a-4b49-915b-ff99b3355dbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Updating instance_info_cache with network_info: [{"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:15 np0005466031 nova_compute[235803]: 2025-10-02 13:09:15.911 2 DEBUG oslo_concurrency.lockutils [req-c141df69-3672-4db3-a130-50022742fe0f req-e2c132a7-a32a-4b49-915b-ff99b3355dbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:16 np0005466031 nova_compute[235803]: 2025-10-02 13:09:16.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:16 np0005466031 ceph-osd[79023]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct  2 09:09:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:16.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:16.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:16 np0005466031 ovn_controller[132413]: 2025-10-02T13:09:16Z|00791|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct  2 09:09:16 np0005466031 nova_compute[235803]: 2025-10-02 13:09:16.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.300 2 DEBUG oslo_concurrency.lockutils [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.301 2 DEBUG oslo_concurrency.lockutils [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.324 2 DEBUG nova.objects.instance [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid f07a4381-2291-4a58-a2ca-b04071e65a0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.356 2 DEBUG oslo_concurrency.lockutils [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.572 2 DEBUG oslo_concurrency.lockutils [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.572 2 DEBUG oslo_concurrency.lockutils [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.573 2 INFO nova.compute.manager [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Attaching volume 2341c515-f8fa-4cdf-87e9-1faa534d8307 to /dev/vdb#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.745 2 DEBUG os_brick.utils [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.746 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.757 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.757 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[7c38a100-df29-4a88-a165-41c7596992bf]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.758 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.767 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.767 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[e2192be9-1324-4527-a33e-cd3c4fc1b5e2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.769 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.777 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.777 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[d2b8fea9-463a-4e6e-8aa1-4c9b1838338d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.778 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[6b71935a-85a9-4a5d-9434-1e900eabf4c3]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.779 2 DEBUG oslo_concurrency.processutils [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.810 2 DEBUG oslo_concurrency.processutils [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.813 2 DEBUG os_brick.initiator.connectors.lightos [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.813 2 DEBUG os_brick.initiator.connectors.lightos [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.813 2 DEBUG os_brick.initiator.connectors.lightos [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.814 2 DEBUG os_brick.utils [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:09:17 np0005466031 nova_compute[235803]: 2025-10-02 13:09:17.814 2 DEBUG nova.virt.block_device [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Updating existing volume attachment record: 646f9755-18df-4630-8f00-f869a77fc0ce _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:09:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:18.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:18 np0005466031 nova_compute[235803]: 2025-10-02 13:09:18.541 2 DEBUG nova.objects.instance [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid f07a4381-2291-4a58-a2ca-b04071e65a0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:18 np0005466031 nova_compute[235803]: 2025-10-02 13:09:18.564 2 DEBUG nova.virt.libvirt.driver [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Attempting to attach volume 2341c515-f8fa-4cdf-87e9-1faa534d8307 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 09:09:18 np0005466031 nova_compute[235803]: 2025-10-02 13:09:18.568 2 DEBUG nova.virt.libvirt.guest [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 09:09:18 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:09:18 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-2341c515-f8fa-4cdf-87e9-1faa534d8307">
Oct  2 09:09:18 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:18 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:18 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:18 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:09:18 np0005466031 nova_compute[235803]:  <auth username="openstack">
Oct  2 09:09:18 np0005466031 nova_compute[235803]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:09:18 np0005466031 nova_compute[235803]:  </auth>
Oct  2 09:09:18 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:09:18 np0005466031 nova_compute[235803]:  <serial>2341c515-f8fa-4cdf-87e9-1faa534d8307</serial>
Oct  2 09:09:18 np0005466031 nova_compute[235803]:  <shareable/>
Oct  2 09:09:18 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:09:18 np0005466031 nova_compute[235803]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 09:09:18 np0005466031 virtqemud[235323]: End of file while reading data: Input/output error
Oct  2 09:09:18 np0005466031 nova_compute[235803]: 2025-10-02 13:09:18.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:18 np0005466031 nova_compute[235803]: 2025-10-02 13:09:18.709 2 DEBUG nova.virt.libvirt.driver [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:18 np0005466031 nova_compute[235803]: 2025-10-02 13:09:18.710 2 DEBUG nova.virt.libvirt.driver [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:18 np0005466031 nova_compute[235803]: 2025-10-02 13:09:18.710 2 DEBUG nova.virt.libvirt.driver [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:18 np0005466031 nova_compute[235803]: 2025-10-02 13:09:18.710 2 DEBUG nova.virt.libvirt.driver [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:ca:8e:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:09:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:18.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:18 np0005466031 nova_compute[235803]: 2025-10-02 13:09:18.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:18 np0005466031 nova_compute[235803]: 2025-10-02 13:09:18.924 2 DEBUG oslo_concurrency.lockutils [None req-9fd6ea17-9d50-4f55-a022-10d46c64bee4 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:19 np0005466031 nova_compute[235803]: 2025-10-02 13:09:19.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 e365: 3 total, 3 up, 3 in
Oct  2 09:09:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:20.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:20.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:21 np0005466031 nova_compute[235803]: 2025-10-02 13:09:21.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:09:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1024354763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:09:21 np0005466031 podman[321673]: 2025-10-02 13:09:21.623058454 +0000 UTC m=+0.054787449 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 09:09:21 np0005466031 podman[321672]: 2025-10-02 13:09:21.630351305 +0000 UTC m=+0.065225901 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:09:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:22.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:22 np0005466031 nova_compute[235803]: 2025-10-02 13:09:22.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:22 np0005466031 nova_compute[235803]: 2025-10-02 13:09:22.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:09:22 np0005466031 nova_compute[235803]: 2025-10-02 13:09:22.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:09:22 np0005466031 nova_compute[235803]: 2025-10-02 13:09:22.787 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:22 np0005466031 nova_compute[235803]: 2025-10-02 13:09:22.787 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:22 np0005466031 nova_compute[235803]: 2025-10-02 13:09:22.787 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:09:22 np0005466031 nova_compute[235803]: 2025-10-02 13:09:22.788 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 61bad754-8d82-465b-8545-25d700a6e146 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:22.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:23 np0005466031 nova_compute[235803]: 2025-10-02 13:09:23.519 2 DEBUG oslo_concurrency.lockutils [None req-0d45e736-e87f-4b04-bf52-0c0a7803cab7 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:23 np0005466031 nova_compute[235803]: 2025-10-02 13:09:23.520 2 DEBUG oslo_concurrency.lockutils [None req-0d45e736-e87f-4b04-bf52-0c0a7803cab7 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:23 np0005466031 nova_compute[235803]: 2025-10-02 13:09:23.536 2 INFO nova.compute.manager [None req-0d45e736-e87f-4b04-bf52-0c0a7803cab7 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Detaching volume 2341c515-f8fa-4cdf-87e9-1faa534d8307#033[00m
Oct  2 09:09:23 np0005466031 nova_compute[235803]: 2025-10-02 13:09:23.660 2 INFO nova.virt.block_device [None req-0d45e736-e87f-4b04-bf52-0c0a7803cab7 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Attempting to driver detach volume 2341c515-f8fa-4cdf-87e9-1faa534d8307 from mountpoint /dev/vdb#033[00m
Oct  2 09:09:23 np0005466031 nova_compute[235803]: 2025-10-02 13:09:23.666 2 DEBUG nova.virt.libvirt.driver [None req-0d45e736-e87f-4b04-bf52-0c0a7803cab7 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Attempting to detach device vdb from instance f07a4381-2291-4a58-a2ca-b04071e65a0a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 09:09:23 np0005466031 nova_compute[235803]: 2025-10-02 13:09:23.667 2 DEBUG nova.virt.libvirt.guest [None req-0d45e736-e87f-4b04-bf52-0c0a7803cab7 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-2341c515-f8fa-4cdf-87e9-1faa534d8307">
Oct  2 09:09:23 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  <serial>2341c515-f8fa-4cdf-87e9-1faa534d8307</serial>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  <shareable/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:09:23 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:09:23 np0005466031 nova_compute[235803]: 2025-10-02 13:09:23.673 2 INFO nova.virt.libvirt.driver [None req-0d45e736-e87f-4b04-bf52-0c0a7803cab7 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully detached device vdb from instance f07a4381-2291-4a58-a2ca-b04071e65a0a from the persistent domain config.#033[00m
Oct  2 09:09:23 np0005466031 nova_compute[235803]: 2025-10-02 13:09:23.673 2 DEBUG nova.virt.libvirt.driver [None req-0d45e736-e87f-4b04-bf52-0c0a7803cab7 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f07a4381-2291-4a58-a2ca-b04071e65a0a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 09:09:23 np0005466031 nova_compute[235803]: 2025-10-02 13:09:23.674 2 DEBUG nova.virt.libvirt.guest [None req-0d45e736-e87f-4b04-bf52-0c0a7803cab7 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-2341c515-f8fa-4cdf-87e9-1faa534d8307">
Oct  2 09:09:23 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  <serial>2341c515-f8fa-4cdf-87e9-1faa534d8307</serial>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  <shareable/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:09:23 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:09:23 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:09:23 np0005466031 nova_compute[235803]: 2025-10-02 13:09:23.776 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Received event <DeviceRemovedEvent: 1759410563.7759984, f07a4381-2291-4a58-a2ca-b04071e65a0a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 09:09:23 np0005466031 nova_compute[235803]: 2025-10-02 13:09:23.777 2 DEBUG nova.virt.libvirt.driver [None req-0d45e736-e87f-4b04-bf52-0c0a7803cab7 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f07a4381-2291-4a58-a2ca-b04071e65a0a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 09:09:23 np0005466031 nova_compute[235803]: 2025-10-02 13:09:23.779 2 INFO nova.virt.libvirt.driver [None req-0d45e736-e87f-4b04-bf52-0c0a7803cab7 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully detached device vdb from instance f07a4381-2291-4a58-a2ca-b04071e65a0a from the live domain config.#033[00m
Oct  2 09:09:23 np0005466031 nova_compute[235803]: 2025-10-02 13:09:23.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.072 2 DEBUG nova.objects.instance [None req-0d45e736-e87f-4b04-bf52-0c0a7803cab7 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid f07a4381-2291-4a58-a2ca-b04071e65a0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.113 2 DEBUG oslo_concurrency.lockutils [None req-0d45e736-e87f-4b04-bf52-0c0a7803cab7 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.150 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Updating instance_info_cache with network_info: [{"id": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "address": "fa:16:3e:53:a4:0c", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f77e4ed-6c", "ovs_interfaceid": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.171 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.171 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.172 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.172 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.202 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.202 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.203 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.203 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.203 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:24.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:09:24 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/533345457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.660 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.722 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.723 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.727 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.727 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:09:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:24.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.892 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.893 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3777MB free_disk=20.851551055908203GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.893 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:24 np0005466031 nova_compute[235803]: 2025-10-02 13:09:24.893 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:25 np0005466031 nova_compute[235803]: 2025-10-02 13:09:25.045 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 61bad754-8d82-465b-8545-25d700a6e146 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:09:25 np0005466031 nova_compute[235803]: 2025-10-02 13:09:25.045 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance f07a4381-2291-4a58-a2ca-b04071e65a0a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:09:25 np0005466031 nova_compute[235803]: 2025-10-02 13:09:25.045 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:09:25 np0005466031 nova_compute[235803]: 2025-10-02 13:09:25.046 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:09:25 np0005466031 nova_compute[235803]: 2025-10-02 13:09:25.197 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:09:25 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2098822834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:09:25 np0005466031 nova_compute[235803]: 2025-10-02 13:09:25.646 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:25 np0005466031 nova_compute[235803]: 2025-10-02 13:09:25.653 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:09:25 np0005466031 nova_compute[235803]: 2025-10-02 13:09:25.668 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:09:25 np0005466031 nova_compute[235803]: 2025-10-02 13:09:25.689 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:09:25 np0005466031 nova_compute[235803]: 2025-10-02 13:09:25.690 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:25.877 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:25.878 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:25.879 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:26 np0005466031 nova_compute[235803]: 2025-10-02 13:09:26.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:26 np0005466031 nova_compute[235803]: 2025-10-02 13:09:26.154 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:26.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:26.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:28.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:28 np0005466031 nova_compute[235803]: 2025-10-02 13:09:28.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:28.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:29 np0005466031 nova_compute[235803]: 2025-10-02 13:09:29.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:30.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:30.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:31 np0005466031 nova_compute[235803]: 2025-10-02 13:09:31.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:31 np0005466031 nova_compute[235803]: 2025-10-02 13:09:31.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:31 np0005466031 nova_compute[235803]: 2025-10-02 13:09:31.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:09:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:32.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:32.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:33.700 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:09:33 np0005466031 nova_compute[235803]: 2025-10-02 13:09:33.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:33 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:33.703 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:09:33 np0005466031 nova_compute[235803]: 2025-10-02 13:09:33.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:34.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:34.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.343885) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575344380, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 751, "num_deletes": 258, "total_data_size": 1246455, "memory_usage": 1264512, "flush_reason": "Manual Compaction"}
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575351035, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 821686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69924, "largest_seqno": 70670, "table_properties": {"data_size": 818091, "index_size": 1374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8425, "raw_average_key_size": 19, "raw_value_size": 810724, "raw_average_value_size": 1850, "num_data_blocks": 61, "num_entries": 438, "num_filter_entries": 438, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410531, "oldest_key_time": 1759410531, "file_creation_time": 1759410575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 7176 microseconds, and 2743 cpu microseconds.
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.351067) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 821686 bytes OK
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.351084) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.352716) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.352728) EVENT_LOG_v1 {"time_micros": 1759410575352724, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.352744) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 1242428, prev total WAL file size 1242428, number of live WAL files 2.
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.353233) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353136' seq:72057594037927935, type:22 .. '6C6F676D0032373638' seq:0, type:0; will stop at (end)
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(802KB)], [141(10173KB)]
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575353264, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 11239703, "oldest_snapshot_seqno": -1}
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 9080 keys, 11112389 bytes, temperature: kUnknown
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575426027, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 11112389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11054975, "index_size": 33621, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 239899, "raw_average_key_size": 26, "raw_value_size": 10896994, "raw_average_value_size": 1200, "num_data_blocks": 1279, "num_entries": 9080, "num_filter_entries": 9080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759410575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.426306) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11112389 bytes
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.427594) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.3 rd, 152.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(27.2) write-amplify(13.5) OK, records in: 9610, records dropped: 530 output_compression: NoCompression
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.427615) EVENT_LOG_v1 {"time_micros": 1759410575427606, "job": 90, "event": "compaction_finished", "compaction_time_micros": 72839, "compaction_time_cpu_micros": 30942, "output_level": 6, "num_output_files": 1, "total_output_size": 11112389, "num_input_records": 9610, "num_output_records": 9080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575427945, "job": 90, "event": "table_file_deletion", "file_number": 143}
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575430035, "job": 90, "event": "table_file_deletion", "file_number": 141}
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.353155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.430118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.430122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.430123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.430125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:09:35.430127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:09:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:36 np0005466031 nova_compute[235803]: 2025-10-02 13:09:36.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:09:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:36.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:09:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:36.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:38.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:38 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:38.704 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:38 np0005466031 nova_compute[235803]: 2025-10-02 13:09:38.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:38.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:40.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:40.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:41 np0005466031 nova_compute[235803]: 2025-10-02 13:09:41.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:42.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:09:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:09:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:42.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:43 np0005466031 podman[321949]: 2025-10-02 13:09:43.630976377 +0000 UTC m=+0.047955852 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:09:43 np0005466031 podman[321950]: 2025-10-02 13:09:43.668682684 +0000 UTC m=+0.083697063 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:09:43 np0005466031 nova_compute[235803]: 2025-10-02 13:09:43.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:44.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:44.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:46 np0005466031 nova_compute[235803]: 2025-10-02 13:09:46.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:46.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:46.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:48.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:48 np0005466031 nova_compute[235803]: 2025-10-02 13:09:48.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:48.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:50.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:50.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:51 np0005466031 nova_compute[235803]: 2025-10-02 13:09:51.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:52.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:52 np0005466031 nova_compute[235803]: 2025-10-02 13:09:52.582 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:52 np0005466031 nova_compute[235803]: 2025-10-02 13:09:52.582 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:52 np0005466031 nova_compute[235803]: 2025-10-02 13:09:52.603 2 DEBUG nova.compute.manager [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:09:52 np0005466031 podman[322049]: 2025-10-02 13:09:52.688923092 +0000 UTC m=+0.109609459 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct  2 09:09:52 np0005466031 podman[322048]: 2025-10-02 13:09:52.689245401 +0000 UTC m=+0.113951144 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:09:52 np0005466031 nova_compute[235803]: 2025-10-02 13:09:52.690 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:52 np0005466031 nova_compute[235803]: 2025-10-02 13:09:52.690 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:52 np0005466031 nova_compute[235803]: 2025-10-02 13:09:52.698 2 DEBUG nova.virt.hardware [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:09:52 np0005466031 nova_compute[235803]: 2025-10-02 13:09:52.699 2 INFO nova.compute.claims [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:09:52 np0005466031 nova_compute[235803]: 2025-10-02 13:09:52.827 2 DEBUG oslo_concurrency.processutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:52.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:09:53 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/527186031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.273 2 DEBUG oslo_concurrency.processutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.279 2 DEBUG nova.compute.provider_tree [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.302 2 DEBUG nova.scheduler.client.report [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.329 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.330 2 DEBUG nova.compute.manager [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.381 2 DEBUG nova.compute.manager [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.382 2 DEBUG nova.network.neutron [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.404 2 INFO nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.423 2 DEBUG nova.compute.manager [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.475 2 INFO nova.virt.block_device [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Booting with volume 23ece991-964a-4523-b231-9590440c3d93 at /dev/vda#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.621 2 DEBUG os_brick.utils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.622 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.628 2 DEBUG nova.policy [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '62f4c4b5cc194bd59ca9cc9f1da78a79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '954946ff6b204fba90f767ec67210620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.632 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.632 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8d7df8-5b62-426b-bcbb-73619675f6a5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.635 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.644 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.645 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[f23bf924-3cea-49df-aa89-45ccfdfaf30e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.646 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.655 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.655 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0f3970-2d02-428f-b85d-5c6dbe9e247d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.656 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[4ebb219b-7e5c-4e97-b4fc-b9f27cd1156e]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.657 2 DEBUG oslo_concurrency.processutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.686 2 DEBUG oslo_concurrency.processutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.689 2 DEBUG os_brick.initiator.connectors.lightos [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.689 2 DEBUG os_brick.initiator.connectors.lightos [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.690 2 DEBUG os_brick.initiator.connectors.lightos [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.690 2 DEBUG os_brick.utils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.690 2 DEBUG nova.virt.block_device [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updating existing volume attachment record: c562c2d2-86a1-4802-aa4a-ca557acc0921 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:09:53 np0005466031 nova_compute[235803]: 2025-10-02 13:09:53.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:54.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:54 np0005466031 nova_compute[235803]: 2025-10-02 13:09:54.678 2 DEBUG nova.compute.manager [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:09:54 np0005466031 nova_compute[235803]: 2025-10-02 13:09:54.679 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:09:54 np0005466031 nova_compute[235803]: 2025-10-02 13:09:54.680 2 INFO nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Creating image(s)#033[00m
Oct  2 09:09:54 np0005466031 nova_compute[235803]: 2025-10-02 13:09:54.680 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 09:09:54 np0005466031 nova_compute[235803]: 2025-10-02 13:09:54.680 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Ensure instance console log exists: /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:09:54 np0005466031 nova_compute[235803]: 2025-10-02 13:09:54.680 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:54 np0005466031 nova_compute[235803]: 2025-10-02 13:09:54.681 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:54 np0005466031 nova_compute[235803]: 2025-10-02 13:09:54.681 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:54.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:55 np0005466031 nova_compute[235803]: 2025-10-02 13:09:55.241 2 DEBUG nova.network.neutron [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Successfully created port: f2d2820c-dd48-47a4-ab94-dd6136c8e314 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:09:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:56 np0005466031 nova_compute[235803]: 2025-10-02 13:09:56.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:56 np0005466031 nova_compute[235803]: 2025-10-02 13:09:56.344 2 DEBUG nova.network.neutron [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Successfully updated port: f2d2820c-dd48-47a4-ab94-dd6136c8e314 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:09:56 np0005466031 nova_compute[235803]: 2025-10-02 13:09:56.363 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:56 np0005466031 nova_compute[235803]: 2025-10-02 13:09:56.364 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquired lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:56 np0005466031 nova_compute[235803]: 2025-10-02 13:09:56.364 2 DEBUG nova.network.neutron [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:09:56 np0005466031 nova_compute[235803]: 2025-10-02 13:09:56.424 2 DEBUG nova.compute.manager [req-ad534338-7915-4643-8cde-54b9be80ccb2 req-e26b4e7b-538c-4326-bb77-f6c2cf464131 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-changed-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:56 np0005466031 nova_compute[235803]: 2025-10-02 13:09:56.425 2 DEBUG nova.compute.manager [req-ad534338-7915-4643-8cde-54b9be80ccb2 req-e26b4e7b-538c-4326-bb77-f6c2cf464131 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Refreshing instance network info cache due to event network-changed-f2d2820c-dd48-47a4-ab94-dd6136c8e314. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:09:56 np0005466031 nova_compute[235803]: 2025-10-02 13:09:56.425 2 DEBUG oslo_concurrency.lockutils [req-ad534338-7915-4643-8cde-54b9be80ccb2 req-e26b4e7b-538c-4326-bb77-f6c2cf464131 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:56 np0005466031 nova_compute[235803]: 2025-10-02 13:09:56.479 2 DEBUG nova.network.neutron [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:09:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:56.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:09:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:56.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.413 2 DEBUG nova.network.neutron [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updating instance_info_cache with network_info: [{"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.436 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Releasing lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.437 2 DEBUG nova.compute.manager [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Instance network_info: |[{"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.437 2 DEBUG oslo_concurrency.lockutils [req-ad534338-7915-4643-8cde-54b9be80ccb2 req-e26b4e7b-538c-4326-bb77-f6c2cf464131 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.437 2 DEBUG nova.network.neutron [req-ad534338-7915-4643-8cde-54b9be80ccb2 req-e26b4e7b-538c-4326-bb77-f6c2cf464131 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Refreshing network info cache for port f2d2820c-dd48-47a4-ab94-dd6136c8e314 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.440 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Start _get_guest_xml network_info=[{"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 
'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-23ece991-964a-4523-b231-9590440c3d93', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '23ece991-964a-4523-b231-9590440c3d93', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '79964143-e208-4552-8380-513c3adf09ac', 'attached_at': '', 'detached_at': '', 'volume_id': '23ece991-964a-4523-b231-9590440c3d93', 'serial': '23ece991-964a-4523-b231-9590440c3d93'}, 'attachment_id': 'c562c2d2-86a1-4802-aa4a-ca557acc0921', 'delete_on_termination': True, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.445 2 WARNING nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.449 2 DEBUG nova.virt.libvirt.host [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.449 2 DEBUG nova.virt.libvirt.host [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.455 2 DEBUG nova.virt.libvirt.host [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.456 2 DEBUG nova.virt.libvirt.host [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.457 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.457 2 DEBUG nova.virt.hardware [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.458 2 DEBUG nova.virt.hardware [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.458 2 DEBUG nova.virt.hardware [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.458 2 DEBUG nova.virt.hardware [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.458 2 DEBUG nova.virt.hardware [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.458 2 DEBUG nova.virt.hardware [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.459 2 DEBUG nova.virt.hardware [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.459 2 DEBUG nova.virt.hardware [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.459 2 DEBUG nova.virt.hardware [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.459 2 DEBUG nova.virt.hardware [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.459 2 DEBUG nova.virt.hardware [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.509 2 DEBUG nova.storage.rbd_utils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image 79964143-e208-4552-8380-513c3adf09ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.512 2 DEBUG oslo_concurrency.processutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:09:57 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3080379018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.941 2 DEBUG oslo_concurrency.processutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.989 2 DEBUG nova.virt.libvirt.vif [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:09:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-838477094',display_name='tempest-TestShelveInstance-server-838477094',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-838477094',id=201,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVidC3ivoi0/BSVTNr+Q25H86pwCavGSEVSNuXxP+lg72xfVuGJwhfG7zkX1TfYRkx+B8B4fFFME+SiQB5nTIMxK1PVVykpxHJMksdNCrSHtdTo6A7G7KE0+4LlyrKslw==',key_name='tempest-TestShelveInstance-2040419602',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-v1nr892j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:53Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=79964143-e208-4552-8380-513c3adf09ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.990 2 DEBUG nova.network.os_vif_util [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.991 2 DEBUG nova.network.os_vif_util [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:09:57 np0005466031 nova_compute[235803]: 2025-10-02 13:09:57.992 2 DEBUG nova.objects.instance [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 79964143-e208-4552-8380-513c3adf09ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.010 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  <uuid>79964143-e208-4552-8380-513c3adf09ac</uuid>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  <name>instance-000000c9</name>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestShelveInstance-server-838477094</nova:name>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:09:57</nova:creationTime>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <nova:user uuid="62f4c4b5cc194bd59ca9cc9f1da78a79">tempest-TestShelveInstance-228669170-project-member</nova:user>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <nova:project uuid="954946ff6b204fba90f767ec67210620">tempest-TestShelveInstance-228669170</nova:project>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <nova:port uuid="f2d2820c-dd48-47a4-ab94-dd6136c8e314">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <entry name="serial">79964143-e208-4552-8380-513c3adf09ac</entry>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <entry name="uuid">79964143-e208-4552-8380-513c3adf09ac</entry>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/79964143-e208-4552-8380-513c3adf09ac_disk.config">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-23ece991-964a-4523-b231-9590440c3d93">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <serial>23ece991-964a-4523-b231-9590440c3d93</serial>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:47:17:41"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <target dev="tapf2d2820c-dd"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/console.log" append="off"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:09:58 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:09:58 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:09:58 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:09:58 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.011 2 DEBUG nova.compute.manager [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Preparing to wait for external event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.011 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.011 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.011 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.012 2 DEBUG nova.virt.libvirt.vif [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:09:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-838477094',display_name='tempest-TestShelveInstance-server-838477094',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-838477094',id=201,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVidC3ivoi0/BSVTNr+Q25H86pwCavGSEVSNuXxP+lg72xfVuGJwhfG7zkX1TfYRkx+B8B4fFFME+SiQB5nTIMxK1PVVykpxHJMksdNCrSHtdTo6A7G7KE0+4LlyrKslw==',key_name='tempest-TestShelveInstance-2040419602',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-v1nr892j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:53Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=79964143-e208-4552-8380-513c3adf09ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.012 2 DEBUG nova.network.os_vif_util [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.013 2 DEBUG nova.network.os_vif_util [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.013 2 DEBUG os_vif [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.014 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.014 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.016 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2d2820c-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.017 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2d2820c-dd, col_values=(('external_ids', {'iface-id': 'f2d2820c-dd48-47a4-ab94-dd6136c8e314', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:17:41', 'vm-uuid': '79964143-e208-4552-8380-513c3adf09ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:58 np0005466031 NetworkManager[44907]: <info>  [1759410598.0192] manager: (tapf2d2820c-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.026 2 INFO os_vif [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd')#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.076 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.077 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.077 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No VIF found with MAC fa:16:3e:47:17:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.078 2 INFO nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Using config drive#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.105 2 DEBUG nova.storage.rbd_utils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image 79964143-e208-4552-8380-513c3adf09ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:58.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.603 2 INFO nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Creating config drive at /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/disk.config#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.608 2 DEBUG oslo_concurrency.processutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvdtmkld9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.740 2 DEBUG oslo_concurrency.processutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvdtmkld9" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.763 2 DEBUG nova.storage.rbd_utils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image 79964143-e208-4552-8380-513c3adf09ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.766 2 DEBUG oslo_concurrency.processutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/disk.config 79964143-e208-4552-8380-513c3adf09ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:09:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:09:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:58.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.928 2 DEBUG oslo_concurrency.processutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/disk.config 79964143-e208-4552-8380-513c3adf09ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.929 2 INFO nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Deleting local config drive /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac/disk.config because it was imported into RBD.#033[00m
Oct  2 09:09:58 np0005466031 NetworkManager[44907]: <info>  [1759410598.9723] manager: (tapf2d2820c-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/357)
Oct  2 09:09:58 np0005466031 kernel: tapf2d2820c-dd: entered promiscuous mode
Oct  2 09:09:58 np0005466031 ovn_controller[132413]: 2025-10-02T13:09:58Z|00792|binding|INFO|Claiming lport f2d2820c-dd48-47a4-ab94-dd6136c8e314 for this chassis.
Oct  2 09:09:58 np0005466031 ovn_controller[132413]: 2025-10-02T13:09:58Z|00793|binding|INFO|f2d2820c-dd48-47a4-ab94-dd6136c8e314: Claiming fa:16:3e:47:17:41 10.100.0.10
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:58.982 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:17:41 10.100.0.10'], port_security=['fa:16:3e:47:17:41 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '79964143-e208-4552-8380-513c3adf09ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4223a8cc-f72a-428d-accb-3f4210096878', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '954946ff6b204fba90f767ec67210620', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ecabba0-db02-4a2b-8a99-c435db80e5c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8308a587-4cdc-4eb3-9fc6-aab7267ec23f, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=f2d2820c-dd48-47a4-ab94-dd6136c8e314) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:09:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:58.983 141898 INFO neutron.agent.ovn.metadata.agent [-] Port f2d2820c-dd48-47a4-ab94-dd6136c8e314 in datapath 4223a8cc-f72a-428d-accb-3f4210096878 bound to our chassis#033[00m
Oct  2 09:09:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:58.984 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4223a8cc-f72a-428d-accb-3f4210096878#033[00m
Oct  2 09:09:58 np0005466031 ovn_controller[132413]: 2025-10-02T13:09:58Z|00794|binding|INFO|Setting lport f2d2820c-dd48-47a4-ab94-dd6136c8e314 ovn-installed in OVS
Oct  2 09:09:58 np0005466031 ovn_controller[132413]: 2025-10-02T13:09:58Z|00795|binding|INFO|Setting lport f2d2820c-dd48-47a4-ab94-dd6136c8e314 up in Southbound
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:58.996 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[014ccc26-47eb-43ae-a5c2-89ab9c5eadfa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:58.997 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4223a8cc-f1 in ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:09:58 np0005466031 nova_compute[235803]: 2025-10-02 13:09:58.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:58.999 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4223a8cc-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:58.999 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e9045d26-a6df-4b1f-b8a8-86f06ac7e6ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.000 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c2439171-7c85-47f7-9d5a-9bed84d35cce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 systemd-udevd[322285]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.011 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[2d42ad2a-412e-4df9-b1d8-035b8441ee38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 systemd-machined[192227]: New machine qemu-92-instance-000000c9.
Oct  2 09:09:59 np0005466031 NetworkManager[44907]: <info>  [1759410599.0217] device (tapf2d2820c-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:09:59 np0005466031 NetworkManager[44907]: <info>  [1759410599.0229] device (tapf2d2820c-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:09:59 np0005466031 systemd[1]: Started Virtual Machine qemu-92-instance-000000c9.
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.030 2 DEBUG nova.network.neutron [req-ad534338-7915-4643-8cde-54b9be80ccb2 req-e26b4e7b-538c-4326-bb77-f6c2cf464131 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updated VIF entry in instance network info cache for port f2d2820c-dd48-47a4-ab94-dd6136c8e314. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.031 2 DEBUG nova.network.neutron [req-ad534338-7915-4643-8cde-54b9be80ccb2 req-e26b4e7b-538c-4326-bb77-f6c2cf464131 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updating instance_info_cache with network_info: [{"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.036 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[776f42cc-111f-461c-bc4a-26da4ccf59fb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.049 2 DEBUG oslo_concurrency.lockutils [req-ad534338-7915-4643-8cde-54b9be80ccb2 req-e26b4e7b-538c-4326-bb77-f6c2cf464131 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.064 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d638de08-94db-4d87-8b69-c506cd79c64e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.070 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[59fbef88-8eaa-4b68-9d15-3e42d7ee3f25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 NetworkManager[44907]: <info>  [1759410599.0710] manager: (tap4223a8cc-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/358)
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.100 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed619ab-756d-4f25-a6fa-7c58377d7814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.104 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[92aa3500-e2ac-4736-bdee-f320d4959a3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 NetworkManager[44907]: <info>  [1759410599.1254] device (tap4223a8cc-f0): carrier: link connected
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.130 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[035f7774-b4d8-45ba-8baf-aa4d4f74b95a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.146 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[078466a3-1ffe-4103-934b-58f4a51712a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4223a8cc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f5:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 245], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845471, 'reachable_time': 27305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322316, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.161 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e7eca08c-b478-42b1-992d-67bcb102516b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:f568'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845471, 'tstamp': 845471}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322317, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.179 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c39a3cb9-f8f7-4eaa-9a39-8be985ec6e4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4223a8cc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f5:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 245], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845471, 'reachable_time': 27305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322318, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.196 2 DEBUG nova.compute.manager [req-6e33fc98-4f15-46e6-a708-9bdc6cd5883c req-f12316d2-1123-49cf-8f90-f74be32c3aaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.197 2 DEBUG oslo_concurrency.lockutils [req-6e33fc98-4f15-46e6-a708-9bdc6cd5883c req-f12316d2-1123-49cf-8f90-f74be32c3aaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.197 2 DEBUG oslo_concurrency.lockutils [req-6e33fc98-4f15-46e6-a708-9bdc6cd5883c req-f12316d2-1123-49cf-8f90-f74be32c3aaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.198 2 DEBUG oslo_concurrency.lockutils [req-6e33fc98-4f15-46e6-a708-9bdc6cd5883c req-f12316d2-1123-49cf-8f90-f74be32c3aaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.198 2 DEBUG nova.compute.manager [req-6e33fc98-4f15-46e6-a708-9bdc6cd5883c req-f12316d2-1123-49cf-8f90-f74be32c3aaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Processing event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.210 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a28b9cac-716d-4a66-b2e1-cbc894bead4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.277 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4331af27-a780-4cd0-8ae2-3eed87f29c61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.278 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4223a8cc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.278 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.278 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4223a8cc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:59 np0005466031 NetworkManager[44907]: <info>  [1759410599.2810] manager: (tap4223a8cc-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/359)
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:59 np0005466031 kernel: tap4223a8cc-f0: entered promiscuous mode
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.283 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4223a8cc-f0, col_values=(('external_ids', {'iface-id': '97eaefd1-ed23-4787-9782-741cd2cf7e3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:59 np0005466031 ovn_controller[132413]: 2025-10-02T13:09:59Z|00796|binding|INFO|Releasing lport 97eaefd1-ed23-4787-9782-741cd2cf7e3b from this chassis (sb_readonly=0)
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.299 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.300 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[42172131-4582-4fc5-b3d7-b8a9ebc57562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.301 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-4223a8cc-f72a-428d-accb-3f4210096878
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 4223a8cc-f72a-428d-accb-3f4210096878
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:09:59 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:09:59.301 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'env', 'PROCESS_TAG=haproxy-4223a8cc-f72a-428d-accb-3f4210096878', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4223a8cc-f72a-428d-accb-3f4210096878.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:09:59 np0005466031 podman[322392]: 2025-10-02 13:09:59.714860741 +0000 UTC m=+0.095943085 container create 14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 09:09:59 np0005466031 podman[322392]: 2025-10-02 13:09:59.648268332 +0000 UTC m=+0.029350706 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:09:59 np0005466031 systemd[1]: Started libpod-conmon-14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811.scope.
Oct  2 09:09:59 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:09:59 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c6ffcd9c5f9dbd132fbe67050fb7ce46ee811d8efc0665d4e58922c9d624e0a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:09:59 np0005466031 podman[322392]: 2025-10-02 13:09:59.80785593 +0000 UTC m=+0.188938304 container init 14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 09:09:59 np0005466031 podman[322392]: 2025-10-02 13:09:59.81410184 +0000 UTC m=+0.195184184 container start 14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:09:59 np0005466031 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[322408]: [NOTICE]   (322412) : New worker (322414) forked
Oct  2 09:09:59 np0005466031 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[322408]: [NOTICE]   (322412) : Loading success.
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.862 2 DEBUG nova.compute.manager [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.862 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410599.8615613, 79964143-e208-4552-8380-513c3adf09ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.863 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] VM Started (Lifecycle Event)#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.866 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.870 2 INFO nova.virt.libvirt.driver [-] [instance: 79964143-e208-4552-8380-513c3adf09ac] Instance spawned successfully.#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.870 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.887 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.889 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.896 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.896 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.896 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.897 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.897 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.897 2 DEBUG nova.virt.libvirt.driver [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.922 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.923 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410599.8616889, 79964143-e208-4552-8380-513c3adf09ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.923 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.953 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.958 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410599.8659925, 79964143-e208-4552-8380-513c3adf09ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.958 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.978 2 INFO nova.compute.manager [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Took 5.30 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.978 2 DEBUG nova.compute.manager [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.984 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:09:59 np0005466031 nova_compute[235803]: 2025-10-02 13:09:59.989 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:10:00 np0005466031 nova_compute[235803]: 2025-10-02 13:10:00.020 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:10:00 np0005466031 nova_compute[235803]: 2025-10-02 13:10:00.065 2 INFO nova.compute.manager [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Took 7.41 seconds to build instance.#033[00m
Oct  2 09:10:00 np0005466031 nova_compute[235803]: 2025-10-02 13:10:00.084 2 DEBUG oslo_concurrency.lockutils [None req-6adb08ac-cff6-4077-bba2-5a41bb4535ae 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:00.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:00 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 09:10:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:00.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:01 np0005466031 nova_compute[235803]: 2025-10-02 13:10:01.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:01 np0005466031 nova_compute[235803]: 2025-10-02 13:10:01.309 2 DEBUG nova.compute.manager [req-0b3c96e0-dad0-4502-90ad-ab57586ecec1 req-505481ed-8d18-4a5e-95c2-8e1859919733 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:01 np0005466031 nova_compute[235803]: 2025-10-02 13:10:01.309 2 DEBUG oslo_concurrency.lockutils [req-0b3c96e0-dad0-4502-90ad-ab57586ecec1 req-505481ed-8d18-4a5e-95c2-8e1859919733 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:01 np0005466031 nova_compute[235803]: 2025-10-02 13:10:01.310 2 DEBUG oslo_concurrency.lockutils [req-0b3c96e0-dad0-4502-90ad-ab57586ecec1 req-505481ed-8d18-4a5e-95c2-8e1859919733 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:01 np0005466031 nova_compute[235803]: 2025-10-02 13:10:01.310 2 DEBUG oslo_concurrency.lockutils [req-0b3c96e0-dad0-4502-90ad-ab57586ecec1 req-505481ed-8d18-4a5e-95c2-8e1859919733 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:01 np0005466031 nova_compute[235803]: 2025-10-02 13:10:01.310 2 DEBUG nova.compute.manager [req-0b3c96e0-dad0-4502-90ad-ab57586ecec1 req-505481ed-8d18-4a5e-95c2-8e1859919733 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] No waiting events found dispatching network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:01 np0005466031 nova_compute[235803]: 2025-10-02 13:10:01.311 2 WARNING nova.compute.manager [req-0b3c96e0-dad0-4502-90ad-ab57586ecec1 req-505481ed-8d18-4a5e-95c2-8e1859919733 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received unexpected event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:10:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:02.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:02.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:03 np0005466031 nova_compute[235803]: 2025-10-02 13:10:03.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:10:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:04.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:10:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:04.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:10:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1490262338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:10:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:10:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1490262338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:10:05 np0005466031 nova_compute[235803]: 2025-10-02 13:10:05.382 2 DEBUG nova.compute.manager [req-7c42fbd7-ae36-47dd-99e4-ad45a7efe4f9 req-d11f9c31-942f-46b7-a052-f276c4b67689 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-changed-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:05 np0005466031 nova_compute[235803]: 2025-10-02 13:10:05.382 2 DEBUG nova.compute.manager [req-7c42fbd7-ae36-47dd-99e4-ad45a7efe4f9 req-d11f9c31-942f-46b7-a052-f276c4b67689 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Refreshing instance network info cache due to event network-changed-f2d2820c-dd48-47a4-ab94-dd6136c8e314. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:10:05 np0005466031 nova_compute[235803]: 2025-10-02 13:10:05.383 2 DEBUG oslo_concurrency.lockutils [req-7c42fbd7-ae36-47dd-99e4-ad45a7efe4f9 req-d11f9c31-942f-46b7-a052-f276c4b67689 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:05 np0005466031 nova_compute[235803]: 2025-10-02 13:10:05.383 2 DEBUG oslo_concurrency.lockutils [req-7c42fbd7-ae36-47dd-99e4-ad45a7efe4f9 req-d11f9c31-942f-46b7-a052-f276c4b67689 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:05 np0005466031 nova_compute[235803]: 2025-10-02 13:10:05.384 2 DEBUG nova.network.neutron [req-7c42fbd7-ae36-47dd-99e4-ad45a7efe4f9 req-d11f9c31-942f-46b7-a052-f276c4b67689 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Refreshing network info cache for port f2d2820c-dd48-47a4-ab94-dd6136c8e314 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:10:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:10:06 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1997719586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:10:06 np0005466031 nova_compute[235803]: 2025-10-02 13:10:06.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:06.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:06.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:07 np0005466031 nova_compute[235803]: 2025-10-02 13:10:07.353 2 DEBUG nova.network.neutron [req-7c42fbd7-ae36-47dd-99e4-ad45a7efe4f9 req-d11f9c31-942f-46b7-a052-f276c4b67689 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updated VIF entry in instance network info cache for port f2d2820c-dd48-47a4-ab94-dd6136c8e314. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:10:07 np0005466031 nova_compute[235803]: 2025-10-02 13:10:07.354 2 DEBUG nova.network.neutron [req-7c42fbd7-ae36-47dd-99e4-ad45a7efe4f9 req-d11f9c31-942f-46b7-a052-f276c4b67689 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updating instance_info_cache with network_info: [{"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:07 np0005466031 nova_compute[235803]: 2025-10-02 13:10:07.375 2 DEBUG oslo_concurrency.lockutils [req-7c42fbd7-ae36-47dd-99e4-ad45a7efe4f9 req-d11f9c31-942f-46b7-a052-f276c4b67689 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:08 np0005466031 nova_compute[235803]: 2025-10-02 13:10:08.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:08.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:08.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.157 2 DEBUG nova.compute.manager [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.253 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.253 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.285 2 DEBUG nova.objects.instance [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'pci_requests' on Instance uuid 469a928f-d7cb-4add-9410-629caac3f6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.302 2 DEBUG nova.virt.hardware [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.302 2 INFO nova.compute.claims [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.303 2 DEBUG nova.objects.instance [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'resources' on Instance uuid 469a928f-d7cb-4add-9410-629caac3f6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.316 2 DEBUG nova.objects.instance [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 469a928f-d7cb-4add-9410-629caac3f6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.355 2 INFO nova.compute.resource_tracker [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating resource usage from migration 316381c1-6621-421f-914a-993647bbe4a1#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.355 2 DEBUG nova.compute.resource_tracker [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Starting to track incoming migration 316381c1-6621-421f-914a-993647bbe4a1 with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.492 2 DEBUG oslo_concurrency.processutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:10.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:10.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:10:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1018353601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.932 2 DEBUG oslo_concurrency.processutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.937 2 DEBUG nova.compute.provider_tree [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.951 2 DEBUG nova.scheduler.client.report [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.978 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:10 np0005466031 nova_compute[235803]: 2025-10-02 13:10:10.979 2 INFO nova.compute.manager [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Migrating#033[00m
Oct  2 09:10:11 np0005466031 nova_compute[235803]: 2025-10-02 13:10:11.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:11 np0005466031 nova_compute[235803]: 2025-10-02 13:10:11.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:12.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:12 np0005466031 systemd[1]: Created slice User Slice of UID 42436.
Oct  2 09:10:12 np0005466031 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct  2 09:10:12 np0005466031 systemd-logind[786]: New session 68 of user nova.
Oct  2 09:10:12 np0005466031 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct  2 09:10:12 np0005466031 systemd[1]: Starting User Manager for UID 42436...
Oct  2 09:10:12 np0005466031 systemd[322458]: Queued start job for default target Main User Target.
Oct  2 09:10:12 np0005466031 systemd[322458]: Created slice User Application Slice.
Oct  2 09:10:12 np0005466031 systemd[322458]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 09:10:12 np0005466031 systemd[322458]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 09:10:12 np0005466031 systemd[322458]: Reached target Paths.
Oct  2 09:10:12 np0005466031 systemd[322458]: Reached target Timers.
Oct  2 09:10:12 np0005466031 systemd[322458]: Starting D-Bus User Message Bus Socket...
Oct  2 09:10:12 np0005466031 systemd[322458]: Starting Create User's Volatile Files and Directories...
Oct  2 09:10:12 np0005466031 systemd[322458]: Finished Create User's Volatile Files and Directories.
Oct  2 09:10:12 np0005466031 systemd[322458]: Listening on D-Bus User Message Bus Socket.
Oct  2 09:10:12 np0005466031 systemd[322458]: Reached target Sockets.
Oct  2 09:10:12 np0005466031 systemd[322458]: Reached target Basic System.
Oct  2 09:10:12 np0005466031 systemd[1]: Started User Manager for UID 42436.
Oct  2 09:10:12 np0005466031 systemd[322458]: Reached target Main User Target.
Oct  2 09:10:12 np0005466031 systemd[322458]: Startup finished in 144ms.
Oct  2 09:10:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:10:12Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:17:41 10.100.0.10
Oct  2 09:10:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:10:12Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:17:41 10.100.0.10
Oct  2 09:10:12 np0005466031 systemd[1]: Started Session 68 of User nova.
Oct  2 09:10:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:12.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:12 np0005466031 systemd-logind[786]: Session 68 logged out. Waiting for processes to exit.
Oct  2 09:10:12 np0005466031 systemd[1]: session-68.scope: Deactivated successfully.
Oct  2 09:10:12 np0005466031 systemd-logind[786]: Removed session 68.
Oct  2 09:10:13 np0005466031 nova_compute[235803]: 2025-10-02 13:10:13.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:13 np0005466031 systemd-logind[786]: New session 70 of user nova.
Oct  2 09:10:13 np0005466031 systemd[1]: Started Session 70 of User nova.
Oct  2 09:10:13 np0005466031 systemd[1]: session-70.scope: Deactivated successfully.
Oct  2 09:10:13 np0005466031 systemd-logind[786]: Session 70 logged out. Waiting for processes to exit.
Oct  2 09:10:13 np0005466031 systemd-logind[786]: Removed session 70.
Oct  2 09:10:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:14.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:14 np0005466031 podman[322532]: 2025-10-02 13:10:14.62333136 +0000 UTC m=+0.056135698 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:10:14 np0005466031 podman[322533]: 2025-10-02 13:10:14.651182763 +0000 UTC m=+0.083683412 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:10:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:14.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:16 np0005466031 nova_compute[235803]: 2025-10-02 13:10:16.069 2 DEBUG nova.compute.manager [req-26edbf75-8c40-4d2c-ac2b-96e13ccf2461 req-d4330359-81ac-4509-a543-1c9616c3b1ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-unplugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:16 np0005466031 nova_compute[235803]: 2025-10-02 13:10:16.070 2 DEBUG oslo_concurrency.lockutils [req-26edbf75-8c40-4d2c-ac2b-96e13ccf2461 req-d4330359-81ac-4509-a543-1c9616c3b1ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:16 np0005466031 nova_compute[235803]: 2025-10-02 13:10:16.070 2 DEBUG oslo_concurrency.lockutils [req-26edbf75-8c40-4d2c-ac2b-96e13ccf2461 req-d4330359-81ac-4509-a543-1c9616c3b1ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:16 np0005466031 nova_compute[235803]: 2025-10-02 13:10:16.070 2 DEBUG oslo_concurrency.lockutils [req-26edbf75-8c40-4d2c-ac2b-96e13ccf2461 req-d4330359-81ac-4509-a543-1c9616c3b1ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:16 np0005466031 nova_compute[235803]: 2025-10-02 13:10:16.070 2 DEBUG nova.compute.manager [req-26edbf75-8c40-4d2c-ac2b-96e13ccf2461 req-d4330359-81ac-4509-a543-1c9616c3b1ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] No waiting events found dispatching network-vif-unplugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:16 np0005466031 nova_compute[235803]: 2025-10-02 13:10:16.070 2 WARNING nova.compute.manager [req-26edbf75-8c40-4d2c-ac2b-96e13ccf2461 req-d4330359-81ac-4509-a543-1c9616c3b1ff 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received unexpected event network-vif-unplugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for instance with vm_state active and task_state resize_migrating.#033[00m
Oct  2 09:10:16 np0005466031 nova_compute[235803]: 2025-10-02 13:10:16.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:16.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:16 np0005466031 nova_compute[235803]: 2025-10-02 13:10:16.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:16.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:17 np0005466031 nova_compute[235803]: 2025-10-02 13:10:17.244 2 INFO nova.network.neutron [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 09:10:17 np0005466031 nova_compute[235803]: 2025-10-02 13:10:17.941 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:17 np0005466031 nova_compute[235803]: 2025-10-02 13:10:17.941 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:17 np0005466031 nova_compute[235803]: 2025-10-02 13:10:17.942 2 DEBUG nova.network.neutron [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:10:18 np0005466031 nova_compute[235803]: 2025-10-02 13:10:18.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:18 np0005466031 nova_compute[235803]: 2025-10-02 13:10:18.481 2 DEBUG nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:18 np0005466031 nova_compute[235803]: 2025-10-02 13:10:18.482 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:18 np0005466031 nova_compute[235803]: 2025-10-02 13:10:18.482 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:18 np0005466031 nova_compute[235803]: 2025-10-02 13:10:18.483 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:18 np0005466031 nova_compute[235803]: 2025-10-02 13:10:18.483 2 DEBUG nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] No waiting events found dispatching network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:18 np0005466031 nova_compute[235803]: 2025-10-02 13:10:18.484 2 WARNING nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received unexpected event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 09:10:18 np0005466031 nova_compute[235803]: 2025-10-02 13:10:18.487 2 DEBUG nova.compute.manager [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:18 np0005466031 nova_compute[235803]: 2025-10-02 13:10:18.487 2 DEBUG nova.compute.manager [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing instance network info cache due to event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:10:18 np0005466031 nova_compute[235803]: 2025-10-02 13:10:18.488 2 DEBUG oslo_concurrency.lockutils [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:18.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:18.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.111 2 DEBUG nova.network.neutron [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating instance_info_cache with network_info: [{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.131 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.134 2 DEBUG oslo_concurrency.lockutils [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.134 2 DEBUG nova.network.neutron [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.224 2 DEBUG os_brick.utils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.225 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.235 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.236 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[0f844171-cbb9-421b-9cb9-3cf4a22f22a5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.237 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.246 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.246 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b7e569-6aa1-4d8a-a2ff-0de9d036f78b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.248 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.256 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.256 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[5929aa9b-1e0e-4d6c-bb1e-90e8e1d3f7bf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.257 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[3c52324c-f667-4f91-a104-5fe753f17b73]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.258 2 DEBUG oslo_concurrency.processutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.293 2 DEBUG oslo_concurrency.processutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.295 2 DEBUG os_brick.initiator.connectors.lightos [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.296 2 DEBUG os_brick.initiator.connectors.lightos [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.296 2 DEBUG os_brick.initiator.connectors.lightos [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.296 2 DEBUG os_brick.utils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:10:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:20.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:20 np0005466031 nova_compute[235803]: 2025-10-02 13:10:20.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:10:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:20.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:10:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:10:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1814106570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:10:21 np0005466031 nova_compute[235803]: 2025-10-02 13:10:21.039 2 DEBUG oslo_concurrency.lockutils [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac" by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:21 np0005466031 nova_compute[235803]: 2025-10-02 13:10:21.040 2 DEBUG oslo_concurrency.lockutils [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac" acquired by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:21 np0005466031 nova_compute[235803]: 2025-10-02 13:10:21.040 2 INFO nova.compute.manager [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Shelve offloading#033[00m
Oct  2 09:10:21 np0005466031 nova_compute[235803]: 2025-10-02 13:10:21.069 2 DEBUG nova.virt.libvirt.driver [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 09:10:21 np0005466031 nova_compute[235803]: 2025-10-02 13:10:21.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:21 np0005466031 nova_compute[235803]: 2025-10-02 13:10:21.229 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct  2 09:10:21 np0005466031 nova_compute[235803]: 2025-10-02 13:10:21.231 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 09:10:21 np0005466031 nova_compute[235803]: 2025-10-02 13:10:21.231 2 INFO nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Creating image(s)#033[00m
Oct  2 09:10:21 np0005466031 nova_compute[235803]: 2025-10-02 13:10:21.277 2 DEBUG nova.storage.rbd_utils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] creating snapshot(nova-resize) on rbd image(469a928f-d7cb-4add-9410-629caac3f6f8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 09:10:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e366 e366: 3 total, 3 up, 3 in
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.017 2 DEBUG nova.objects.instance [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 469a928f-d7cb-4add-9410-629caac3f6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.130 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.132 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Ensure instance console log exists: /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.132 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.133 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.133 2 DEBUG oslo_concurrency.lockutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.136 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Start _get_guest_xml network_info=[{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:67:d9:a1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-8347daf9-f32f-4c50-b89e-df9e913044db', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '8347daf9-f32f-4c50-b89e-df9e913044db', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '469a928f-d7cb-4add-9410-629caac3f6f8', 'attached_at': '2025-10-02T13:10:20.000000', 'detached_at': '', 'volume_id': '8347daf9-f32f-4c50-b89e-df9e913044db', 'multiattach': True, 'serial': '8347daf9-f32f-4c50-b89e-df9e913044db'}, 'attachment_id': '49bf8921-a668-4225-ab0f-aecfaf369881', 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'guest_format': None, 'boot_index': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.141 2 WARNING nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.147 2 DEBUG nova.virt.libvirt.host [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.147 2 DEBUG nova.virt.libvirt.host [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.150 2 DEBUG nova.virt.libvirt.host [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.151 2 DEBUG nova.virt.libvirt.host [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.152 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.152 2 DEBUG nova.virt.hardware [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.152 2 DEBUG nova.virt.hardware [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.153 2 DEBUG nova.virt.hardware [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.153 2 DEBUG nova.virt.hardware [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.153 2 DEBUG nova.virt.hardware [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.153 2 DEBUG nova.virt.hardware [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.153 2 DEBUG nova.virt.hardware [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.153 2 DEBUG nova.virt.hardware [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.154 2 DEBUG nova.virt.hardware [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.154 2 DEBUG nova.virt.hardware [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.154 2 DEBUG nova.virt.hardware [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.155 2 DEBUG nova.objects.instance [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 469a928f-d7cb-4add-9410-629caac3f6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.178 2 DEBUG oslo_concurrency.processutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:22.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:10:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/153590829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.632 2 DEBUG oslo_concurrency.processutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:22 np0005466031 nova_compute[235803]: 2025-10-02 13:10:22.672 2 DEBUG oslo_concurrency.processutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:22.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:10:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1563581413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.177 2 DEBUG oslo_concurrency.processutils [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
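The `ceph mon dump --format=json` calls above are how the libvirt driver discovers monitor addresses for the RBD disk sources it is about to emit. A sketch of parsing that output into the hosts/ports lists seen in the connection_info, assuming an abridged, hypothetical mon-dump payload (the real dump carries many more fields such as epoch and fsid):

```python
import json

# Abridged, hypothetical `ceph mon dump --format=json` output.
mon_dump = json.dumps({
    "mons": [
        {"name": "a", "addr": "192.168.122.100:6789/0"},
        {"name": "b", "addr": "192.168.122.102:6789/0"},
        {"name": "c", "addr": "192.168.122.101:6789/0"},
    ]
})

def mon_hosts_and_ports(dump_json):
    """Split each monitor's 'addr' into (host, port), the shape of the
    hosts/ports lists in the volume connection_info above."""
    hosts, ports = [], []
    for mon in json.loads(dump_json)["mons"]:
        addr = mon["addr"].split("/")[0]     # drop the trailing /nonce
        host, port = addr.rsplit(":", 1)
        hosts.append(host)
        ports.append(port)
    return hosts, ports

hosts, ports = mon_hosts_and_ports(mon_dump)
print(hosts)  # ['192.168.122.100', '192.168.122.102', '192.168.122.101']
print(ports)  # ['6789', '6789', '6789']
```

Each dump took roughly half a second here, and the driver runs it once per RBD-backed disk, which is why the command appears twice in quick succession.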
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.206 2 DEBUG nova.virt.libvirt.vif [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=199,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-17mdigwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',i
mage_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:10:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=469a928f-d7cb-4add-9410-629caac3f6f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:67:d9:a1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.207 2 DEBUG nova.network.os_vif_util [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:67:d9:a1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.210 2 DEBUG nova.network.os_vif_util [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
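The two entries above show nova.network.os_vif_util turning the nova-side VIF dict into a VIFOpenVSwitch object. A hypothetical, heavily simplified version of that conversion for the ovs case only (the real `nova_to_osvif_vif` handles many VIF types and builds full os-vif objects; `to_osvif` and its field selection here are assumptions for illustration):

```python
from types import SimpleNamespace

# The relevant subset of the nova VIF dict logged above.
nova_vif = {
    "id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0",
    "address": "fa:16:3e:67:d9:a1",
    "type": "ovs",
    "devname": "tap84c1a249-c4",
    "active": False,
    "preserve_on_delete": False,
    "details": {"port_filter": True, "bridge_name": "br-int"},
}

def to_osvif(vif):
    """Map nova VIF fields onto the attributes visible in the converted
    VIFOpenVSwitch repr -- only the ovs case is sketched here."""
    if vif["type"] != "ovs":
        raise ValueError("only the ovs case is sketched here")
    return SimpleNamespace(
        id=vif["id"],
        address=vif["address"],
        bridge_name=vif["details"]["bridge_name"],
        has_traffic_filtering=vif["details"].get("port_filter", False),
        vif_name=vif["devname"],
        active=vif["active"],
        preserve_on_delete=vif["preserve_on_delete"],
    )

obj = to_osvif(nova_vif)
print(obj.vif_name)  # tap84c1a249-c4
```

Note how `devname` becomes `vif_name` and `details.port_filter` becomes `has_traffic_filtering`, matching the converted object in the log.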
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.215 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  <uuid>469a928f-d7cb-4add-9410-629caac3f6f8</uuid>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  <name>instance-000000c7</name>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  <memory>196608</memory>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <nova:name>multiattach-server-0</nova:name>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:10:22</nova:creationTime>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.micro">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <nova:memory>192</nova:memory>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <nova:user uuid="156cc6022c70402ab6d194a340b076d5">tempest-AttachVolumeMultiAttachTest-2011266702-project-member</nova:user>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <nova:project uuid="9f85b8f387b146d29eabe946c4fbdee8">tempest-AttachVolumeMultiAttachTest-2011266702</nova:project>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <nova:port uuid="84c1a249-c4f5-48bf-835d-bbbc75fefeb0">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <entry name="serial">469a928f-d7cb-4add-9410-629caac3f6f8</entry>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <entry name="uuid">469a928f-d7cb-4add-9410-629caac3f6f8</entry>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/469a928f-d7cb-4add-9410-629caac3f6f8_disk">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/469a928f-d7cb-4add-9410-629caac3f6f8_disk.config">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-8347daf9-f32f-4c50-b89e-df9e913044db">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <target dev="vdb" bus="virtio"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <serial>8347daf9-f32f-4c50-b89e-df9e913044db</serial>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <shareable/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:67:d9:a1"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <target dev="tap84c1a249-c4"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8/console.log" append="off"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:10:23 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:10:23 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:10:23 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:10:23 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
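The generated domain XML above can be inspected with standard tooling, which is handy when checking which guest device maps to which RBD image (for example, confirming the multiattach volume landed on vdb). A sketch using Python's stdlib ElementTree, with the XML abridged to just the three `<disk>` elements from the dump:

```python
import xml.etree.ElementTree as ET

# Abridged copy of the guest XML above, keeping only the <disk> elements.
domain_xml = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/469a928f-d7cb-4add-9410-629caac3f6f8_disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/469a928f-d7cb-4add-9410-629caac3f6f8_disk.config"/>
      <target dev="sda" bus="sata"/>
    </disk>
    <disk type="network" device="disk">
      <source protocol="rbd" name="volumes/volume-8347daf9-f32f-4c50-b89e-df9e913044db"/>
      <target dev="vdb" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

def disk_map(xml_text):
    """Map each guest target device (vda, sda, vdb) to its RBD source."""
    root = ET.fromstring(xml_text)
    return {d.find("target").get("dev"): d.find("source").get("name")
            for d in root.iter("disk")}

print(disk_map(domain_xml))
```

Here vda is the Ceph-backed root disk, sda the config-drive cdrom, and vdb the shareable multiattach Cinder volume.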
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.217 2 DEBUG nova.virt.libvirt.vif [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=199,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-17mdigwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',i
mage_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:10:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=469a928f-d7cb-4add-9410-629caac3f6f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:67:d9:a1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.218 2 DEBUG nova.network.os_vif_util [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:67:d9:a1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.219 2 DEBUG nova.network.os_vif_util [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.220 2 DEBUG os_vif [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.222 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.223 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84c1a249-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84c1a249-c4, col_values=(('external_ids', {'iface-id': '84c1a249-c4f5-48bf-835d-bbbc75fefeb0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:d9:a1', 'vm-uuid': '469a928f-d7cb-4add-9410-629caac3f6f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 NetworkManager[44907]: <info>  [1759410623.2307] manager: (tap84c1a249-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.237 2 INFO os_vif [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4')#033[00m
Oct  2 09:10:23 np0005466031 systemd[1]: Stopping User Manager for UID 42436...
Oct  2 09:10:23 np0005466031 systemd[322458]: Activating special unit Exit the Session...
Oct  2 09:10:23 np0005466031 systemd[322458]: Stopped target Main User Target.
Oct  2 09:10:23 np0005466031 systemd[322458]: Stopped target Basic System.
Oct  2 09:10:23 np0005466031 systemd[322458]: Stopped target Paths.
Oct  2 09:10:23 np0005466031 systemd[322458]: Stopped target Sockets.
Oct  2 09:10:23 np0005466031 systemd[322458]: Stopped target Timers.
Oct  2 09:10:23 np0005466031 systemd[322458]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  2 09:10:23 np0005466031 systemd[322458]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 09:10:23 np0005466031 systemd[322458]: Closed D-Bus User Message Bus Socket.
Oct  2 09:10:23 np0005466031 systemd[322458]: Stopped Create User's Volatile Files and Directories.
Oct  2 09:10:23 np0005466031 systemd[322458]: Removed slice User Application Slice.
Oct  2 09:10:23 np0005466031 systemd[322458]: Reached target Shutdown.
Oct  2 09:10:23 np0005466031 systemd[322458]: Finished Exit the Session.
Oct  2 09:10:23 np0005466031 systemd[322458]: Reached target Exit the Session.
Oct  2 09:10:23 np0005466031 systemd[1]: user@42436.service: Deactivated successfully.
Oct  2 09:10:23 np0005466031 systemd[1]: Stopped User Manager for UID 42436.
Oct  2 09:10:23 np0005466031 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.299 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.299 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.300 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.300 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:67:d9:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.300 2 INFO nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Using config drive#033[00m
Oct  2 09:10:23 np0005466031 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct  2 09:10:23 np0005466031 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct  2 09:10:23 np0005466031 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct  2 09:10:23 np0005466031 systemd[1]: Removed slice User Slice of UID 42436.
Oct  2 09:10:23 np0005466031 podman[322722]: 2025-10-02 13:10:23.337615575 +0000 UTC m=+0.066304152 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:10:23 np0005466031 podman[322721]: 2025-10-02 13:10:23.337874182 +0000 UTC m=+0.066867338 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:10:23 np0005466031 kernel: tapf2d2820c-dd (unregistering): left promiscuous mode
Oct  2 09:10:23 np0005466031 NetworkManager[44907]: <info>  [1759410623.3663] device (tapf2d2820c-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 ovn_controller[132413]: 2025-10-02T13:10:23Z|00797|binding|INFO|Releasing lport f2d2820c-dd48-47a4-ab94-dd6136c8e314 from this chassis (sb_readonly=0)
Oct  2 09:10:23 np0005466031 ovn_controller[132413]: 2025-10-02T13:10:23Z|00798|binding|INFO|Setting lport f2d2820c-dd48-47a4-ab94-dd6136c8e314 down in Southbound
Oct  2 09:10:23 np0005466031 ovn_controller[132413]: 2025-10-02T13:10:23Z|00799|binding|INFO|Removing iface tapf2d2820c-dd ovn-installed in OVS
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.379 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:17:41 10.100.0.10'], port_security=['fa:16:3e:47:17:41 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '79964143-e208-4552-8380-513c3adf09ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4223a8cc-f72a-428d-accb-3f4210096878', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '954946ff6b204fba90f767ec67210620', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ecabba0-db02-4a2b-8a99-c435db80e5c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8308a587-4cdc-4eb3-9fc6-aab7267ec23f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=f2d2820c-dd48-47a4-ab94-dd6136c8e314) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.380 141898 INFO neutron.agent.ovn.metadata.agent [-] Port f2d2820c-dd48-47a4-ab94-dd6136c8e314 in datapath 4223a8cc-f72a-428d-accb-3f4210096878 unbound from our chassis#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.382 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4223a8cc-f72a-428d-accb-3f4210096878, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.383 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[587eea80-2fb3-43a2-8896-d28d16d98ee8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.383 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 namespace which is not needed anymore#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 kernel: tap84c1a249-c4: entered promiscuous mode
Oct  2 09:10:23 np0005466031 systemd-udevd[322789]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:10:23 np0005466031 NetworkManager[44907]: <info>  [1759410623.4053] manager: (tap84c1a249-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/361)
Oct  2 09:10:23 np0005466031 ovn_controller[132413]: 2025-10-02T13:10:23Z|00800|binding|INFO|Claiming lport 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for this chassis.
Oct  2 09:10:23 np0005466031 ovn_controller[132413]: 2025-10-02T13:10:23Z|00801|binding|INFO|84c1a249-c4f5-48bf-835d-bbbc75fefeb0: Claiming fa:16:3e:67:d9:a1 10.100.0.9
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.417 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:d9:a1 10.100.0.9'], port_security=['fa:16:3e:67:d9:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '469a928f-d7cb-4add-9410-629caac3f6f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=84c1a249-c4f5-48bf-835d-bbbc75fefeb0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:10:23 np0005466031 NetworkManager[44907]: <info>  [1759410623.4198] device (tap84c1a249-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:10:23 np0005466031 NetworkManager[44907]: <info>  [1759410623.4205] device (tap84c1a249-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:10:23 np0005466031 ovn_controller[132413]: 2025-10-02T13:10:23Z|00802|binding|INFO|Setting lport 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 ovn-installed in OVS
Oct  2 09:10:23 np0005466031 ovn_controller[132413]: 2025-10-02T13:10:23Z|00803|binding|INFO|Setting lport 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 up in Southbound
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 systemd-machined[192227]: New machine qemu-93-instance-000000c7.
Oct  2 09:10:23 np0005466031 systemd[1]: Started Virtual Machine qemu-93-instance-000000c7.
Oct  2 09:10:23 np0005466031 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000c9.scope: Deactivated successfully.
Oct  2 09:10:23 np0005466031 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000c9.scope: Consumed 13.941s CPU time.
Oct  2 09:10:23 np0005466031 systemd-machined[192227]: Machine qemu-92-instance-000000c9 terminated.
Oct  2 09:10:23 np0005466031 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[322408]: [NOTICE]   (322412) : haproxy version is 2.8.14-c23fe91
Oct  2 09:10:23 np0005466031 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[322408]: [NOTICE]   (322412) : path to executable is /usr/sbin/haproxy
Oct  2 09:10:23 np0005466031 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[322408]: [WARNING]  (322412) : Exiting Master process...
Oct  2 09:10:23 np0005466031 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[322408]: [WARNING]  (322412) : Exiting Master process...
Oct  2 09:10:23 np0005466031 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[322408]: [ALERT]    (322412) : Current worker (322414) exited with code 143 (Terminated)
Oct  2 09:10:23 np0005466031 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[322408]: [WARNING]  (322412) : All workers exited. Exiting... (0)
Oct  2 09:10:23 np0005466031 systemd[1]: libpod-14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811.scope: Deactivated successfully.
Oct  2 09:10:23 np0005466031 podman[322819]: 2025-10-02 13:10:23.533949241 +0000 UTC m=+0.046695086 container died 14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 09:10:23 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811-userdata-shm.mount: Deactivated successfully.
Oct  2 09:10:23 np0005466031 systemd[1]: var-lib-containers-storage-overlay-9c6ffcd9c5f9dbd132fbe67050fb7ce46ee811d8efc0665d4e58922c9d624e0a-merged.mount: Deactivated successfully.
Oct  2 09:10:23 np0005466031 podman[322819]: 2025-10-02 13:10:23.57623531 +0000 UTC m=+0.088981155 container cleanup 14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:10:23 np0005466031 systemd[1]: libpod-conmon-14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811.scope: Deactivated successfully.
Oct  2 09:10:23 np0005466031 NetworkManager[44907]: <info>  [1759410623.6026] manager: (tapf2d2820c-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:10:23 np0005466031 podman[322857]: 2025-10-02 13:10:23.651949001 +0000 UTC m=+0.046845371 container remove 14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.657 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd6f337-2dbd-4858-9270-ac048a53637a]: (4, ('Thu Oct  2 01:10:23 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 (14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811)\n14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811\nThu Oct  2 01:10:23 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 (14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811)\n14536c9fada58a365fe48127404de1e7011102f9bbae10ddb9fff5f92d563811\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.659 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1a532b11-d60a-48a5-9411-35b639aa107e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.660 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4223a8cc-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:23 np0005466031 kernel: tap4223a8cc-f0: left promiscuous mode
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.685 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[65ff107c-5f8d-4b92-ac6c-24aa31352c65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.729 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d4234e05-cbd1-4036-9faf-c6e1fadc694b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.729 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7b3645-1b20-4eab-8c83-4dfe5c9dbb3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.844 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8011515e-6933-4050-b533-e1c9fb75311c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845464, 'reachable_time': 40839, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322886, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.846 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.846 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[ba78d1d4-2b0a-4ec7-94c9-71a0ff211e6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.847 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 in datapath d9001b9c-bca6-4085-a954-1414269e31bc unbound from our chassis#033[00m
Oct  2 09:10:23 np0005466031 systemd[1]: run-netns-ovnmeta\x2d4223a8cc\x2df72a\x2d428d\x2daccb\x2d3f4210096878.mount: Deactivated successfully.
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.849 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.864 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[53105cb3-de94-4d23-9452-76a92c5aa7ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.889 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[290d44e2-5e99-447b-8a12-ff0f14dcc04b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.891 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[584e7be3-70c7-447e-ac05-3c2090dadcaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.920 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e9aeef-0424-4c94-a68c-61d44ec3a76a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.947 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[45a228f7-a88c-4afb-a670-d1b5140450ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835666, 'reachable_time': 15897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322927, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.965 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1854c74b-c2b9-400d-b40c-c3eada0eb91a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 835678, 'tstamp': 835678}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322946, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 835681, 'tstamp': 835681}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322946, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.967 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 nova_compute[235803]: 2025-10-02 13:10:23.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.975 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.975 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.975 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:23.976 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.123 2 INFO nova.virt.libvirt.driver [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.127 2 INFO nova.virt.libvirt.driver [-] [instance: 79964143-e208-4552-8380-513c3adf09ac] Instance destroyed successfully.#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.128 2 DEBUG nova.objects.instance [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'numa_topology' on Instance uuid 79964143-e208-4552-8380-513c3adf09ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.141 2 DEBUG nova.compute.manager [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.144 2 DEBUG oslo_concurrency.lockutils [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.145 2 DEBUG oslo_concurrency.lockutils [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquired lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.145 2 DEBUG nova.network.neutron [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.259 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.260 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.260 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.460 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410624.4602275, 469a928f-d7cb-4add-9410-629caac3f6f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.461 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.462 2 DEBUG nova.compute.manager [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.469 2 INFO nova.virt.libvirt.driver [-] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Instance running successfully.#033[00m
Oct  2 09:10:24 np0005466031 virtqemud[235323]: argument unsupported: QEMU guest agent is not configured
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.471 2 DEBUG nova.virt.libvirt.guest [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.472 2 DEBUG nova.virt.libvirt.driver [None req-46d64f6a-c29c-42a0-86a1-029260d63b92 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.487 2 DEBUG nova.compute.manager [req-f66b2cef-f7f3-4acb-82a9-16837f601b61 req-b1522be0-67af-4110-b56f-970fd150cf7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-vif-unplugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.487 2 DEBUG oslo_concurrency.lockutils [req-f66b2cef-f7f3-4acb-82a9-16837f601b61 req-b1522be0-67af-4110-b56f-970fd150cf7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.487 2 DEBUG oslo_concurrency.lockutils [req-f66b2cef-f7f3-4acb-82a9-16837f601b61 req-b1522be0-67af-4110-b56f-970fd150cf7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.488 2 DEBUG oslo_concurrency.lockutils [req-f66b2cef-f7f3-4acb-82a9-16837f601b61 req-b1522be0-67af-4110-b56f-970fd150cf7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.488 2 DEBUG nova.compute.manager [req-f66b2cef-f7f3-4acb-82a9-16837f601b61 req-b1522be0-67af-4110-b56f-970fd150cf7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] No waiting events found dispatching network-vif-unplugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.488 2 WARNING nova.compute.manager [req-f66b2cef-f7f3-4acb-82a9-16837f601b61 req-b1522be0-67af-4110-b56f-970fd150cf7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received unexpected event network-vif-unplugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 for instance with vm_state active and task_state shelving.#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.498 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.501 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.520 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.521 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410624.4609368, 469a928f-d7cb-4add-9410-629caac3f6f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.521 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] VM Started (Lifecycle Event)#033[00m
Oct  2 09:10:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:24.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.538 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.541 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.559 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.613 2 DEBUG nova.compute.manager [req-b5c05086-2f74-4940-bffb-22cb8966390e req-4f72de13-7f27-45e8-9c1b-bbdc8dc65515 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.614 2 DEBUG oslo_concurrency.lockutils [req-b5c05086-2f74-4940-bffb-22cb8966390e req-4f72de13-7f27-45e8-9c1b-bbdc8dc65515 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.614 2 DEBUG oslo_concurrency.lockutils [req-b5c05086-2f74-4940-bffb-22cb8966390e req-4f72de13-7f27-45e8-9c1b-bbdc8dc65515 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.614 2 DEBUG oslo_concurrency.lockutils [req-b5c05086-2f74-4940-bffb-22cb8966390e req-4f72de13-7f27-45e8-9c1b-bbdc8dc65515 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.614 2 DEBUG nova.compute.manager [req-b5c05086-2f74-4940-bffb-22cb8966390e req-4f72de13-7f27-45e8-9c1b-bbdc8dc65515 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] No waiting events found dispatching network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.614 2 WARNING nova.compute.manager [req-b5c05086-2f74-4940-bffb-22cb8966390e req-4f72de13-7f27-45e8-9c1b-bbdc8dc65515 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received unexpected event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for instance with vm_state active and task_state resize_finish.#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.824 2 DEBUG nova.network.neutron [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updated VIF entry in instance network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.825 2 DEBUG nova.network.neutron [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating instance_info_cache with network_info: [{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:24 np0005466031 nova_compute[235803]: 2025-10-02 13:10:24.859 2 DEBUG oslo_concurrency.lockutils [req-a6a144b1-da72-4e9b-9e59-d6ff4053cdf9 req-c36345d8-bd85-4fa7-96ec-09d75edd7632 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:24.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:25.878 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:25.879 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:25.879 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:26 np0005466031 nova_compute[235803]: 2025-10-02 13:10:26.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:26.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:26 np0005466031 nova_compute[235803]: 2025-10-02 13:10:26.572 2 DEBUG nova.compute.manager [req-a80d10db-744c-4cb7-b3a4-8ef4e30c54c3 req-d781afca-a8ee-4198-bf82-e387bde32d6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:26 np0005466031 nova_compute[235803]: 2025-10-02 13:10:26.572 2 DEBUG oslo_concurrency.lockutils [req-a80d10db-744c-4cb7-b3a4-8ef4e30c54c3 req-d781afca-a8ee-4198-bf82-e387bde32d6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "79964143-e208-4552-8380-513c3adf09ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:26 np0005466031 nova_compute[235803]: 2025-10-02 13:10:26.573 2 DEBUG oslo_concurrency.lockutils [req-a80d10db-744c-4cb7-b3a4-8ef4e30c54c3 req-d781afca-a8ee-4198-bf82-e387bde32d6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:26 np0005466031 nova_compute[235803]: 2025-10-02 13:10:26.573 2 DEBUG oslo_concurrency.lockutils [req-a80d10db-744c-4cb7-b3a4-8ef4e30c54c3 req-d781afca-a8ee-4198-bf82-e387bde32d6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:26 np0005466031 nova_compute[235803]: 2025-10-02 13:10:26.573 2 DEBUG nova.compute.manager [req-a80d10db-744c-4cb7-b3a4-8ef4e30c54c3 req-d781afca-a8ee-4198-bf82-e387bde32d6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] No waiting events found dispatching network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:26 np0005466031 nova_compute[235803]: 2025-10-02 13:10:26.573 2 WARNING nova.compute.manager [req-a80d10db-744c-4cb7-b3a4-8ef4e30c54c3 req-d781afca-a8ee-4198-bf82-e387bde32d6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received unexpected event network-vif-plugged-f2d2820c-dd48-47a4-ab94-dd6136c8e314 for instance with vm_state active and task_state shelving.#033[00m
Oct  2 09:10:26 np0005466031 nova_compute[235803]: 2025-10-02 13:10:26.685 2 DEBUG nova.compute.manager [req-abee3dc2-98a9-427c-98fa-a0acb8b9ccbe req-8b147fcc-e2a6-4a3e-b0df-26317c060a6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:26 np0005466031 nova_compute[235803]: 2025-10-02 13:10:26.685 2 DEBUG oslo_concurrency.lockutils [req-abee3dc2-98a9-427c-98fa-a0acb8b9ccbe req-8b147fcc-e2a6-4a3e-b0df-26317c060a6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:26 np0005466031 nova_compute[235803]: 2025-10-02 13:10:26.686 2 DEBUG oslo_concurrency.lockutils [req-abee3dc2-98a9-427c-98fa-a0acb8b9ccbe req-8b147fcc-e2a6-4a3e-b0df-26317c060a6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:26 np0005466031 nova_compute[235803]: 2025-10-02 13:10:26.686 2 DEBUG oslo_concurrency.lockutils [req-abee3dc2-98a9-427c-98fa-a0acb8b9ccbe req-8b147fcc-e2a6-4a3e-b0df-26317c060a6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:26 np0005466031 nova_compute[235803]: 2025-10-02 13:10:26.686 2 DEBUG nova.compute.manager [req-abee3dc2-98a9-427c-98fa-a0acb8b9ccbe req-8b147fcc-e2a6-4a3e-b0df-26317c060a6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] No waiting events found dispatching network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:26 np0005466031 nova_compute[235803]: 2025-10-02 13:10:26.686 2 WARNING nova.compute.manager [req-abee3dc2-98a9-427c-98fa-a0acb8b9ccbe req-8b147fcc-e2a6-4a3e-b0df-26317c060a6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received unexpected event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for instance with vm_state resized and task_state None.#033[00m
Oct  2 09:10:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:26.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.295 2 DEBUG nova.network.neutron [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updating instance_info_cache with network_info: [{"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.328 2 DEBUG oslo_concurrency.lockutils [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Releasing lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.331 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Updating instance_info_cache with network_info: [{"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.368 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.368 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.369 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.369 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.369 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.416 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.417 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.417 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.417 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.417 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:10:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/293687028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.861 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.944 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.944 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.944 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.947 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.947 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.950 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.950 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.953 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:27 np0005466031 nova_compute[235803]: 2025-10-02 13:10:27.953 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.114 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.115 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3518MB free_disk=20.73917007446289GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.116 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.116 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.201 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Applying migration context for instance 469a928f-d7cb-4add-9410-629caac3f6f8 as it has an incoming, in-progress migration 316381c1-6621-421f-914a-993647bbe4a1. Migration status is finished _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.203 2 INFO nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating resource usage from migration 316381c1-6621-421f-914a-993647bbe4a1#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.237 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 61bad754-8d82-465b-8545-25d700a6e146 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.238 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance f07a4381-2291-4a58-a2ca-b04071e65a0a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.238 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 79964143-e208-4552-8380-513c3adf09ac actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.238 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 469a928f-d7cb-4add-9410-629caac3f6f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.239 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.239 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1088MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.268 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.285 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.286 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.310 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.331 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.404 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:28.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:10:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2272454097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.840 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.845 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.861 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.888 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:10:28 np0005466031 nova_compute[235803]: 2025-10-02 13:10:28.888 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:28.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.363186) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629363227, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 889, "num_deletes": 251, "total_data_size": 1612680, "memory_usage": 1638032, "flush_reason": "Manual Compaction"}
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629397420, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 1052493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70675, "largest_seqno": 71559, "table_properties": {"data_size": 1048384, "index_size": 1760, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9722, "raw_average_key_size": 19, "raw_value_size": 1039964, "raw_average_value_size": 2135, "num_data_blocks": 77, "num_entries": 487, "num_filter_entries": 487, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410575, "oldest_key_time": 1759410575, "file_creation_time": 1759410629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 34289 microseconds, and 3656 cpu microseconds.
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.397473) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 1052493 bytes OK
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.397493) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.405374) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.405415) EVENT_LOG_v1 {"time_micros": 1759410629405406, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.405438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1608114, prev total WAL file size 1608114, number of live WAL files 2.
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.406221) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(1027KB)], [144(10MB)]
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629406276, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 12164882, "oldest_snapshot_seqno": -1}
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 9047 keys, 10295519 bytes, temperature: kUnknown
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629496681, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 10295519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10239050, "index_size": 32740, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22661, "raw_key_size": 239967, "raw_average_key_size": 26, "raw_value_size": 10082312, "raw_average_value_size": 1114, "num_data_blocks": 1237, "num_entries": 9047, "num_filter_entries": 9047, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759410629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.496951) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10295519 bytes
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.503078) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.4 rd, 113.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.6 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(21.3) write-amplify(9.8) OK, records in: 9567, records dropped: 520 output_compression: NoCompression
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.503121) EVENT_LOG_v1 {"time_micros": 1759410629503106, "job": 92, "event": "compaction_finished", "compaction_time_micros": 90483, "compaction_time_cpu_micros": 24906, "output_level": 6, "num_output_files": 1, "total_output_size": 10295519, "num_input_records": 9567, "num_output_records": 9047, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629503521, "job": 92, "event": "table_file_deletion", "file_number": 146}
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629505510, "job": 92, "event": "table_file_deletion", "file_number": 144}
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.406115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.505593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.505598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.505600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.505601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:10:29.505602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.005 2 INFO nova.virt.libvirt.driver [-] [instance: 79964143-e208-4552-8380-513c3adf09ac] Instance destroyed successfully.#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.006 2 DEBUG nova.objects.instance [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'resources' on Instance uuid 79964143-e208-4552-8380-513c3adf09ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.030 2 DEBUG nova.virt.libvirt.vif [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-838477094',display_name='tempest-TestShelveInstance-server-838477094',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-838477094',id=201,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVidC3ivoi0/BSVTNr+Q25H86pwCavGSEVSNuXxP+lg72xfVuGJwhfG7zkX1TfYRkx+B8B4fFFME+SiQB5nTIMxK1PVVykpxHJMksdNCrSHtdTo6A7G7KE0+4LlyrKslw==',key_name='tempest-TestShelveInstance-2040419602',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:59Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-v1nr892j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member'},tags=<?>,task_state='shelving',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:10:00Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=79964143-e208-4552-8380-513c3adf09ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": 
"192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.031 2 DEBUG nova.network.os_vif_util [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.032 2 DEBUG nova.network.os_vif_util [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.033 2 DEBUG os_vif [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2d2820c-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.040 2 INFO os_vif [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:17:41,bridge_name='br-int',has_traffic_filtering=True,id=f2d2820c-dd48-47a4-ab94-dd6136c8e314,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2d2820c-dd')#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.106 2 DEBUG nova.compute.manager [req-d22b07cb-ce66-478c-af09-1cfddb2d40d4 req-6d0eee47-efba-4794-9401-26ac4b0d4a96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Received event network-changed-f2d2820c-dd48-47a4-ab94-dd6136c8e314 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.107 2 DEBUG nova.compute.manager [req-d22b07cb-ce66-478c-af09-1cfddb2d40d4 req-6d0eee47-efba-4794-9401-26ac4b0d4a96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Refreshing instance network info cache due to event network-changed-f2d2820c-dd48-47a4-ab94-dd6136c8e314. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.107 2 DEBUG oslo_concurrency.lockutils [req-d22b07cb-ce66-478c-af09-1cfddb2d40d4 req-6d0eee47-efba-4794-9401-26ac4b0d4a96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.107 2 DEBUG oslo_concurrency.lockutils [req-d22b07cb-ce66-478c-af09-1cfddb2d40d4 req-6d0eee47-efba-4794-9401-26ac4b0d4a96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:30 np0005466031 nova_compute[235803]: 2025-10-02 13:10:30.107 2 DEBUG nova.network.neutron [req-d22b07cb-ce66-478c-af09-1cfddb2d40d4 req-6d0eee47-efba-4794-9401-26ac4b0d4a96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Refreshing network info cache for port f2d2820c-dd48-47a4-ab94-dd6136c8e314 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:10:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:30.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:30.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:31 np0005466031 nova_compute[235803]: 2025-10-02 13:10:31.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:32 np0005466031 nova_compute[235803]: 2025-10-02 13:10:32.155 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:32 np0005466031 nova_compute[235803]: 2025-10-02 13:10:32.155 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:32 np0005466031 nova_compute[235803]: 2025-10-02 13:10:32.155 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:10:32 np0005466031 nova_compute[235803]: 2025-10-02 13:10:32.432 2 DEBUG nova.network.neutron [req-d22b07cb-ce66-478c-af09-1cfddb2d40d4 req-6d0eee47-efba-4794-9401-26ac4b0d4a96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updated VIF entry in instance network info cache for port f2d2820c-dd48-47a4-ab94-dd6136c8e314. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:10:32 np0005466031 nova_compute[235803]: 2025-10-02 13:10:32.433 2 DEBUG nova.network.neutron [req-d22b07cb-ce66-478c-af09-1cfddb2d40d4 req-6d0eee47-efba-4794-9401-26ac4b0d4a96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Updating instance_info_cache with network_info: [{"id": "f2d2820c-dd48-47a4-ab94-dd6136c8e314", "address": "fa:16:3e:47:17:41", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": null, "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapf2d2820c-dd", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:32 np0005466031 nova_compute[235803]: 2025-10-02 13:10:32.457 2 DEBUG oslo_concurrency.lockutils [req-d22b07cb-ce66-478c-af09-1cfddb2d40d4 req-6d0eee47-efba-4794-9401-26ac4b0d4a96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-79964143-e208-4552-8380-513c3adf09ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:32.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:32.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e367 e367: 3 total, 3 up, 3 in
Oct  2 09:10:33 np0005466031 nova_compute[235803]: 2025-10-02 13:10:33.989 2 INFO nova.virt.libvirt.driver [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Deleting instance files /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac_del#033[00m
Oct  2 09:10:33 np0005466031 nova_compute[235803]: 2025-10-02 13:10:33.992 2 INFO nova.virt.libvirt.driver [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: 79964143-e208-4552-8380-513c3adf09ac] Deletion of /var/lib/nova/instances/79964143-e208-4552-8380-513c3adf09ac_del complete#033[00m
Oct  2 09:10:34 np0005466031 nova_compute[235803]: 2025-10-02 13:10:34.484 2 INFO nova.scheduler.client.report [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Deleted allocations for instance 79964143-e208-4552-8380-513c3adf09ac#033[00m
Oct  2 09:10:34 np0005466031 nova_compute[235803]: 2025-10-02 13:10:34.525 2 DEBUG oslo_concurrency.lockutils [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:34 np0005466031 nova_compute[235803]: 2025-10-02 13:10:34.525 2 DEBUG oslo_concurrency.lockutils [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:34.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:34 np0005466031 nova_compute[235803]: 2025-10-02 13:10:34.597 2 DEBUG oslo_concurrency.processutils [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:10:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:34.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:10:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:10:35 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1219523401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:10:35 np0005466031 nova_compute[235803]: 2025-10-02 13:10:35.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:35 np0005466031 nova_compute[235803]: 2025-10-02 13:10:35.048 2 DEBUG oslo_concurrency.processutils [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:35 np0005466031 nova_compute[235803]: 2025-10-02 13:10:35.055 2 DEBUG nova.compute.provider_tree [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:10:35 np0005466031 nova_compute[235803]: 2025-10-02 13:10:35.080 2 DEBUG nova.scheduler.client.report [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:10:35 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #58. Immutable memtables: 14.
Oct  2 09:10:35 np0005466031 nova_compute[235803]: 2025-10-02 13:10:35.130 2 DEBUG oslo_concurrency.lockutils [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:35 np0005466031 nova_compute[235803]: 2025-10-02 13:10:35.216 2 DEBUG oslo_concurrency.lockutils [None req-9232b72d-6330-4ef9-b7d0-6fc21fe2df29 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "79964143-e208-4552-8380-513c3adf09ac" "released" by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" :: held 14.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:36 np0005466031 nova_compute[235803]: 2025-10-02 13:10:36.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:36.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:36.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:37 np0005466031 ovn_controller[132413]: 2025-10-02T13:10:37Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:d9:a1 10.100.0.9
Oct  2 09:10:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:38.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:10:38 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1461054822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:10:38 np0005466031 nova_compute[235803]: 2025-10-02 13:10:38.618 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410623.6161819, 79964143-e208-4552-8380-513c3adf09ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:10:38 np0005466031 nova_compute[235803]: 2025-10-02 13:10:38.619 2 INFO nova.compute.manager [-] [instance: 79964143-e208-4552-8380-513c3adf09ac] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:10:38 np0005466031 nova_compute[235803]: 2025-10-02 13:10:38.639 2 DEBUG nova.compute.manager [None req-b513b78d-89d4-4c40-8523-cb73b5323e7e - - - - - -] [instance: 79964143-e208-4552-8380-513c3adf09ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:10:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:38.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:40 np0005466031 nova_compute[235803]: 2025-10-02 13:10:40.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e368 e368: 3 total, 3 up, 3 in
Oct  2 09:10:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:40.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:40.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:41 np0005466031 nova_compute[235803]: 2025-10-02 13:10:41.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:41 np0005466031 nova_compute[235803]: 2025-10-02 13:10:41.192 2 DEBUG nova.compute.manager [req-111caa18-cc4e-44cb-b6a9-45eb44cc4373 req-3e29bfec-bcd5-489c-a85b-b541e959c15e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:41 np0005466031 nova_compute[235803]: 2025-10-02 13:10:41.192 2 DEBUG nova.compute.manager [req-111caa18-cc4e-44cb-b6a9-45eb44cc4373 req-3e29bfec-bcd5-489c-a85b-b541e959c15e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing instance network info cache due to event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:10:41 np0005466031 nova_compute[235803]: 2025-10-02 13:10:41.192 2 DEBUG oslo_concurrency.lockutils [req-111caa18-cc4e-44cb-b6a9-45eb44cc4373 req-3e29bfec-bcd5-489c-a85b-b541e959c15e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:41 np0005466031 nova_compute[235803]: 2025-10-02 13:10:41.192 2 DEBUG oslo_concurrency.lockutils [req-111caa18-cc4e-44cb-b6a9-45eb44cc4373 req-3e29bfec-bcd5-489c-a85b-b541e959c15e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:41 np0005466031 nova_compute[235803]: 2025-10-02 13:10:41.193 2 DEBUG nova.network.neutron [req-111caa18-cc4e-44cb-b6a9-45eb44cc4373 req-3e29bfec-bcd5-489c-a85b-b541e959c15e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:10:41 np0005466031 nova_compute[235803]: 2025-10-02 13:10:41.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:42 np0005466031 nova_compute[235803]: 2025-10-02 13:10:42.448 2 DEBUG nova.network.neutron [req-111caa18-cc4e-44cb-b6a9-45eb44cc4373 req-3e29bfec-bcd5-489c-a85b-b541e959c15e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updated VIF entry in instance network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:10:42 np0005466031 nova_compute[235803]: 2025-10-02 13:10:42.449 2 DEBUG nova.network.neutron [req-111caa18-cc4e-44cb-b6a9-45eb44cc4373 req-3e29bfec-bcd5-489c-a85b-b541e959c15e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating instance_info_cache with network_info: [{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:42 np0005466031 nova_compute[235803]: 2025-10-02 13:10:42.469 2 DEBUG oslo_concurrency.lockutils [req-111caa18-cc4e-44cb-b6a9-45eb44cc4373 req-3e29bfec-bcd5-489c-a85b-b541e959c15e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:42.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:42.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:10:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:44.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:10:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:44.751 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:10:44 np0005466031 nova_compute[235803]: 2025-10-02 13:10:44.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:44 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:44.752 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:10:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:44.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:45 np0005466031 nova_compute[235803]: 2025-10-02 13:10:45.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:45 np0005466031 podman[323105]: 2025-10-02 13:10:45.668419637 +0000 UTC m=+0.093985678 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  2 09:10:45 np0005466031 podman[323106]: 2025-10-02 13:10:45.680206076 +0000 UTC m=+0.105618732 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:10:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:46 np0005466031 nova_compute[235803]: 2025-10-02 13:10:46.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:46.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:46.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:10:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:48.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:10:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:48.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:50 np0005466031 nova_compute[235803]: 2025-10-02 13:10:50.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:50 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:10:50 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:10:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:50.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:50.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 09:10:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 09:10:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 09:10:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:10:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:10:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:10:51 np0005466031 nova_compute[235803]: 2025-10-02 13:10:51.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:10:51.753 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:52.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:52.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:53 np0005466031 podman[323410]: 2025-10-02 13:10:53.642766714 +0000 UTC m=+0.061261075 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct  2 09:10:53 np0005466031 podman[323406]: 2025-10-02 13:10:53.65445172 +0000 UTC m=+0.078042648 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:10:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:54.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:54.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:55 np0005466031 nova_compute[235803]: 2025-10-02 13:10:55.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:56 np0005466031 nova_compute[235803]: 2025-10-02 13:10:56.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:56.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:56.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:10:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:10:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:10:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:58.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:10:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:10:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:58.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:59 np0005466031 ovn_controller[132413]: 2025-10-02T13:10:59Z|00804|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct  2 09:10:59 np0005466031 nova_compute[235803]: 2025-10-02 13:10:59.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:00 np0005466031 nova_compute[235803]: 2025-10-02 13:11:00.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:00.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:00.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:01 np0005466031 nova_compute[235803]: 2025-10-02 13:11:01.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:02.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:02.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e369 e369: 3 total, 3 up, 3 in
Oct  2 09:11:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:04.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:04 np0005466031 nova_compute[235803]: 2025-10-02 13:11:04.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:04.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:05 np0005466031 nova_compute[235803]: 2025-10-02 13:11:05.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:06 np0005466031 nova_compute[235803]: 2025-10-02 13:11:06.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:06.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:06.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:08.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:08.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:10 np0005466031 nova_compute[235803]: 2025-10-02 13:11:10.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:10 np0005466031 nova_compute[235803]: 2025-10-02 13:11:10.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:10.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:10.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:11 np0005466031 nova_compute[235803]: 2025-10-02 13:11:11.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:12.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:12 np0005466031 nova_compute[235803]: 2025-10-02 13:11:12.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:12.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:14.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:14.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:15 np0005466031 nova_compute[235803]: 2025-10-02 13:11:15.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:16 np0005466031 nova_compute[235803]: 2025-10-02 13:11:16.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:16.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:16 np0005466031 podman[323605]: 2025-10-02 13:11:16.656460223 +0000 UTC m=+0.077475992 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:11:16 np0005466031 podman[323606]: 2025-10-02 13:11:16.697822524 +0000 UTC m=+0.116866486 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:11:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:16.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:18.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:18 np0005466031 nova_compute[235803]: 2025-10-02 13:11:18.630 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e370 e370: 3 total, 3 up, 3 in
Oct  2 09:11:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:18.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:20 np0005466031 nova_compute[235803]: 2025-10-02 13:11:20.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:20.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:20.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:21 np0005466031 nova_compute[235803]: 2025-10-02 13:11:21.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:22.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:22 np0005466031 nova_compute[235803]: 2025-10-02 13:11:22.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:22.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:23 np0005466031 nova_compute[235803]: 2025-10-02 13:11:23.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:24.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:24 np0005466031 podman[323656]: 2025-10-02 13:11:24.625089906 +0000 UTC m=+0.044059499 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:11:24 np0005466031 podman[323655]: 2025-10-02 13:11:24.629271587 +0000 UTC m=+0.050589378 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:11:24 np0005466031 nova_compute[235803]: 2025-10-02 13:11:24.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:24 np0005466031 nova_compute[235803]: 2025-10-02 13:11:24.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:11:24 np0005466031 nova_compute[235803]: 2025-10-02 13:11:24.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:11:24 np0005466031 nova_compute[235803]: 2025-10-02 13:11:24.921 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:11:24 np0005466031 nova_compute[235803]: 2025-10-02 13:11:24.921 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:11:24 np0005466031 nova_compute[235803]: 2025-10-02 13:11:24.921 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:11:24 np0005466031 nova_compute[235803]: 2025-10-02 13:11:24.922 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 61bad754-8d82-465b-8545-25d700a6e146 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:11:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:24.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:25 np0005466031 nova_compute[235803]: 2025-10-02 13:11:25.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:11:25.879 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:11:25.879 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:11:25.880 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:26 np0005466031 nova_compute[235803]: 2025-10-02 13:11:26.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e371 e371: 3 total, 3 up, 3 in
Oct  2 09:11:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:26.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:26 np0005466031 nova_compute[235803]: 2025-10-02 13:11:26.752 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Updating instance_info_cache with network_info: [{"id": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "address": "fa:16:3e:53:a4:0c", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f77e4ed-6c", "ovs_interfaceid": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:26.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:28.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.692 2 DEBUG nova.compute.manager [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.692 2 DEBUG nova.compute.manager [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing instance network info cache due to event network-changed-84c1a249-c4f5-48bf-835d-bbbc75fefeb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.693 2 DEBUG oslo_concurrency.lockutils [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.693 2 DEBUG oslo_concurrency.lockutils [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.693 2 DEBUG nova.network.neutron [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Refreshing network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.790 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-61bad754-8d82-465b-8545-25d700a6e146" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.790 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.790 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.790 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.830 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.830 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.831 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.831 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:11:28 np0005466031 nova_compute[235803]: 2025-10-02 13:11:28.831 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:28.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:11:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/351148488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.265 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.395 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.395 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.396 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.400 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.400 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.404 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.405 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.621 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.622 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3492MB free_disk=20.79693603515625GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.622 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.622 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.728 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 61bad754-8d82-465b-8545-25d700a6e146 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.729 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance f07a4381-2291-4a58-a2ca-b04071e65a0a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.729 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 469a928f-d7cb-4add-9410-629caac3f6f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.729 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.729 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.845 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.873 2 DEBUG oslo_concurrency.lockutils [None req-4f14dd75-22ed-4325-b67c-ab32b140f28c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.874 2 DEBUG oslo_concurrency.lockutils [None req-4f14dd75-22ed-4325-b67c-ab32b140f28c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:29 np0005466031 nova_compute[235803]: 2025-10-02 13:11:29.912 2 INFO nova.compute.manager [None req-4f14dd75-22ed-4325-b67c-ab32b140f28c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Detaching volume 8347daf9-f32f-4c50-b89e-df9e913044db#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:11:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2587128955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.298 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.304 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.323 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.359 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.359 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.463 2 INFO nova.virt.block_device [None req-4f14dd75-22ed-4325-b67c-ab32b140f28c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Attempting to driver detach volume 8347daf9-f32f-4c50-b89e-df9e913044db from mountpoint /dev/vdb#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.477 2 DEBUG nova.virt.libvirt.driver [None req-4f14dd75-22ed-4325-b67c-ab32b140f28c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Attempting to detach device vdb from instance 469a928f-d7cb-4add-9410-629caac3f6f8 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.478 2 DEBUG nova.virt.libvirt.guest [None req-4f14dd75-22ed-4325-b67c-ab32b140f28c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-8347daf9-f32f-4c50-b89e-df9e913044db">
Oct  2 09:11:30 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  <serial>8347daf9-f32f-4c50-b89e-df9e913044db</serial>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  <shareable/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:11:30 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.487 2 INFO nova.virt.libvirt.driver [None req-4f14dd75-22ed-4325-b67c-ab32b140f28c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully detached device vdb from instance 469a928f-d7cb-4add-9410-629caac3f6f8 from the persistent domain config.#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.488 2 DEBUG nova.virt.libvirt.driver [None req-4f14dd75-22ed-4325-b67c-ab32b140f28c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 469a928f-d7cb-4add-9410-629caac3f6f8 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.488 2 DEBUG nova.virt.libvirt.guest [None req-4f14dd75-22ed-4325-b67c-ab32b140f28c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-8347daf9-f32f-4c50-b89e-df9e913044db">
Oct  2 09:11:30 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  <serial>8347daf9-f32f-4c50-b89e-df9e913044db</serial>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  <shareable/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct  2 09:11:30 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:11:30 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:11:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:30.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.605 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Received event <DeviceRemovedEvent: 1759410690.6055188, 469a928f-d7cb-4add-9410-629caac3f6f8 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.608 2 DEBUG nova.virt.libvirt.driver [None req-4f14dd75-22ed-4325-b67c-ab32b140f28c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 469a928f-d7cb-4add-9410-629caac3f6f8 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.610 2 INFO nova.virt.libvirt.driver [None req-4f14dd75-22ed-4325-b67c-ab32b140f28c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully detached device vdb from instance 469a928f-d7cb-4add-9410-629caac3f6f8 from the live domain config.#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:30 np0005466031 nova_compute[235803]: 2025-10-02 13:11:30.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:30.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:31 np0005466031 nova_compute[235803]: 2025-10-02 13:11:31.176 2 DEBUG nova.network.neutron [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updated VIF entry in instance network info cache for port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:11:31 np0005466031 nova_compute[235803]: 2025-10-02 13:11:31.176 2 DEBUG nova.network.neutron [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating instance_info_cache with network_info: [{"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:31 np0005466031 nova_compute[235803]: 2025-10-02 13:11:31.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:31 np0005466031 nova_compute[235803]: 2025-10-02 13:11:31.290 2 DEBUG nova.objects.instance [None req-4f14dd75-22ed-4325-b67c-ab32b140f28c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid 469a928f-d7cb-4add-9410-629caac3f6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:11:31 np0005466031 nova_compute[235803]: 2025-10-02 13:11:31.303 2 DEBUG oslo_concurrency.lockutils [req-abebeba6-512f-4e72-9e40-76d3defe7f55 req-478354d3-ce27-44fe-a8b8-3ba5b1022474 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-469a928f-d7cb-4add-9410-629caac3f6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:11:31 np0005466031 nova_compute[235803]: 2025-10-02 13:11:31.383 2 DEBUG oslo_concurrency.lockutils [None req-4f14dd75-22ed-4325-b67c-ab32b140f28c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:31 np0005466031 nova_compute[235803]: 2025-10-02 13:11:31.682 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:31 np0005466031 nova_compute[235803]: 2025-10-02 13:11:31.682 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:11:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:32.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:32.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:34.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:34.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:35 np0005466031 nova_compute[235803]: 2025-10-02 13:11:35.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:36 np0005466031 nova_compute[235803]: 2025-10-02 13:11:36.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:36.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:37.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:38.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:39.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:40 np0005466031 nova_compute[235803]: 2025-10-02 13:11:40.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:40.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:40 np0005466031 ovn_controller[132413]: 2025-10-02T13:11:40Z|00805|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct  2 09:11:40 np0005466031 nova_compute[235803]: 2025-10-02 13:11:40.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:41.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:41 np0005466031 nova_compute[235803]: 2025-10-02 13:11:41.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:42.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:43.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:44.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:45.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:45 np0005466031 nova_compute[235803]: 2025-10-02 13:11:45.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:46 np0005466031 nova_compute[235803]: 2025-10-02 13:11:46.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:11:46.335 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:11:46 np0005466031 nova_compute[235803]: 2025-10-02 13:11:46.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:46 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:11:46.336 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:11:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:46.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:47.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:47 np0005466031 podman[323803]: 2025-10-02 13:11:47.617336592 +0000 UTC m=+0.049463344 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct  2 09:11:47 np0005466031 podman[323804]: 2025-10-02 13:11:47.650425085 +0000 UTC m=+0.082153696 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Oct  2 09:11:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:48.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:49.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:50 np0005466031 nova_compute[235803]: 2025-10-02 13:11:50.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:50.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:51.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:51 np0005466031 nova_compute[235803]: 2025-10-02 13:11:51.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:11:51.338 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:51 np0005466031 ovn_controller[132413]: 2025-10-02T13:11:51Z|00806|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct  2 09:11:51 np0005466031 nova_compute[235803]: 2025-10-02 13:11:51.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:52.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:53.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:53 np0005466031 nova_compute[235803]: 2025-10-02 13:11:53.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:53 np0005466031 nova_compute[235803]: 2025-10-02 13:11:53.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:11:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:54.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:55.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:55 np0005466031 nova_compute[235803]: 2025-10-02 13:11:55.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:55 np0005466031 podman[323899]: 2025-10-02 13:11:55.621366094 +0000 UTC m=+0.054017106 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:11:55 np0005466031 podman[323900]: 2025-10-02 13:11:55.664480696 +0000 UTC m=+0.092319919 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:11:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:56 np0005466031 nova_compute[235803]: 2025-10-02 13:11:56.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:56.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:11:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:57.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:11:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:58.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:11:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:11:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:59.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:11:59 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:11:59 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:11:59 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:12:00 np0005466031 nova_compute[235803]: 2025-10-02 13:12:00.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:00.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:00 np0005466031 nova_compute[235803]: 2025-10-02 13:12:00.687 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:00 np0005466031 nova_compute[235803]: 2025-10-02 13:12:00.687 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:12:00 np0005466031 nova_compute[235803]: 2025-10-02 13:12:00.730 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:12:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:01.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:01 np0005466031 nova_compute[235803]: 2025-10-02 13:12:01.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:12:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:02.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:12:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:03.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:04.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:05.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:05 np0005466031 nova_compute[235803]: 2025-10-02 13:12:05.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:12:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2639919700' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:12:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:12:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2639919700' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:12:05 np0005466031 nova_compute[235803]: 2025-10-02 13:12:05.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:12:05 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:12:06 np0005466031 nova_compute[235803]: 2025-10-02 13:12:06.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:06.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:07.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:08.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:09.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:10 np0005466031 nova_compute[235803]: 2025-10-02 13:12:10.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:10.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:12:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:11.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:12:11 np0005466031 nova_compute[235803]: 2025-10-02 13:12:11.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:11 np0005466031 nova_compute[235803]: 2025-10-02 13:12:11.631 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:11 np0005466031 nova_compute[235803]: 2025-10-02 13:12:11.632 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:11 np0005466031 nova_compute[235803]: 2025-10-02 13:12:11.662 2 DEBUG nova.compute.manager [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:12:11 np0005466031 nova_compute[235803]: 2025-10-02 13:12:11.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:11 np0005466031 nova_compute[235803]: 2025-10-02 13:12:11.785 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:11 np0005466031 nova_compute[235803]: 2025-10-02 13:12:11.785 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:11 np0005466031 nova_compute[235803]: 2025-10-02 13:12:11.797 2 DEBUG nova.virt.hardware [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:12:11 np0005466031 nova_compute[235803]: 2025-10-02 13:12:11.798 2 INFO nova.compute.claims [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:12:11 np0005466031 nova_compute[235803]: 2025-10-02 13:12:11.982 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e372 e372: 3 total, 3 up, 3 in
Oct  2 09:12:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:12:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3655608364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.523 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.530 2 DEBUG nova.compute.provider_tree [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.555 2 DEBUG nova.scheduler.client.report [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.579 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.580 2 DEBUG nova.compute.manager [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.625 2 DEBUG nova.compute.manager [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.625 2 DEBUG nova.network.neutron [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.645 2 INFO nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:12:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:12:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:12.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.663 2 DEBUG nova.compute.manager [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.771 2 DEBUG nova.compute.manager [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.772 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.773 2 INFO nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Creating image(s)#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.794 2 DEBUG nova.storage.rbd_utils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.816 2 DEBUG nova.storage.rbd_utils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.836 2 DEBUG nova.storage.rbd_utils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.839 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.912 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.913 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.913 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.914 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.933 2 DEBUG nova.storage.rbd_utils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:12:12 np0005466031 nova_compute[235803]: 2025-10-02 13:12:12.937 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:13.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:13 np0005466031 nova_compute[235803]: 2025-10-02 13:12:13.367 2 DEBUG nova.policy [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '362b536431b64b15b67740060af57e9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e911de934ec043d1bd942c8aed562d04', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:12:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e373 e373: 3 total, 3 up, 3 in
Oct  2 09:12:13 np0005466031 nova_compute[235803]: 2025-10-02 13:12:13.408 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:13 np0005466031 nova_compute[235803]: 2025-10-02 13:12:13.474 2 DEBUG nova.storage.rbd_utils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] resizing rbd image c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:12:13 np0005466031 nova_compute[235803]: 2025-10-02 13:12:13.785 2 DEBUG nova.objects.instance [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'migration_context' on Instance uuid c0e1f22b-20ca-45ef-82c8-c6b43a890782 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:12:13 np0005466031 nova_compute[235803]: 2025-10-02 13:12:13.799 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:12:13 np0005466031 nova_compute[235803]: 2025-10-02 13:12:13.800 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Ensure instance console log exists: /var/lib/nova/instances/c0e1f22b-20ca-45ef-82c8-c6b43a890782/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:12:13 np0005466031 nova_compute[235803]: 2025-10-02 13:12:13.800 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:13 np0005466031 nova_compute[235803]: 2025-10-02 13:12:13.801 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:13 np0005466031 nova_compute[235803]: 2025-10-02 13:12:13.801 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:14.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:14 np0005466031 nova_compute[235803]: 2025-10-02 13:12:14.679 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:14 np0005466031 nova_compute[235803]: 2025-10-02 13:12:14.931 2 DEBUG nova.network.neutron [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Successfully created port: aa9c6b4c-eb31-4032-9748-d72a0880d5ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:12:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:15.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:15 np0005466031 nova_compute[235803]: 2025-10-02 13:12:15.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:15 np0005466031 nova_compute[235803]: 2025-10-02 13:12:15.991 2 DEBUG nova.network.neutron [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Successfully updated port: aa9c6b4c-eb31-4032-9748-d72a0880d5ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:12:16 np0005466031 nova_compute[235803]: 2025-10-02 13:12:16.032 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "refresh_cache-c0e1f22b-20ca-45ef-82c8-c6b43a890782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:12:16 np0005466031 nova_compute[235803]: 2025-10-02 13:12:16.032 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquired lock "refresh_cache-c0e1f22b-20ca-45ef-82c8-c6b43a890782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:12:16 np0005466031 nova_compute[235803]: 2025-10-02 13:12:16.032 2 DEBUG nova.network.neutron [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:12:16 np0005466031 nova_compute[235803]: 2025-10-02 13:12:16.107 2 DEBUG nova.compute.manager [req-60b93db2-af84-43e1-91c3-7ef7ba76b987 req-1c99d18a-2d14-4069-b251-d0d5016566b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Received event network-changed-aa9c6b4c-eb31-4032-9748-d72a0880d5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:12:16 np0005466031 nova_compute[235803]: 2025-10-02 13:12:16.107 2 DEBUG nova.compute.manager [req-60b93db2-af84-43e1-91c3-7ef7ba76b987 req-1c99d18a-2d14-4069-b251-d0d5016566b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Refreshing instance network info cache due to event network-changed-aa9c6b4c-eb31-4032-9748-d72a0880d5ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:12:16 np0005466031 nova_compute[235803]: 2025-10-02 13:12:16.107 2 DEBUG oslo_concurrency.lockutils [req-60b93db2-af84-43e1-91c3-7ef7ba76b987 req-1c99d18a-2d14-4069-b251-d0d5016566b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c0e1f22b-20ca-45ef-82c8-c6b43a890782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:12:16 np0005466031 nova_compute[235803]: 2025-10-02 13:12:16.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:16 np0005466031 nova_compute[235803]: 2025-10-02 13:12:16.364 2 DEBUG nova.network.neutron [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:12:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:16.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:17.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.896 2 DEBUG nova.network.neutron [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Updating instance_info_cache with network_info: [{"id": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "address": "fa:16:3e:12:a6:16", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c6b4c-eb", "ovs_interfaceid": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.921 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Releasing lock "refresh_cache-c0e1f22b-20ca-45ef-82c8-c6b43a890782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.921 2 DEBUG nova.compute.manager [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Instance network_info: |[{"id": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "address": "fa:16:3e:12:a6:16", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c6b4c-eb", "ovs_interfaceid": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.922 2 DEBUG oslo_concurrency.lockutils [req-60b93db2-af84-43e1-91c3-7ef7ba76b987 req-1c99d18a-2d14-4069-b251-d0d5016566b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c0e1f22b-20ca-45ef-82c8-c6b43a890782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.922 2 DEBUG nova.network.neutron [req-60b93db2-af84-43e1-91c3-7ef7ba76b987 req-1c99d18a-2d14-4069-b251-d0d5016566b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Refreshing network info cache for port aa9c6b4c-eb31-4032-9748-d72a0880d5ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.925 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Start _get_guest_xml network_info=[{"id": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "address": "fa:16:3e:12:a6:16", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c6b4c-eb", "ovs_interfaceid": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.929 2 WARNING nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.939 2 DEBUG nova.virt.libvirt.host [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.940 2 DEBUG nova.virt.libvirt.host [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.943 2 DEBUG nova.virt.libvirt.host [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.943 2 DEBUG nova.virt.libvirt.host [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.944 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.944 2 DEBUG nova.virt.hardware [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.945 2 DEBUG nova.virt.hardware [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.945 2 DEBUG nova.virt.hardware [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.945 2 DEBUG nova.virt.hardware [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.945 2 DEBUG nova.virt.hardware [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.945 2 DEBUG nova.virt.hardware [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.945 2 DEBUG nova.virt.hardware [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.946 2 DEBUG nova.virt.hardware [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.946 2 DEBUG nova.virt.hardware [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.946 2 DEBUG nova.virt.hardware [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.946 2 DEBUG nova.virt.hardware [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:12:17 np0005466031 nova_compute[235803]: 2025-10-02 13:12:17.948 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:12:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3057999980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.406 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.444 2 DEBUG nova.storage.rbd_utils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.448 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.632 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:18 np0005466031 podman[324410]: 2025-10-02 13:12:18.636178577 +0000 UTC m=+0.063227672 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 09:12:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:18.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:18 np0005466031 podman[324411]: 2025-10-02 13:12:18.673157442 +0000 UTC m=+0.098885569 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:12:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:12:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/8473852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.902 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.904 2 DEBUG nova.virt.libvirt.vif [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:12:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-468949133',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-468949133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ge',id=205,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARJDtWwEIa73jOfITh0S3Xi3OzT9rBF+alfsyLXRbxk2puonzRxucJBK2BRHoshC3dw4yzXZgc14rNHWEy9MW96gMF19bT8yeo1M4v5Bwum2wxMyyCXx0KGJeRmwnd5wQ==',key_name='tempest-TestSecurityGroupsBasicOps-133947143',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-usfws8fm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:12:12Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=c0e1f22b-20ca-45ef-82c8-c6b43a890782,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "address": "fa:16:3e:12:a6:16", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c6b4c-eb", "ovs_interfaceid": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.904 2 DEBUG nova.network.os_vif_util [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "address": "fa:16:3e:12:a6:16", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c6b4c-eb", "ovs_interfaceid": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.905 2 DEBUG nova.network.os_vif_util [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:a6:16,bridge_name='br-int',has_traffic_filtering=True,id=aa9c6b4c-eb31-4032-9748-d72a0880d5ab,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c6b4c-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.906 2 DEBUG nova.objects.instance [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'pci_devices' on Instance uuid c0e1f22b-20ca-45ef-82c8-c6b43a890782 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.926 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  <uuid>c0e1f22b-20ca-45ef-82c8-c6b43a890782</uuid>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  <name>instance-000000cd</name>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-468949133</nova:name>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:12:17</nova:creationTime>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <nova:user uuid="362b536431b64b15b67740060af57e9c">tempest-TestSecurityGroupsBasicOps-2067500093-project-member</nova:user>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <nova:project uuid="e911de934ec043d1bd942c8aed562d04">tempest-TestSecurityGroupsBasicOps-2067500093</nova:project>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <nova:port uuid="aa9c6b4c-eb31-4032-9748-d72a0880d5ab">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <entry name="serial">c0e1f22b-20ca-45ef-82c8-c6b43a890782</entry>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <entry name="uuid">c0e1f22b-20ca-45ef-82c8-c6b43a890782</entry>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk.config">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:12:a6:16"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <target dev="tapaa9c6b4c-eb"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/c0e1f22b-20ca-45ef-82c8-c6b43a890782/console.log" append="off"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:12:18 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:12:18 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:12:18 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:12:18 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.927 2 DEBUG nova.compute.manager [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Preparing to wait for external event network-vif-plugged-aa9c6b4c-eb31-4032-9748-d72a0880d5ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.927 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.927 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.928 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.928 2 DEBUG nova.virt.libvirt.vif [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:12:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-468949133',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-468949133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ge',id=205,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARJDtWwEIa73jOfITh0S3Xi3OzT9rBF+alfsyLXRbxk2puonzRxucJBK2BRHoshC3dw4yzXZgc14rNHWEy9MW96gMF19bT8yeo1M4v5Bwum2wxMyyCXx0KGJeRmwnd5wQ==',key_name='tempest-TestSecurityGroupsBasicOps-133947143',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-usfws8fm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:12:12Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=c0e1f22b-20ca-45ef-82c8-c6b43a890782,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "address": "fa:16:3e:12:a6:16", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c6b4c-eb", "ovs_interfaceid": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.928 2 DEBUG nova.network.os_vif_util [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "address": "fa:16:3e:12:a6:16", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c6b4c-eb", "ovs_interfaceid": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.929 2 DEBUG nova.network.os_vif_util [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:a6:16,bridge_name='br-int',has_traffic_filtering=True,id=aa9c6b4c-eb31-4032-9748-d72a0880d5ab,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c6b4c-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.929 2 DEBUG os_vif [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:a6:16,bridge_name='br-int',has_traffic_filtering=True,id=aa9c6b4c-eb31-4032-9748-d72a0880d5ab,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c6b4c-eb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.930 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.931 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.934 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa9c6b4c-eb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.934 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa9c6b4c-eb, col_values=(('external_ids', {'iface-id': 'aa9c6b4c-eb31-4032-9748-d72a0880d5ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:a6:16', 'vm-uuid': 'c0e1f22b-20ca-45ef-82c8-c6b43a890782'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:18 np0005466031 NetworkManager[44907]: <info>  [1759410738.9367] manager: (tapaa9c6b4c-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:18 np0005466031 nova_compute[235803]: 2025-10-02 13:12:18.943 2 INFO os_vif [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:a6:16,bridge_name='br-int',has_traffic_filtering=True,id=aa9c6b4c-eb31-4032-9748-d72a0880d5ab,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c6b4c-eb')#033[00m
Oct  2 09:12:19 np0005466031 nova_compute[235803]: 2025-10-02 13:12:19.053 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:12:19 np0005466031 nova_compute[235803]: 2025-10-02 13:12:19.053 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:12:19 np0005466031 nova_compute[235803]: 2025-10-02 13:12:19.053 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No VIF found with MAC fa:16:3e:12:a6:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:12:19 np0005466031 nova_compute[235803]: 2025-10-02 13:12:19.053 2 INFO nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Using config drive#033[00m
Oct  2 09:12:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:12:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:19.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:12:19 np0005466031 nova_compute[235803]: 2025-10-02 13:12:19.149 2 DEBUG nova.storage.rbd_utils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:12:19 np0005466031 nova_compute[235803]: 2025-10-02 13:12:19.650 2 INFO nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Creating config drive at /var/lib/nova/instances/c0e1f22b-20ca-45ef-82c8-c6b43a890782/disk.config#033[00m
Oct  2 09:12:19 np0005466031 nova_compute[235803]: 2025-10-02 13:12:19.655 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0e1f22b-20ca-45ef-82c8-c6b43a890782/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph3hmvz1_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:19 np0005466031 nova_compute[235803]: 2025-10-02 13:12:19.749 2 DEBUG nova.network.neutron [req-60b93db2-af84-43e1-91c3-7ef7ba76b987 req-1c99d18a-2d14-4069-b251-d0d5016566b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Updated VIF entry in instance network info cache for port aa9c6b4c-eb31-4032-9748-d72a0880d5ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:12:19 np0005466031 nova_compute[235803]: 2025-10-02 13:12:19.750 2 DEBUG nova.network.neutron [req-60b93db2-af84-43e1-91c3-7ef7ba76b987 req-1c99d18a-2d14-4069-b251-d0d5016566b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Updating instance_info_cache with network_info: [{"id": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "address": "fa:16:3e:12:a6:16", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c6b4c-eb", "ovs_interfaceid": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:12:19 np0005466031 nova_compute[235803]: 2025-10-02 13:12:19.771 2 DEBUG oslo_concurrency.lockutils [req-60b93db2-af84-43e1-91c3-7ef7ba76b987 req-1c99d18a-2d14-4069-b251-d0d5016566b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c0e1f22b-20ca-45ef-82c8-c6b43a890782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:12:19 np0005466031 nova_compute[235803]: 2025-10-02 13:12:19.804 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0e1f22b-20ca-45ef-82c8-c6b43a890782/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph3hmvz1_" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:19 np0005466031 nova_compute[235803]: 2025-10-02 13:12:19.837 2 DEBUG nova.storage.rbd_utils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:12:19 np0005466031 nova_compute[235803]: 2025-10-02 13:12:19.842 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0e1f22b-20ca-45ef-82c8-c6b43a890782/disk.config c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:20.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:20 np0005466031 nova_compute[235803]: 2025-10-02 13:12:20.783 2 DEBUG oslo_concurrency.processutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0e1f22b-20ca-45ef-82c8-c6b43a890782/disk.config c0e1f22b-20ca-45ef-82c8-c6b43a890782_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.940s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:20 np0005466031 nova_compute[235803]: 2025-10-02 13:12:20.783 2 INFO nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Deleting local config drive /var/lib/nova/instances/c0e1f22b-20ca-45ef-82c8-c6b43a890782/disk.config because it was imported into RBD.#033[00m
Oct  2 09:12:20 np0005466031 kernel: tapaa9c6b4c-eb: entered promiscuous mode
Oct  2 09:12:20 np0005466031 NetworkManager[44907]: <info>  [1759410740.8361] manager: (tapaa9c6b4c-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/364)
Oct  2 09:12:20 np0005466031 nova_compute[235803]: 2025-10-02 13:12:20.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:12:20Z|00807|binding|INFO|Claiming lport aa9c6b4c-eb31-4032-9748-d72a0880d5ab for this chassis.
Oct  2 09:12:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:12:20Z|00808|binding|INFO|aa9c6b4c-eb31-4032-9748-d72a0880d5ab: Claiming fa:16:3e:12:a6:16 10.100.0.8
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.847 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:a6:16 10.100.0.8'], port_security=['fa:16:3e:12:a6:16 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c0e1f22b-20ca-45ef-82c8-c6b43a890782', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'caab64a4-2f87-4e39-a0ac-b96f95aae4c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94f92e0-9e2a-42b5-8a3e-79ddfa458897, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=aa9c6b4c-eb31-4032-9748-d72a0880d5ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.848 141898 INFO neutron.agent.ovn.metadata.agent [-] Port aa9c6b4c-eb31-4032-9748-d72a0880d5ab in datapath dac20349-4f21-4aeb-a4a7-d775590cb44a bound to our chassis#033[00m
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.849 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dac20349-4f21-4aeb-a4a7-d775590cb44a#033[00m
Oct  2 09:12:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:12:20Z|00809|binding|INFO|Setting lport aa9c6b4c-eb31-4032-9748-d72a0880d5ab ovn-installed in OVS
Oct  2 09:12:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:12:20Z|00810|binding|INFO|Setting lport aa9c6b4c-eb31-4032-9748-d72a0880d5ab up in Southbound
Oct  2 09:12:20 np0005466031 nova_compute[235803]: 2025-10-02 13:12:20.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:20 np0005466031 nova_compute[235803]: 2025-10-02 13:12:20.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.865 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0a52f2d0-80a0-49fa-904b-4a17769ac007]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.866 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdac20349-41 in ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.868 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdac20349-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.868 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[45f11550-dfae-4f3e-a715-79656d9dde2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.868 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[93ec4722-4d96-412d-b5a9-ac2661b14a24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:20 np0005466031 systemd-machined[192227]: New machine qemu-94-instance-000000cd.
Oct  2 09:12:20 np0005466031 systemd-udevd[324546]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:12:20 np0005466031 systemd[1]: Started Virtual Machine qemu-94-instance-000000cd.
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.882 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a7810d-a438-4510-af34-78c43e205e77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:20 np0005466031 NetworkManager[44907]: <info>  [1759410740.8908] device (tapaa9c6b4c-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:12:20 np0005466031 NetworkManager[44907]: <info>  [1759410740.8918] device (tapaa9c6b4c-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.909 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[11c7b3bb-c7eb-47ba-af17-1e1a8cca05b0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.939 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb4f7d0-29e9-4258-8c14-93794d70ae16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.944 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[78555e26-6352-4e46-b2e4-71ef78b7092b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:20 np0005466031 systemd-udevd[324550]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:12:20 np0005466031 NetworkManager[44907]: <info>  [1759410740.9456] manager: (tapdac20349-40): new Veth device (/org/freedesktop/NetworkManager/Devices/365)
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.973 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe474ae-6d42-4371-9502-bc3fde372b7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:20.977 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[21e81667-9d8b-4718-93d3-e26f24993fc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:21 np0005466031 NetworkManager[44907]: <info>  [1759410741.0032] device (tapdac20349-40): carrier: link connected
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.004 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[46574f85-657e-4231-8463-32f4242b3c91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.020 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3fd9ff-5d9f-4258-8f3f-4afacbfd6553]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdac20349-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:d8:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 859658, 'reachable_time': 16817, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324578, 'error': None, 'target': 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.036 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2b23b3-dc39-4d90-9c32-eb2fd84e37aa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:d8a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 859658, 'tstamp': 859658}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324579, 'error': None, 'target': 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.054 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[12eee90f-9111-4dc0-b5ef-a7f3f72555b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdac20349-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:d8:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 859658, 'reachable_time': 16817, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324580, 'error': None, 'target': 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:21.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.088 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6f9a06-c975-462e-abae-a2a6acf4fd9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.147 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4a858c8d-42b1-4afe-9624-2a9551738f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.149 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdac20349-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.149 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.149 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdac20349-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:21 np0005466031 nova_compute[235803]: 2025-10-02 13:12:21.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:21 np0005466031 NetworkManager[44907]: <info>  [1759410741.1517] manager: (tapdac20349-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Oct  2 09:12:21 np0005466031 kernel: tapdac20349-40: entered promiscuous mode
Oct  2 09:12:21 np0005466031 nova_compute[235803]: 2025-10-02 13:12:21.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.155 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdac20349-40, col_values=(('external_ids', {'iface-id': '71ea06ee-2e8d-4617-a491-cbc5589b4465'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:21 np0005466031 nova_compute[235803]: 2025-10-02 13:12:21.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:21 np0005466031 ovn_controller[132413]: 2025-10-02T13:12:21Z|00811|binding|INFO|Releasing lport 71ea06ee-2e8d-4617-a491-cbc5589b4465 from this chassis (sb_readonly=0)
Oct  2 09:12:21 np0005466031 nova_compute[235803]: 2025-10-02 13:12:21.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.172 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dac20349-4f21-4aeb-a4a7-d775590cb44a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dac20349-4f21-4aeb-a4a7-d775590cb44a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.173 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[58e7badb-f539-4498-a25d-19a40352707c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.174 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-dac20349-4f21-4aeb-a4a7-d775590cb44a
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/dac20349-4f21-4aeb-a4a7-d775590cb44a.pid.haproxy
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID dac20349-4f21-4aeb-a4a7-d775590cb44a
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:12:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:21.174 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'env', 'PROCESS_TAG=haproxy-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dac20349-4f21-4aeb-a4a7-d775590cb44a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:12:21 np0005466031 nova_compute[235803]: 2025-10-02 13:12:21.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:21 np0005466031 nova_compute[235803]: 2025-10-02 13:12:21.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:21 np0005466031 podman[324648]: 2025-10-02 13:12:21.528619322 +0000 UTC m=+0.037376047 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:12:21 np0005466031 podman[324648]: 2025-10-02 13:12:21.701373486 +0000 UTC m=+0.210130191 container create 34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 09:12:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e374 e374: 3 total, 3 up, 3 in
Oct  2 09:12:21 np0005466031 systemd[1]: Started libpod-conmon-34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0.scope.
Oct  2 09:12:21 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:12:21 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85f886bfe91bd9616a29c6167c68d9126543ae97f79909912a29bb4727f016d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:12:21 np0005466031 podman[324648]: 2025-10-02 13:12:21.88314235 +0000 UTC m=+0.391899065 container init 34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:12:21 np0005466031 podman[324648]: 2025-10-02 13:12:21.8914639 +0000 UTC m=+0.400220595 container start 34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:12:21 np0005466031 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[324670]: [NOTICE]   (324674) : New worker (324676) forked
Oct  2 09:12:21 np0005466031 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[324670]: [NOTICE]   (324674) : Loading success.
Oct  2 09:12:22 np0005466031 nova_compute[235803]: 2025-10-02 13:12:22.031 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410742.0312378, c0e1f22b-20ca-45ef-82c8-c6b43a890782 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:12:22 np0005466031 nova_compute[235803]: 2025-10-02 13:12:22.032 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] VM Started (Lifecycle Event)#033[00m
Oct  2 09:12:22 np0005466031 nova_compute[235803]: 2025-10-02 13:12:22.053 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:12:22 np0005466031 nova_compute[235803]: 2025-10-02 13:12:22.058 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410742.0313456, c0e1f22b-20ca-45ef-82c8-c6b43a890782 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:12:22 np0005466031 nova_compute[235803]: 2025-10-02 13:12:22.059 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:12:22 np0005466031 nova_compute[235803]: 2025-10-02 13:12:22.142 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:12:22 np0005466031 nova_compute[235803]: 2025-10-02 13:12:22.145 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:12:22 np0005466031 nova_compute[235803]: 2025-10-02 13:12:22.235 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:12:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:22.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:12:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:23.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:12:23 np0005466031 nova_compute[235803]: 2025-10-02 13:12:23.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e375 e375: 3 total, 3 up, 3 in
Oct  2 09:12:23 np0005466031 nova_compute[235803]: 2025-10-02 13:12:23.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:24 np0005466031 nova_compute[235803]: 2025-10-02 13:12:24.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:24 np0005466031 nova_compute[235803]: 2025-10-02 13:12:24.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:12:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:24.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:24 np0005466031 nova_compute[235803]: 2025-10-02 13:12:24.850 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:12:24 np0005466031 nova_compute[235803]: 2025-10-02 13:12:24.851 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:12:24 np0005466031 nova_compute[235803]: 2025-10-02 13:12:24.851 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:12:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:25.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:25.880 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:25.881 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:25.882 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:26 np0005466031 nova_compute[235803]: 2025-10-02 13:12:26.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:26 np0005466031 podman[324688]: 2025-10-02 13:12:26.62189229 +0000 UTC m=+0.057078805 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct  2 09:12:26 np0005466031 podman[324689]: 2025-10-02 13:12:26.646446117 +0000 UTC m=+0.079607543 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct  2 09:12:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:26.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:27.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.749 2 DEBUG nova.compute.manager [req-67075266-8a98-43a5-a5b1-0c3146979c53 req-3194d901-9c32-4a65-ba70-05a195dd1025 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Received event network-vif-plugged-aa9c6b4c-eb31-4032-9748-d72a0880d5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.749 2 DEBUG oslo_concurrency.lockutils [req-67075266-8a98-43a5-a5b1-0c3146979c53 req-3194d901-9c32-4a65-ba70-05a195dd1025 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.749 2 DEBUG oslo_concurrency.lockutils [req-67075266-8a98-43a5-a5b1-0c3146979c53 req-3194d901-9c32-4a65-ba70-05a195dd1025 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.749 2 DEBUG oslo_concurrency.lockutils [req-67075266-8a98-43a5-a5b1-0c3146979c53 req-3194d901-9c32-4a65-ba70-05a195dd1025 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.749 2 DEBUG nova.compute.manager [req-67075266-8a98-43a5-a5b1-0c3146979c53 req-3194d901-9c32-4a65-ba70-05a195dd1025 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Processing event network-vif-plugged-aa9c6b4c-eb31-4032-9748-d72a0880d5ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.750 2 DEBUG nova.compute.manager [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.754 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410747.7533376, c0e1f22b-20ca-45ef-82c8-c6b43a890782 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.755 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.759 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.764 2 INFO nova.virt.libvirt.driver [-] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Instance spawned successfully.#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.765 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.788 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.793 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.807 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.807 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.808 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.808 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.809 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.809 2 DEBUG nova.virt.libvirt.driver [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.845 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.883 2 INFO nova.compute.manager [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Took 15.11 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.883 2 DEBUG nova.compute.manager [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.949 2 INFO nova.compute.manager [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Took 16.23 seconds to build instance.#033[00m
Oct  2 09:12:27 np0005466031 nova_compute[235803]: 2025-10-02 13:12:27.967 2 DEBUG oslo_concurrency.lockutils [None req-1ad4a985-a655-41cf-b710-cfb2e4c08d2e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.352 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Updating instance_info_cache with network_info: [{"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.369 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-f07a4381-2291-4a58-a2ca-b04071e65a0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.370 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.370 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.370 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.394 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.395 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.395 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.395 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.395 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:28.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:12:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/328279734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.821 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.917 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.918 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.922 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.922 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.925 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000cd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.925 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000cd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.928 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.928 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:28 np0005466031 nova_compute[235803]: 2025-10-02 13:12:28.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:29.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.122 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.123 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3480MB free_disk=20.739158630371094GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.123 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.124 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.264 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 61bad754-8d82-465b-8545-25d700a6e146 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.264 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance f07a4381-2291-4a58-a2ca-b04071e65a0a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.264 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 469a928f-d7cb-4add-9410-629caac3f6f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.264 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance c0e1f22b-20ca-45ef-82c8-c6b43a890782 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.265 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.265 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1088MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.349 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:12:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3156646001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.787 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.793 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.834 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.865 2 DEBUG nova.compute.manager [req-a76060ac-3930-4ec4-9889-cc60e64d81a1 req-43333348-44d1-45bf-9736-9f1cee74e190 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Received event network-vif-plugged-aa9c6b4c-eb31-4032-9748-d72a0880d5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.865 2 DEBUG oslo_concurrency.lockutils [req-a76060ac-3930-4ec4-9889-cc60e64d81a1 req-43333348-44d1-45bf-9736-9f1cee74e190 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.866 2 DEBUG oslo_concurrency.lockutils [req-a76060ac-3930-4ec4-9889-cc60e64d81a1 req-43333348-44d1-45bf-9736-9f1cee74e190 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.866 2 DEBUG oslo_concurrency.lockutils [req-a76060ac-3930-4ec4-9889-cc60e64d81a1 req-43333348-44d1-45bf-9736-9f1cee74e190 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.866 2 DEBUG nova.compute.manager [req-a76060ac-3930-4ec4-9889-cc60e64d81a1 req-43333348-44d1-45bf-9736-9f1cee74e190 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] No waiting events found dispatching network-vif-plugged-aa9c6b4c-eb31-4032-9748-d72a0880d5ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.866 2 WARNING nova.compute.manager [req-a76060ac-3930-4ec4-9889-cc60e64d81a1 req-43333348-44d1-45bf-9736-9f1cee74e190 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Received unexpected event network-vif-plugged-aa9c6b4c-eb31-4032-9748-d72a0880d5ab for instance with vm_state active and task_state None.#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.869 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:12:29 np0005466031 nova_compute[235803]: 2025-10-02 13:12:29.869 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:30 np0005466031 nova_compute[235803]: 2025-10-02 13:12:30.135 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:30.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:31.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:31 np0005466031 nova_compute[235803]: 2025-10-02 13:12:31.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:31 np0005466031 nova_compute[235803]: 2025-10-02 13:12:31.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e376 e376: 3 total, 3 up, 3 in
Oct  2 09:12:32 np0005466031 nova_compute[235803]: 2025-10-02 13:12:32.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:32 np0005466031 nova_compute[235803]: 2025-10-02 13:12:32.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:12:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:12:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:32.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:12:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:33.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:34 np0005466031 nova_compute[235803]: 2025-10-02 13:12:34.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:34 np0005466031 nova_compute[235803]: 2025-10-02 13:12:34.297 2 DEBUG nova.compute.manager [req-a57e04b4-f041-4c22-8ff6-5d3c84942df2 req-343d237f-dd49-457e-b386-63c6e30f87dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Received event network-changed-aa9c6b4c-eb31-4032-9748-d72a0880d5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:12:34 np0005466031 nova_compute[235803]: 2025-10-02 13:12:34.297 2 DEBUG nova.compute.manager [req-a57e04b4-f041-4c22-8ff6-5d3c84942df2 req-343d237f-dd49-457e-b386-63c6e30f87dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Refreshing instance network info cache due to event network-changed-aa9c6b4c-eb31-4032-9748-d72a0880d5ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:12:34 np0005466031 nova_compute[235803]: 2025-10-02 13:12:34.298 2 DEBUG oslo_concurrency.lockutils [req-a57e04b4-f041-4c22-8ff6-5d3c84942df2 req-343d237f-dd49-457e-b386-63c6e30f87dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c0e1f22b-20ca-45ef-82c8-c6b43a890782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:12:34 np0005466031 nova_compute[235803]: 2025-10-02 13:12:34.298 2 DEBUG oslo_concurrency.lockutils [req-a57e04b4-f041-4c22-8ff6-5d3c84942df2 req-343d237f-dd49-457e-b386-63c6e30f87dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c0e1f22b-20ca-45ef-82c8-c6b43a890782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:12:34 np0005466031 nova_compute[235803]: 2025-10-02 13:12:34.298 2 DEBUG nova.network.neutron [req-a57e04b4-f041-4c22-8ff6-5d3c84942df2 req-343d237f-dd49-457e-b386-63c6e30f87dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Refreshing network info cache for port aa9c6b4c-eb31-4032-9748-d72a0880d5ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:12:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:34.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:35.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:36 np0005466031 nova_compute[235803]: 2025-10-02 13:12:36.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:36 np0005466031 nova_compute[235803]: 2025-10-02 13:12:36.345 2 DEBUG nova.network.neutron [req-a57e04b4-f041-4c22-8ff6-5d3c84942df2 req-343d237f-dd49-457e-b386-63c6e30f87dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Updated VIF entry in instance network info cache for port aa9c6b4c-eb31-4032-9748-d72a0880d5ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:12:36 np0005466031 nova_compute[235803]: 2025-10-02 13:12:36.345 2 DEBUG nova.network.neutron [req-a57e04b4-f041-4c22-8ff6-5d3c84942df2 req-343d237f-dd49-457e-b386-63c6e30f87dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Updating instance_info_cache with network_info: [{"id": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "address": "fa:16:3e:12:a6:16", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c6b4c-eb", "ovs_interfaceid": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:12:36 np0005466031 nova_compute[235803]: 2025-10-02 13:12:36.363 2 DEBUG oslo_concurrency.lockutils [req-a57e04b4-f041-4c22-8ff6-5d3c84942df2 req-343d237f-dd49-457e-b386-63c6e30f87dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c0e1f22b-20ca-45ef-82c8-c6b43a890782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:12:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:36.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:37.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:38.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:39 np0005466031 nova_compute[235803]: 2025-10-02 13:12:39.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:39.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:40.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:41.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:41 np0005466031 nova_compute[235803]: 2025-10-02 13:12:41.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:41 np0005466031 nova_compute[235803]: 2025-10-02 13:12:41.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:42.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:43.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:44 np0005466031 nova_compute[235803]: 2025-10-02 13:12:44.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:44.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:45.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:46 np0005466031 nova_compute[235803]: 2025-10-02 13:12:46.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:46.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:46 np0005466031 ovn_controller[132413]: 2025-10-02T13:12:46Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:a6:16 10.100.0.8
Oct  2 09:12:46 np0005466031 ovn_controller[132413]: 2025-10-02T13:12:46Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:a6:16 10.100.0.8
Oct  2 09:12:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:47.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:48.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:49 np0005466031 nova_compute[235803]: 2025-10-02 13:12:49.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:49.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:49 np0005466031 podman[324829]: 2025-10-02 13:12:49.650727734 +0000 UTC m=+0.065468197 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  2 09:12:49 np0005466031 podman[324830]: 2025-10-02 13:12:49.710934627 +0000 UTC m=+0.122784506 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:12:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:50.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:50 np0005466031 nova_compute[235803]: 2025-10-02 13:12:50.861 2 DEBUG oslo_concurrency.lockutils [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:50 np0005466031 nova_compute[235803]: 2025-10-02 13:12:50.862 2 DEBUG oslo_concurrency.lockutils [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:50 np0005466031 nova_compute[235803]: 2025-10-02 13:12:50.862 2 DEBUG oslo_concurrency.lockutils [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:50 np0005466031 nova_compute[235803]: 2025-10-02 13:12:50.862 2 DEBUG oslo_concurrency.lockutils [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:50 np0005466031 nova_compute[235803]: 2025-10-02 13:12:50.862 2 DEBUG oslo_concurrency.lockutils [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:50 np0005466031 nova_compute[235803]: 2025-10-02 13:12:50.864 2 INFO nova.compute.manager [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Terminating instance#033[00m
Oct  2 09:12:50 np0005466031 nova_compute[235803]: 2025-10-02 13:12:50.865 2 DEBUG nova.compute.manager [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:12:51 np0005466031 kernel: tap84c1a249-c4 (unregistering): left promiscuous mode
Oct  2 09:12:51 np0005466031 NetworkManager[44907]: <info>  [1759410771.0803] device (tap84c1a249-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466031 ovn_controller[132413]: 2025-10-02T13:12:51Z|00812|binding|INFO|Releasing lport 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 from this chassis (sb_readonly=0)
Oct  2 09:12:51 np0005466031 ovn_controller[132413]: 2025-10-02T13:12:51Z|00813|binding|INFO|Setting lport 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 down in Southbound
Oct  2 09:12:51 np0005466031 ovn_controller[132413]: 2025-10-02T13:12:51Z|00814|binding|INFO|Removing iface tap84c1a249-c4 ovn-installed in OVS
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.097 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:d9:a1 10.100.0.9'], port_security=['fa:16:3e:67:d9:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '469a928f-d7cb-4add-9410-629caac3f6f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=84c1a249-c4f5-48bf-835d-bbbc75fefeb0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.099 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 84c1a249-c4f5-48bf-835d-bbbc75fefeb0 in datapath d9001b9c-bca6-4085-a954-1414269e31bc unbound from our chassis#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.100 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:51.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.122 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[09d6a8a3-9f03-47df-a277-a00cac58dd4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:51 np0005466031 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000c7.scope: Deactivated successfully.
Oct  2 09:12:51 np0005466031 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000c7.scope: Consumed 18.572s CPU time.
Oct  2 09:12:51 np0005466031 systemd-machined[192227]: Machine qemu-93-instance-000000c7 terminated.
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.164 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa190cc-77e4-4bd8-9de1-8eed89caf025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.169 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[bc96d2b6-14ab-446e-baea-e34b13dbeeb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.208 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9e5cdd-79b1-45d6-b67c-2f5fd7843c36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.238 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b5dee21b-c51b-436f-b85a-e6add0fe0909]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835666, 'reachable_time': 15897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324888, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.266 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6be569eb-2c85-4d5b-a057-6935a9aaef53]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 835678, 'tstamp': 835678}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324889, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 835681, 'tstamp': 835681}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324889, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.268 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.277 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.278 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.278 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.279 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.312 2 INFO nova.virt.libvirt.driver [-] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Instance destroyed successfully.#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.313 2 DEBUG nova.objects.instance [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'resources' on Instance uuid 469a928f-d7cb-4add-9410-629caac3f6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.328 2 DEBUG nova.virt.libvirt.vif [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=199,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:10:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-17mdigwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image
_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=469a928f-d7cb-4add-9410-629caac3f6f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.330 2 DEBUG nova.network.os_vif_util [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "address": "fa:16:3e:67:d9:a1", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c1a249-c4", "ovs_interfaceid": "84c1a249-c4f5-48bf-835d-bbbc75fefeb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.333 2 DEBUG nova.network.os_vif_util [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.334 2 DEBUG os_vif [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1a249-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.345 2 INFO os_vif [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:d9:a1,bridge_name='br-int',has_traffic_filtering=True,id=84c1a249-c4f5-48bf-835d-bbbc75fefeb0,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c1a249-c4')#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.524 2 DEBUG nova.compute.manager [req-965ae2a2-22af-499f-8c00-9cf47a2d0823 req-66600362-f308-4e9e-92c0-279c1a070863 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-unplugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.524 2 DEBUG oslo_concurrency.lockutils [req-965ae2a2-22af-499f-8c00-9cf47a2d0823 req-66600362-f308-4e9e-92c0-279c1a070863 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.524 2 DEBUG oslo_concurrency.lockutils [req-965ae2a2-22af-499f-8c00-9cf47a2d0823 req-66600362-f308-4e9e-92c0-279c1a070863 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.525 2 DEBUG oslo_concurrency.lockutils [req-965ae2a2-22af-499f-8c00-9cf47a2d0823 req-66600362-f308-4e9e-92c0-279c1a070863 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.525 2 DEBUG nova.compute.manager [req-965ae2a2-22af-499f-8c00-9cf47a2d0823 req-66600362-f308-4e9e-92c0-279c1a070863 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] No waiting events found dispatching network-vif-unplugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.525 2 DEBUG nova.compute.manager [req-965ae2a2-22af-499f-8c00-9cf47a2d0823 req-66600362-f308-4e9e-92c0-279c1a070863 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-unplugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:12:51 np0005466031 nova_compute[235803]: 2025-10-02 13:12:51.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.651 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:12:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:51.652 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:12:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:52.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:53.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:53 np0005466031 nova_compute[235803]: 2025-10-02 13:12:53.626 2 DEBUG nova.compute.manager [req-d4dcd385-4e88-4e22-8c7a-36ab4e8bde47 req-295a7144-be13-4563-a2c3-0fafcffc38d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:12:53 np0005466031 nova_compute[235803]: 2025-10-02 13:12:53.626 2 DEBUG oslo_concurrency.lockutils [req-d4dcd385-4e88-4e22-8c7a-36ab4e8bde47 req-295a7144-be13-4563-a2c3-0fafcffc38d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:53 np0005466031 nova_compute[235803]: 2025-10-02 13:12:53.626 2 DEBUG oslo_concurrency.lockutils [req-d4dcd385-4e88-4e22-8c7a-36ab4e8bde47 req-295a7144-be13-4563-a2c3-0fafcffc38d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:53 np0005466031 nova_compute[235803]: 2025-10-02 13:12:53.626 2 DEBUG oslo_concurrency.lockutils [req-d4dcd385-4e88-4e22-8c7a-36ab4e8bde47 req-295a7144-be13-4563-a2c3-0fafcffc38d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:53 np0005466031 nova_compute[235803]: 2025-10-02 13:12:53.627 2 DEBUG nova.compute.manager [req-d4dcd385-4e88-4e22-8c7a-36ab4e8bde47 req-295a7144-be13-4563-a2c3-0fafcffc38d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] No waiting events found dispatching network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:12:53 np0005466031 nova_compute[235803]: 2025-10-02 13:12:53.627 2 WARNING nova.compute.manager [req-d4dcd385-4e88-4e22-8c7a-36ab4e8bde47 req-295a7144-be13-4563-a2c3-0fafcffc38d8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received unexpected event network-vif-plugged-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:12:53 np0005466031 nova_compute[235803]: 2025-10-02 13:12:53.675 2 DEBUG oslo_concurrency.lockutils [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:53 np0005466031 nova_compute[235803]: 2025-10-02 13:12:53.675 2 DEBUG oslo_concurrency.lockutils [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:53 np0005466031 nova_compute[235803]: 2025-10-02 13:12:53.675 2 DEBUG oslo_concurrency.lockutils [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:53 np0005466031 nova_compute[235803]: 2025-10-02 13:12:53.676 2 DEBUG oslo_concurrency.lockutils [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:53 np0005466031 nova_compute[235803]: 2025-10-02 13:12:53.676 2 DEBUG oslo_concurrency.lockutils [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:53 np0005466031 nova_compute[235803]: 2025-10-02 13:12:53.677 2 INFO nova.compute.manager [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Terminating instance#033[00m
Oct  2 09:12:53 np0005466031 nova_compute[235803]: 2025-10-02 13:12:53.678 2 DEBUG nova.compute.manager [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:12:54 np0005466031 kernel: tapaa9c6b4c-eb (unregistering): left promiscuous mode
Oct  2 09:12:54 np0005466031 NetworkManager[44907]: <info>  [1759410774.5786] device (tapaa9c6b4c-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:12:54 np0005466031 ovn_controller[132413]: 2025-10-02T13:12:54Z|00815|binding|INFO|Releasing lport aa9c6b4c-eb31-4032-9748-d72a0880d5ab from this chassis (sb_readonly=0)
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:54 np0005466031 ovn_controller[132413]: 2025-10-02T13:12:54Z|00816|binding|INFO|Setting lport aa9c6b4c-eb31-4032-9748-d72a0880d5ab down in Southbound
Oct  2 09:12:54 np0005466031 ovn_controller[132413]: 2025-10-02T13:12:54Z|00817|binding|INFO|Removing iface tapaa9c6b4c-eb ovn-installed in OVS
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:54.641 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:a6:16 10.100.0.8'], port_security=['fa:16:3e:12:a6:16 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c0e1f22b-20ca-45ef-82c8-c6b43a890782', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e7e26e05-2bc9-4913-a62a-e635612d78e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94f92e0-9e2a-42b5-8a3e-79ddfa458897, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=aa9c6b4c-eb31-4032-9748-d72a0880d5ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:12:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:54.643 141898 INFO neutron.agent.ovn.metadata.agent [-] Port aa9c6b4c-eb31-4032-9748-d72a0880d5ab in datapath dac20349-4f21-4aeb-a4a7-d775590cb44a unbound from our chassis#033[00m
Oct  2 09:12:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:54.646 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dac20349-4f21-4aeb-a4a7-d775590cb44a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:12:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:54.652 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fb390457-e27c-4648-8a49-1f8545f482ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:54.653 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a namespace which is not needed anymore#033[00m
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:12:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:54.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:12:54 np0005466031 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000cd.scope: Deactivated successfully.
Oct  2 09:12:54 np0005466031 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000cd.scope: Consumed 16.302s CPU time.
Oct  2 09:12:54 np0005466031 systemd-machined[192227]: Machine qemu-94-instance-000000cd terminated.
Oct  2 09:12:54 np0005466031 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[324670]: [NOTICE]   (324674) : haproxy version is 2.8.14-c23fe91
Oct  2 09:12:54 np0005466031 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[324670]: [NOTICE]   (324674) : path to executable is /usr/sbin/haproxy
Oct  2 09:12:54 np0005466031 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[324670]: [WARNING]  (324674) : Exiting Master process...
Oct  2 09:12:54 np0005466031 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[324670]: [ALERT]    (324674) : Current worker (324676) exited with code 143 (Terminated)
Oct  2 09:12:54 np0005466031 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[324670]: [WARNING]  (324674) : All workers exited. Exiting... (0)
Oct  2 09:12:54 np0005466031 systemd[1]: libpod-34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0.scope: Deactivated successfully.
Oct  2 09:12:54 np0005466031 podman[324995]: 2025-10-02 13:12:54.838275466 +0000 UTC m=+0.076467663 container died 34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.917 2 INFO nova.virt.libvirt.driver [-] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Instance destroyed successfully.#033[00m
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.918 2 DEBUG nova.objects.instance [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'resources' on Instance uuid c0e1f22b-20ca-45ef-82c8-c6b43a890782 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.938 2 DEBUG nova.virt.libvirt.vif [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:12:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-468949133',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-468949133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ge',id=205,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARJDtWwEIa73jOfITh0S3Xi3OzT9rBF+alfsyLXRbxk2puonzRxucJBK2BRHoshC3dw4yzXZgc14rNHWEy9MW96gMF19bT8yeo1M4v5Bwum2wxMyyCXx0KGJeRmwnd5wQ==',key_name='tempest-TestSecurityGroupsBasicOps-133947143',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:12:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-usfws8fm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:12:27Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=c0e1f22b-20ca-45ef-82c8-c6b43a890782,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "address": "fa:16:3e:12:a6:16", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c6b4c-eb", "ovs_interfaceid": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.939 2 DEBUG nova.network.os_vif_util [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "address": "fa:16:3e:12:a6:16", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9c6b4c-eb", "ovs_interfaceid": "aa9c6b4c-eb31-4032-9748-d72a0880d5ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.939 2 DEBUG nova.network.os_vif_util [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:a6:16,bridge_name='br-int',has_traffic_filtering=True,id=aa9c6b4c-eb31-4032-9748-d72a0880d5ab,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c6b4c-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.940 2 DEBUG os_vif [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:a6:16,bridge_name='br-int',has_traffic_filtering=True,id=aa9c6b4c-eb31-4032-9748-d72a0880d5ab,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c6b4c-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.944 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa9c6b4c-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:54 np0005466031 nova_compute[235803]: 2025-10-02 13:12:54.952 2 INFO os_vif [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:a6:16,bridge_name='br-int',has_traffic_filtering=True,id=aa9c6b4c-eb31-4032-9748-d72a0880d5ab,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9c6b4c-eb')#033[00m
Oct  2 09:12:55 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0-userdata-shm.mount: Deactivated successfully.
Oct  2 09:12:55 np0005466031 systemd[1]: var-lib-containers-storage-overlay-c85f886bfe91bd9616a29c6167c68d9126543ae97f79909912a29bb4727f016d-merged.mount: Deactivated successfully.
Oct  2 09:12:55 np0005466031 podman[324995]: 2025-10-02 13:12:55.052722551 +0000 UTC m=+0.290914738 container cleanup 34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:12:55 np0005466031 systemd[1]: libpod-conmon-34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0.scope: Deactivated successfully.
Oct  2 09:12:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:55.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:55 np0005466031 podman[325052]: 2025-10-02 13:12:55.342492304 +0000 UTC m=+0.266774312 container remove 34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 09:12:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:55.349 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5234a980-fd07-452f-8a7c-5b0fd93bf899]: (4, ('Thu Oct  2 01:12:54 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a (34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0)\n34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0\nThu Oct  2 01:12:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a (34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0)\n34ab681db0392c7e457d5e70821f215f4c3a3185677daeef348f604fbc3fe6f0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:55.353 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb3175e-3413-4236-8c66-c02b6f4992e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:55.355 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdac20349-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:55 np0005466031 kernel: tapdac20349-40: left promiscuous mode
Oct  2 09:12:55 np0005466031 nova_compute[235803]: 2025-10-02 13:12:55.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:55 np0005466031 nova_compute[235803]: 2025-10-02 13:12:55.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:55.377 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0a7dfa-5f13-486d-9151-c072a0f82959]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:55.401 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1add2376-d6fc-475e-b940-b1ce6d6de2bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:55.403 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[aab25301-6263-4978-b648-c59a8cc8abf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:55.423 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d3969a4d-c94e-4627-bfb1-7cfb4cc41d05]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 859652, 'reachable_time': 31515, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325070, 'error': None, 'target': 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:55 np0005466031 systemd[1]: run-netns-ovnmeta\x2ddac20349\x2d4f21\x2d4aeb\x2da4a7\x2dd775590cb44a.mount: Deactivated successfully.
Oct  2 09:12:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:55.427 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:12:55 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:12:55.428 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[bb4d3687-ad01-45ab-b024-c0db8cecc6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:56 np0005466031 nova_compute[235803]: 2025-10-02 13:12:56.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:56 np0005466031 nova_compute[235803]: 2025-10-02 13:12:56.541 2 INFO nova.virt.libvirt.driver [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Deleting instance files /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8_del#033[00m
Oct  2 09:12:56 np0005466031 nova_compute[235803]: 2025-10-02 13:12:56.542 2 INFO nova.virt.libvirt.driver [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Deletion of /var/lib/nova/instances/469a928f-d7cb-4add-9410-629caac3f6f8_del complete#033[00m
Oct  2 09:12:56 np0005466031 nova_compute[235803]: 2025-10-02 13:12:56.591 2 INFO nova.compute.manager [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Took 5.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:12:56 np0005466031 nova_compute[235803]: 2025-10-02 13:12:56.592 2 DEBUG oslo.service.loopingcall [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:12:56 np0005466031 nova_compute[235803]: 2025-10-02 13:12:56.592 2 DEBUG nova.compute.manager [-] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:12:56 np0005466031 nova_compute[235803]: 2025-10-02 13:12:56.592 2 DEBUG nova.network.neutron [-] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:12:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:56.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:57.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:57 np0005466031 nova_compute[235803]: 2025-10-02 13:12:57.450 2 DEBUG nova.compute.manager [req-7a8bf6f3-16b4-4fe4-9d99-af5986556ccb req-8ecd2811-733e-4004-b604-2fbaf61366ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Received event network-vif-unplugged-aa9c6b4c-eb31-4032-9748-d72a0880d5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:12:57 np0005466031 nova_compute[235803]: 2025-10-02 13:12:57.451 2 DEBUG oslo_concurrency.lockutils [req-7a8bf6f3-16b4-4fe4-9d99-af5986556ccb req-8ecd2811-733e-4004-b604-2fbaf61366ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:57 np0005466031 nova_compute[235803]: 2025-10-02 13:12:57.451 2 DEBUG oslo_concurrency.lockutils [req-7a8bf6f3-16b4-4fe4-9d99-af5986556ccb req-8ecd2811-733e-4004-b604-2fbaf61366ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:57 np0005466031 nova_compute[235803]: 2025-10-02 13:12:57.451 2 DEBUG oslo_concurrency.lockutils [req-7a8bf6f3-16b4-4fe4-9d99-af5986556ccb req-8ecd2811-733e-4004-b604-2fbaf61366ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:57 np0005466031 nova_compute[235803]: 2025-10-02 13:12:57.451 2 DEBUG nova.compute.manager [req-7a8bf6f3-16b4-4fe4-9d99-af5986556ccb req-8ecd2811-733e-4004-b604-2fbaf61366ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] No waiting events found dispatching network-vif-unplugged-aa9c6b4c-eb31-4032-9748-d72a0880d5ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:12:57 np0005466031 nova_compute[235803]: 2025-10-02 13:12:57.452 2 DEBUG nova.compute.manager [req-7a8bf6f3-16b4-4fe4-9d99-af5986556ccb req-8ecd2811-733e-4004-b604-2fbaf61366ac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Received event network-vif-unplugged-aa9c6b4c-eb31-4032-9748-d72a0880d5ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:12:57 np0005466031 podman[325073]: 2025-10-02 13:12:57.62542641 +0000 UTC m=+0.057141696 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:12:57 np0005466031 podman[325072]: 2025-10-02 13:12:57.628593001 +0000 UTC m=+0.063056247 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd)
Oct  2 09:12:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:58.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:12:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:59.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:59 np0005466031 nova_compute[235803]: 2025-10-02 13:12:59.343 2 DEBUG nova.network.neutron [-] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:12:59 np0005466031 nova_compute[235803]: 2025-10-02 13:12:59.367 2 INFO nova.compute.manager [-] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Took 2.77 seconds to deallocate network for instance.#033[00m
Oct  2 09:12:59 np0005466031 nova_compute[235803]: 2025-10-02 13:12:59.407 2 DEBUG oslo_concurrency.lockutils [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:59 np0005466031 nova_compute[235803]: 2025-10-02 13:12:59.408 2 DEBUG oslo_concurrency.lockutils [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:59 np0005466031 nova_compute[235803]: 2025-10-02 13:12:59.418 2 DEBUG nova.compute.manager [req-e61c7e3b-bb2d-429b-ba99-1c86f783ac20 req-2ac09a2e-c7e2-45f7-9132-90ee584274de 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Received event network-vif-deleted-84c1a249-c4f5-48bf-835d-bbbc75fefeb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:12:59 np0005466031 nova_compute[235803]: 2025-10-02 13:12:59.526 2 DEBUG nova.compute.manager [req-e75bc8b0-e442-4634-8efb-6b1d14caf27c req-6a140436-775a-4b7e-bd5f-267093bf4517 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Received event network-vif-plugged-aa9c6b4c-eb31-4032-9748-d72a0880d5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:12:59 np0005466031 nova_compute[235803]: 2025-10-02 13:12:59.526 2 DEBUG oslo_concurrency.lockutils [req-e75bc8b0-e442-4634-8efb-6b1d14caf27c req-6a140436-775a-4b7e-bd5f-267093bf4517 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:59 np0005466031 nova_compute[235803]: 2025-10-02 13:12:59.527 2 DEBUG oslo_concurrency.lockutils [req-e75bc8b0-e442-4634-8efb-6b1d14caf27c req-6a140436-775a-4b7e-bd5f-267093bf4517 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:59 np0005466031 nova_compute[235803]: 2025-10-02 13:12:59.527 2 DEBUG oslo_concurrency.lockutils [req-e75bc8b0-e442-4634-8efb-6b1d14caf27c req-6a140436-775a-4b7e-bd5f-267093bf4517 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:59 np0005466031 nova_compute[235803]: 2025-10-02 13:12:59.527 2 DEBUG nova.compute.manager [req-e75bc8b0-e442-4634-8efb-6b1d14caf27c req-6a140436-775a-4b7e-bd5f-267093bf4517 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] No waiting events found dispatching network-vif-plugged-aa9c6b4c-eb31-4032-9748-d72a0880d5ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:12:59 np0005466031 nova_compute[235803]: 2025-10-02 13:12:59.527 2 WARNING nova.compute.manager [req-e75bc8b0-e442-4634-8efb-6b1d14caf27c req-6a140436-775a-4b7e-bd5f-267093bf4517 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Received unexpected event network-vif-plugged-aa9c6b4c-eb31-4032-9748-d72a0880d5ab for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:12:59 np0005466031 nova_compute[235803]: 2025-10-02 13:12:59.529 2 DEBUG oslo_concurrency.processutils [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:59 np0005466031 nova_compute[235803]: 2025-10-02 13:12:59.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:00 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3468991839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:00 np0005466031 nova_compute[235803]: 2025-10-02 13:13:00.104 2 DEBUG oslo_concurrency.processutils [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:00 np0005466031 nova_compute[235803]: 2025-10-02 13:13:00.114 2 DEBUG nova.compute.provider_tree [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:13:00 np0005466031 nova_compute[235803]: 2025-10-02 13:13:00.133 2 DEBUG nova.scheduler.client.report [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:13:00 np0005466031 nova_compute[235803]: 2025-10-02 13:13:00.163 2 DEBUG oslo_concurrency.lockutils [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:00 np0005466031 nova_compute[235803]: 2025-10-02 13:13:00.193 2 INFO nova.scheduler.client.report [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Deleted allocations for instance 469a928f-d7cb-4add-9410-629caac3f6f8#033[00m
Oct  2 09:13:00 np0005466031 nova_compute[235803]: 2025-10-02 13:13:00.260 2 DEBUG oslo_concurrency.lockutils [None req-1b340cc4-fb8d-4c58-8506-bbf10f8c1b54 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "469a928f-d7cb-4add-9410-629caac3f6f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:13:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:00.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:13:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:01.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:01 np0005466031 nova_compute[235803]: 2025-10-02 13:13:01.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:01 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:01.654 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:01 np0005466031 nova_compute[235803]: 2025-10-02 13:13:01.795 2 INFO nova.virt.libvirt.driver [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Deleting instance files /var/lib/nova/instances/c0e1f22b-20ca-45ef-82c8-c6b43a890782_del#033[00m
Oct  2 09:13:01 np0005466031 nova_compute[235803]: 2025-10-02 13:13:01.796 2 INFO nova.virt.libvirt.driver [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Deletion of /var/lib/nova/instances/c0e1f22b-20ca-45ef-82c8-c6b43a890782_del complete#033[00m
Oct  2 09:13:01 np0005466031 nova_compute[235803]: 2025-10-02 13:13:01.877 2 INFO nova.compute.manager [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Took 8.20 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:13:01 np0005466031 nova_compute[235803]: 2025-10-02 13:13:01.878 2 DEBUG oslo.service.loopingcall [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:13:01 np0005466031 nova_compute[235803]: 2025-10-02 13:13:01.879 2 DEBUG nova.compute.manager [-] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:13:01 np0005466031 nova_compute[235803]: 2025-10-02 13:13:01.879 2 DEBUG nova.network.neutron [-] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:13:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:02.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:03.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:03 np0005466031 nova_compute[235803]: 2025-10-02 13:13:03.366 2 DEBUG nova.network.neutron [-] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:13:03 np0005466031 nova_compute[235803]: 2025-10-02 13:13:03.391 2 INFO nova.compute.manager [-] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Took 1.51 seconds to deallocate network for instance.#033[00m
Oct  2 09:13:03 np0005466031 nova_compute[235803]: 2025-10-02 13:13:03.430 2 DEBUG nova.compute.manager [req-681b210e-f4f7-4ae2-9fae-09364f1e55ba req-bacc2194-5795-4b2c-a233-02bf1d52d897 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Received event network-vif-deleted-aa9c6b4c-eb31-4032-9748-d72a0880d5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:03 np0005466031 nova_compute[235803]: 2025-10-02 13:13:03.452 2 DEBUG oslo_concurrency.lockutils [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:03 np0005466031 nova_compute[235803]: 2025-10-02 13:13:03.454 2 DEBUG oslo_concurrency.lockutils [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:03 np0005466031 nova_compute[235803]: 2025-10-02 13:13:03.546 2 DEBUG oslo_concurrency.processutils [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:04 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3318170492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:04 np0005466031 nova_compute[235803]: 2025-10-02 13:13:04.051 2 DEBUG oslo_concurrency.processutils [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:04 np0005466031 nova_compute[235803]: 2025-10-02 13:13:04.059 2 DEBUG nova.compute.provider_tree [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:13:04 np0005466031 nova_compute[235803]: 2025-10-02 13:13:04.081 2 DEBUG nova.scheduler.client.report [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:13:04 np0005466031 nova_compute[235803]: 2025-10-02 13:13:04.102 2 DEBUG oslo_concurrency.lockutils [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:04 np0005466031 nova_compute[235803]: 2025-10-02 13:13:04.127 2 INFO nova.scheduler.client.report [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Deleted allocations for instance c0e1f22b-20ca-45ef-82c8-c6b43a890782#033[00m
Oct  2 09:13:04 np0005466031 nova_compute[235803]: 2025-10-02 13:13:04.177 2 DEBUG oslo_concurrency.lockutils [None req-53c3cbb3-aeb1-4680-bdfa-d9de5f4be3a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "c0e1f22b-20ca-45ef-82c8-c6b43a890782" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:04.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:04 np0005466031 nova_compute[235803]: 2025-10-02 13:13:04.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:05.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:13:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2316275095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:13:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:13:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2316275095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:13:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:06 np0005466031 nova_compute[235803]: 2025-10-02 13:13:06.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:06 np0005466031 nova_compute[235803]: 2025-10-02 13:13:06.310 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410771.3088915, 469a928f-d7cb-4add-9410-629caac3f6f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:13:06 np0005466031 nova_compute[235803]: 2025-10-02 13:13:06.311 2 INFO nova.compute.manager [-] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:13:06 np0005466031 nova_compute[235803]: 2025-10-02 13:13:06.334 2 DEBUG nova.compute.manager [None req-bdcac59d-0044-417b-9271-fe36c4f34d7c - - - - - -] [instance: 469a928f-d7cb-4add-9410-629caac3f6f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:13:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:06.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:07.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:13:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:08.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:13:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:09.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:09 np0005466031 nova_compute[235803]: 2025-10-02 13:13:09.915 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410774.9146025, c0e1f22b-20ca-45ef-82c8-c6b43a890782 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:13:09 np0005466031 nova_compute[235803]: 2025-10-02 13:13:09.916 2 INFO nova.compute.manager [-] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:13:09 np0005466031 nova_compute[235803]: 2025-10-02 13:13:09.936 2 DEBUG nova.compute.manager [None req-8ea7d47d-2172-4ca4-9a03-41dfad4cc725 - - - - - -] [instance: c0e1f22b-20ca-45ef-82c8-c6b43a890782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:13:09 np0005466031 nova_compute[235803]: 2025-10-02 13:13:09.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:10.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:11.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:11 np0005466031 nova_compute[235803]: 2025-10-02 13:13:11.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:11 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:11 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:12 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:13:12 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:12 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:13:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:13:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:12.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:13:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:13:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:13.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:13:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:14.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:14 np0005466031 nova_compute[235803]: 2025-10-02 13:13:14.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:15 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3948110413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:15.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:15 np0005466031 nova_compute[235803]: 2025-10-02 13:13:15.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:16 np0005466031 nova_compute[235803]: 2025-10-02 13:13:16.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:16 np0005466031 nova_compute[235803]: 2025-10-02 13:13:16.408 2 DEBUG oslo_concurrency.lockutils [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:16 np0005466031 nova_compute[235803]: 2025-10-02 13:13:16.409 2 DEBUG oslo_concurrency.lockutils [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:16 np0005466031 nova_compute[235803]: 2025-10-02 13:13:16.409 2 DEBUG oslo_concurrency.lockutils [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:16 np0005466031 nova_compute[235803]: 2025-10-02 13:13:16.409 2 DEBUG oslo_concurrency.lockutils [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:16 np0005466031 nova_compute[235803]: 2025-10-02 13:13:16.409 2 DEBUG oslo_concurrency.lockutils [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:16 np0005466031 nova_compute[235803]: 2025-10-02 13:13:16.410 2 INFO nova.compute.manager [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Terminating instance#033[00m
Oct  2 09:13:16 np0005466031 nova_compute[235803]: 2025-10-02 13:13:16.411 2 DEBUG nova.compute.manager [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:13:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:16.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:17 np0005466031 kernel: tap8313f187-d7 (unregistering): left promiscuous mode
Oct  2 09:13:17 np0005466031 NetworkManager[44907]: <info>  [1759410797.0832] device (tap8313f187-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:13:17 np0005466031 ovn_controller[132413]: 2025-10-02T13:13:17Z|00818|binding|INFO|Releasing lport 8313f187-d7bf-46d7-a7fe-6454eaa6bc87 from this chassis (sb_readonly=0)
Oct  2 09:13:17 np0005466031 ovn_controller[132413]: 2025-10-02T13:13:17Z|00819|binding|INFO|Setting lport 8313f187-d7bf-46d7-a7fe-6454eaa6bc87 down in Southbound
Oct  2 09:13:17 np0005466031 ovn_controller[132413]: 2025-10-02T13:13:17Z|00820|binding|INFO|Removing iface tap8313f187-d7 ovn-installed in OVS
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.127 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:8e:57 10.100.0.6'], port_security=['fa:16:3e:ca:8e:57 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f07a4381-2291-4a58-a2ca-b04071e65a0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=8313f187-d7bf-46d7-a7fe-6454eaa6bc87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.128 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 8313f187-d7bf-46d7-a7fe-6454eaa6bc87 in datapath d9001b9c-bca6-4085-a954-1414269e31bc unbound from our chassis#033[00m
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.129 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc#033[00m
Oct  2 09:13:17 np0005466031 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c4.scope: Deactivated successfully.
Oct  2 09:13:17 np0005466031 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c4.scope: Consumed 23.859s CPU time.
Oct  2 09:13:17 np0005466031 systemd-machined[192227]: Machine qemu-91-instance-000000c4 terminated.
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.148 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[53cca890-61bb-4ac3-99c9-2fc645b535fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:17.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.182 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[8be95c1e-db4c-4353-8e7a-9b67f5c7742c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.185 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[7b1df192-83ff-42df-be46-9deea7696348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.209 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[75e5413f-2754-47bb-bb65-9207bd566df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.233 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[37d37518-7a69-4ef2-a62d-e4c07fa667b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835666, 'reachable_time': 15897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325358, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.252 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2e53927b-e868-4f04-8c4a-3ed2d4507e83]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 835678, 'tstamp': 835678}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325362, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 835681, 'tstamp': 835681}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325362, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.254 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.260 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.260 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.260 2 INFO nova.virt.libvirt.driver [-] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Instance destroyed successfully.#033[00m
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.261 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.261 2 DEBUG nova.objects.instance [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'resources' on Instance uuid f07a4381-2291-4a58-a2ca-b04071e65a0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:13:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:17.261 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.341 2 DEBUG nova.virt.libvirt.vif [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:08:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=196,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:08:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-yct79g8e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image
_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:08:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=f07a4381-2291-4a58-a2ca-b04071e65a0a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.342 2 DEBUG nova.network.os_vif_util [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "address": "fa:16:3e:ca:8e:57", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8313f187-d7", "ovs_interfaceid": "8313f187-d7bf-46d7-a7fe-6454eaa6bc87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.343 2 DEBUG nova.network.os_vif_util [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ca:8e:57,bridge_name='br-int',has_traffic_filtering=True,id=8313f187-d7bf-46d7-a7fe-6454eaa6bc87,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8313f187-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.343 2 DEBUG os_vif [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:8e:57,bridge_name='br-int',has_traffic_filtering=True,id=8313f187-d7bf-46d7-a7fe-6454eaa6bc87,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8313f187-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8313f187-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:17 np0005466031 nova_compute[235803]: 2025-10-02 13:13:17.350 2 INFO os_vif [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:8e:57,bridge_name='br-int',has_traffic_filtering=True,id=8313f187-d7bf-46d7-a7fe-6454eaa6bc87,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8313f187-d7')#033[00m
Oct  2 09:13:18 np0005466031 nova_compute[235803]: 2025-10-02 13:13:18.079 2 DEBUG nova.compute.manager [req-139673f7-bed8-4f1a-9b79-20ed50448901 req-39fa317a-83dd-4fed-b50b-b8e409c20cd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Received event network-vif-unplugged-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:18 np0005466031 nova_compute[235803]: 2025-10-02 13:13:18.079 2 DEBUG oslo_concurrency.lockutils [req-139673f7-bed8-4f1a-9b79-20ed50448901 req-39fa317a-83dd-4fed-b50b-b8e409c20cd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:18 np0005466031 nova_compute[235803]: 2025-10-02 13:13:18.080 2 DEBUG oslo_concurrency.lockutils [req-139673f7-bed8-4f1a-9b79-20ed50448901 req-39fa317a-83dd-4fed-b50b-b8e409c20cd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:18 np0005466031 nova_compute[235803]: 2025-10-02 13:13:18.080 2 DEBUG oslo_concurrency.lockutils [req-139673f7-bed8-4f1a-9b79-20ed50448901 req-39fa317a-83dd-4fed-b50b-b8e409c20cd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:18 np0005466031 nova_compute[235803]: 2025-10-02 13:13:18.080 2 DEBUG nova.compute.manager [req-139673f7-bed8-4f1a-9b79-20ed50448901 req-39fa317a-83dd-4fed-b50b-b8e409c20cd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] No waiting events found dispatching network-vif-unplugged-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:13:18 np0005466031 nova_compute[235803]: 2025-10-02 13:13:18.080 2 DEBUG nova.compute.manager [req-139673f7-bed8-4f1a-9b79-20ed50448901 req-39fa317a-83dd-4fed-b50b-b8e409c20cd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Received event network-vif-unplugged-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:13:18 np0005466031 nova_compute[235803]: 2025-10-02 13:13:18.080 2 DEBUG nova.compute.manager [req-139673f7-bed8-4f1a-9b79-20ed50448901 req-39fa317a-83dd-4fed-b50b-b8e409c20cd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Received event network-vif-plugged-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:18 np0005466031 nova_compute[235803]: 2025-10-02 13:13:18.081 2 DEBUG oslo_concurrency.lockutils [req-139673f7-bed8-4f1a-9b79-20ed50448901 req-39fa317a-83dd-4fed-b50b-b8e409c20cd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:18 np0005466031 nova_compute[235803]: 2025-10-02 13:13:18.081 2 DEBUG oslo_concurrency.lockutils [req-139673f7-bed8-4f1a-9b79-20ed50448901 req-39fa317a-83dd-4fed-b50b-b8e409c20cd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:18 np0005466031 nova_compute[235803]: 2025-10-02 13:13:18.081 2 DEBUG oslo_concurrency.lockutils [req-139673f7-bed8-4f1a-9b79-20ed50448901 req-39fa317a-83dd-4fed-b50b-b8e409c20cd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:18 np0005466031 nova_compute[235803]: 2025-10-02 13:13:18.081 2 DEBUG nova.compute.manager [req-139673f7-bed8-4f1a-9b79-20ed50448901 req-39fa317a-83dd-4fed-b50b-b8e409c20cd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] No waiting events found dispatching network-vif-plugged-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:13:18 np0005466031 nova_compute[235803]: 2025-10-02 13:13:18.082 2 WARNING nova.compute.manager [req-139673f7-bed8-4f1a-9b79-20ed50448901 req-39fa317a-83dd-4fed-b50b-b8e409c20cd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Received unexpected event network-vif-plugged-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:13:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:18.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:19.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:19 np0005466031 ovn_controller[132413]: 2025-10-02T13:13:19Z|00821|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct  2 09:13:19 np0005466031 nova_compute[235803]: 2025-10-02 13:13:19.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:19 np0005466031 ovn_controller[132413]: 2025-10-02T13:13:19Z|00822|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct  2 09:13:19 np0005466031 nova_compute[235803]: 2025-10-02 13:13:19.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:20 np0005466031 nova_compute[235803]: 2025-10-02 13:13:20.108 2 INFO nova.virt.libvirt.driver [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Deleting instance files /var/lib/nova/instances/f07a4381-2291-4a58-a2ca-b04071e65a0a_del#033[00m
Oct  2 09:13:20 np0005466031 nova_compute[235803]: 2025-10-02 13:13:20.109 2 INFO nova.virt.libvirt.driver [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Deletion of /var/lib/nova/instances/f07a4381-2291-4a58-a2ca-b04071e65a0a_del complete#033[00m
Oct  2 09:13:20 np0005466031 nova_compute[235803]: 2025-10-02 13:13:20.162 2 INFO nova.compute.manager [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Took 3.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:13:20 np0005466031 nova_compute[235803]: 2025-10-02 13:13:20.162 2 DEBUG oslo.service.loopingcall [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:13:20 np0005466031 nova_compute[235803]: 2025-10-02 13:13:20.163 2 DEBUG nova.compute.manager [-] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:13:20 np0005466031 nova_compute[235803]: 2025-10-02 13:13:20.163 2 DEBUG nova.network.neutron [-] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:13:20 np0005466031 nova_compute[235803]: 2025-10-02 13:13:20.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:20 np0005466031 podman[325443]: 2025-10-02 13:13:20.652192312 +0000 UTC m=+0.067300598 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:13:20 np0005466031 podman[325444]: 2025-10-02 13:13:20.680385844 +0000 UTC m=+0.095043287 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  2 09:13:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:20.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:21.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:21 np0005466031 nova_compute[235803]: 2025-10-02 13:13:21.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:22 np0005466031 nova_compute[235803]: 2025-10-02 13:13:22.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:22 np0005466031 nova_compute[235803]: 2025-10-02 13:13:22.591 2 DEBUG nova.network.neutron [-] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:13:22 np0005466031 nova_compute[235803]: 2025-10-02 13:13:22.611 2 INFO nova.compute.manager [-] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Took 2.45 seconds to deallocate network for instance.#033[00m
Oct  2 09:13:22 np0005466031 nova_compute[235803]: 2025-10-02 13:13:22.673 2 DEBUG oslo_concurrency.lockutils [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:22 np0005466031 nova_compute[235803]: 2025-10-02 13:13:22.673 2 DEBUG oslo_concurrency.lockutils [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:22 np0005466031 nova_compute[235803]: 2025-10-02 13:13:22.676 2 DEBUG nova.compute.manager [req-d090b23c-82ea-4fd8-8fc5-022520c1c4f0 req-1ccef350-85de-447b-9f24-1b5748464808 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Received event network-vif-deleted-8313f187-d7bf-46d7-a7fe-6454eaa6bc87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:13:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:22.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:13:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:23.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:23 np0005466031 nova_compute[235803]: 2025-10-02 13:13:23.205 2 DEBUG oslo_concurrency.processutils [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1530611564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:23 np0005466031 nova_compute[235803]: 2025-10-02 13:13:23.669 2 DEBUG oslo_concurrency.processutils [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:23 np0005466031 nova_compute[235803]: 2025-10-02 13:13:23.676 2 DEBUG nova.compute.provider_tree [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:13:23 np0005466031 nova_compute[235803]: 2025-10-02 13:13:23.689 2 DEBUG nova.scheduler.client.report [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:13:23 np0005466031 nova_compute[235803]: 2025-10-02 13:13:23.711 2 DEBUG oslo_concurrency.lockutils [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:23 np0005466031 nova_compute[235803]: 2025-10-02 13:13:23.737 2 INFO nova.scheduler.client.report [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Deleted allocations for instance f07a4381-2291-4a58-a2ca-b04071e65a0a#033[00m
Oct  2 09:13:23 np0005466031 nova_compute[235803]: 2025-10-02 13:13:23.831 2 DEBUG oslo_concurrency.lockutils [None req-333ea10f-fe3c-4a30-b13b-5e4c0ce4e938 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "f07a4381-2291-4a58-a2ca-b04071e65a0a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:24 np0005466031 nova_compute[235803]: 2025-10-02 13:13:24.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:13:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:24.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:13:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:25.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:25 np0005466031 nova_compute[235803]: 2025-10-02 13:13:25.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:25 np0005466031 nova_compute[235803]: 2025-10-02 13:13:25.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:13:25 np0005466031 nova_compute[235803]: 2025-10-02 13:13:25.711 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:13:25 np0005466031 nova_compute[235803]: 2025-10-02 13:13:25.711 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:25 np0005466031 nova_compute[235803]: 2025-10-02 13:13:25.712 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:25 np0005466031 nova_compute[235803]: 2025-10-02 13:13:25.742 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:25 np0005466031 nova_compute[235803]: 2025-10-02 13:13:25.743 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:25 np0005466031 nova_compute[235803]: 2025-10-02 13:13:25.743 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:25 np0005466031 nova_compute[235803]: 2025-10-02 13:13:25.743 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:13:25 np0005466031 nova_compute[235803]: 2025-10-02 13:13:25.743 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:25.881 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:25.881 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:25.881 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1372799750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.215 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.301 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.302 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.351 2 DEBUG oslo_concurrency.lockutils [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "61bad754-8d82-465b-8545-25d700a6e146" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.352 2 DEBUG oslo_concurrency.lockutils [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.353 2 DEBUG oslo_concurrency.lockutils [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "61bad754-8d82-465b-8545-25d700a6e146-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.353 2 DEBUG oslo_concurrency.lockutils [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.353 2 DEBUG oslo_concurrency.lockutils [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.355 2 INFO nova.compute.manager [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Terminating instance#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.356 2 DEBUG nova.compute.manager [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:13:26 np0005466031 kernel: tap1f77e4ed-6c (unregistering): left promiscuous mode
Oct  2 09:13:26 np0005466031 NetworkManager[44907]: <info>  [1759410806.4099] device (tap1f77e4ed-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:13:26 np0005466031 ovn_controller[132413]: 2025-10-02T13:13:26Z|00823|binding|INFO|Releasing lport 1f77e4ed-6c42-4686-ae66-d3693020fddd from this chassis (sb_readonly=0)
Oct  2 09:13:26 np0005466031 ovn_controller[132413]: 2025-10-02T13:13:26Z|00824|binding|INFO|Setting lport 1f77e4ed-6c42-4686-ae66-d3693020fddd down in Southbound
Oct  2 09:13:26 np0005466031 ovn_controller[132413]: 2025-10-02T13:13:26Z|00825|binding|INFO|Removing iface tap1f77e4ed-6c ovn-installed in OVS
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.467 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:a4:0c 10.100.0.14'], port_security=['fa:16:3e:53:a4:0c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '61bad754-8d82-465b-8545-25d700a6e146', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e0b78e6-81a7-466c-a6a5-7c1350a20a08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=1f77e4ed-6c42-4686-ae66-d3693020fddd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.468 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 1f77e4ed-6c42-4686-ae66-d3693020fddd in datapath d9001b9c-bca6-4085-a954-1414269e31bc unbound from our chassis#033[00m
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.470 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9001b9c-bca6-4085-a954-1414269e31bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.471 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0d289e-a5bd-4fa8-be0b-c1697f5b9161]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.471 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc namespace which is not needed anymore#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:26 np0005466031 virtqemud[235323]: An error occurred, but the cause is unknown
Oct  2 09:13:26 np0005466031 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c2.scope: Deactivated successfully.
Oct  2 09:13:26 np0005466031 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c2.scope: Consumed 25.260s CPU time.
Oct  2 09:13:26 np0005466031 systemd-machined[192227]: Machine qemu-90-instance-000000c2 terminated.
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.572 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.573 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3915MB free_disk=20.942607879638672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.573 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.574 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.593 2 INFO nova.virt.libvirt.driver [-] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Instance destroyed successfully.#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.593 2 DEBUG nova.objects.instance [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'resources' on Instance uuid 61bad754-8d82-465b-8545-25d700a6e146 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.613 2 DEBUG nova.virt.libvirt.vif [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:08:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-851533710',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-851533710',id=194,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:08:22Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-0899y8tr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:08:22Z,user_data=None,user_id='156cc6022c70402ab6d194a340b076d5',uuid=61bad754-8d82-465b-8545-25d700a6e146,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "address": "fa:16:3e:53:a4:0c", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f77e4ed-6c", "ovs_interfaceid": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.614 2 DEBUG nova.network.os_vif_util [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "address": "fa:16:3e:53:a4:0c", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f77e4ed-6c", "ovs_interfaceid": "1f77e4ed-6c42-4686-ae66-d3693020fddd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.614 2 DEBUG nova.network.os_vif_util [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:a4:0c,bridge_name='br-int',has_traffic_filtering=True,id=1f77e4ed-6c42-4686-ae66-d3693020fddd,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f77e4ed-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.615 2 DEBUG os_vif [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:a4:0c,bridge_name='br-int',has_traffic_filtering=True,id=1f77e4ed-6c42-4686-ae66-d3693020fddd,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f77e4ed-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:13:26 np0005466031 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[320712]: [NOTICE]   (320716) : haproxy version is 2.8.14-c23fe91
Oct  2 09:13:26 np0005466031 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[320712]: [NOTICE]   (320716) : path to executable is /usr/sbin/haproxy
Oct  2 09:13:26 np0005466031 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[320712]: [WARNING]  (320716) : Exiting Master process...
Oct  2 09:13:26 np0005466031 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[320712]: [WARNING]  (320716) : Exiting Master process...
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:26 np0005466031 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[320712]: [ALERT]    (320716) : Current worker (320719) exited with code 143 (Terminated)
Oct  2 09:13:26 np0005466031 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[320712]: [WARNING]  (320716) : All workers exited. Exiting... (0)
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f77e4ed-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:26 np0005466031 systemd[1]: libpod-ebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a.scope: Deactivated successfully.
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:13:26 np0005466031 podman[325561]: 2025-10-02 13:13:26.626675374 +0000 UTC m=+0.058182867 container died ebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.628 2 INFO os_vif [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:a4:0c,bridge_name='br-int',has_traffic_filtering=True,id=1f77e4ed-6c42-4686-ae66-d3693020fddd,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f77e4ed-6c')#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.667 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 61bad754-8d82-465b-8545-25d700a6e146 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.668 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.668 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:13:26 np0005466031 systemd[1]: var-lib-containers-storage-overlay-0f6d19d9cd68aaea6f8d9a700a0835fe96d7882492418a80e8a9c47c9e1c6484-merged.mount: Deactivated successfully.
Oct  2 09:13:26 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a-userdata-shm.mount: Deactivated successfully.
Oct  2 09:13:26 np0005466031 podman[325561]: 2025-10-02 13:13:26.70469509 +0000 UTC m=+0.136202573 container cleanup ebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:13:26 np0005466031 systemd[1]: libpod-conmon-ebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a.scope: Deactivated successfully.
Oct  2 09:13:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:26.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.751 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:26 np0005466031 podman[325604]: 2025-10-02 13:13:26.876977161 +0000 UTC m=+0.148195318 container remove ebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.883 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9b664678-ccfe-4be3-8b95-c190b8b7ce0a]: (4, ('Thu Oct  2 01:13:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc (ebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a)\nebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a\nThu Oct  2 01:13:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc (ebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a)\nebb34fea738d23de992b133b67e1c29216f6ae31099956fab21e059eff333f6a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.884 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[88c8ffd6-3b19-4986-8b77-6c55b6e61c44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.885 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:26 np0005466031 kernel: tapd9001b9c-b0: left promiscuous mode
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.891 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fe22b7b4-e123-4a4a-82e3-68c3fe488866]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.912 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc34742-4dbb-4455-be9f-5741bc1bb457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.913 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[47f27425-eacc-4a70-93da-69fcee60fb15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.930 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[550c65f1-5ceb-4a83-81aa-584c9af02007]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835659, 'reachable_time': 23524, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325638, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:26 np0005466031 systemd[1]: run-netns-ovnmeta\x2dd9001b9c\x2dbca6\x2d4085\x2da954\x2d1414269e31bc.mount: Deactivated successfully.
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.933 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:13:26 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:26.933 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[d831149b-3b6a-4da8-b28e-ec9558bd0536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.939 2 DEBUG nova.compute.manager [req-2488802a-bc30-4809-8acc-22e55384f760 req-a38db7af-a94c-4293-9dda-a1e2949f81ce 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Received event network-vif-unplugged-1f77e4ed-6c42-4686-ae66-d3693020fddd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.940 2 DEBUG oslo_concurrency.lockutils [req-2488802a-bc30-4809-8acc-22e55384f760 req-a38db7af-a94c-4293-9dda-a1e2949f81ce 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "61bad754-8d82-465b-8545-25d700a6e146-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.940 2 DEBUG oslo_concurrency.lockutils [req-2488802a-bc30-4809-8acc-22e55384f760 req-a38db7af-a94c-4293-9dda-a1e2949f81ce 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.941 2 DEBUG oslo_concurrency.lockutils [req-2488802a-bc30-4809-8acc-22e55384f760 req-a38db7af-a94c-4293-9dda-a1e2949f81ce 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.941 2 DEBUG nova.compute.manager [req-2488802a-bc30-4809-8acc-22e55384f760 req-a38db7af-a94c-4293-9dda-a1e2949f81ce 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] No waiting events found dispatching network-vif-unplugged-1f77e4ed-6c42-4686-ae66-d3693020fddd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:13:26 np0005466031 nova_compute[235803]: 2025-10-02 13:13:26.941 2 DEBUG nova.compute.manager [req-2488802a-bc30-4809-8acc-22e55384f760 req-a38db7af-a94c-4293-9dda-a1e2949f81ce 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Received event network-vif-unplugged-1f77e4ed-6c42-4686-ae66-d3693020fddd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:13:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:27.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3093970101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:27 np0005466031 nova_compute[235803]: 2025-10-02 13:13:27.253 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:27 np0005466031 nova_compute[235803]: 2025-10-02 13:13:27.261 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:13:27 np0005466031 nova_compute[235803]: 2025-10-02 13:13:27.276 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:13:27 np0005466031 nova_compute[235803]: 2025-10-02 13:13:27.301 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:13:27 np0005466031 nova_compute[235803]: 2025-10-02 13:13:27.301 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:27 np0005466031 nova_compute[235803]: 2025-10-02 13:13:27.426 2 INFO nova.virt.libvirt.driver [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Deleting instance files /var/lib/nova/instances/61bad754-8d82-465b-8545-25d700a6e146_del#033[00m
Oct  2 09:13:27 np0005466031 nova_compute[235803]: 2025-10-02 13:13:27.428 2 INFO nova.virt.libvirt.driver [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Deletion of /var/lib/nova/instances/61bad754-8d82-465b-8545-25d700a6e146_del complete#033[00m
Oct  2 09:13:27 np0005466031 nova_compute[235803]: 2025-10-02 13:13:27.485 2 INFO nova.compute.manager [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:13:27 np0005466031 nova_compute[235803]: 2025-10-02 13:13:27.486 2 DEBUG oslo.service.loopingcall [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:13:27 np0005466031 nova_compute[235803]: 2025-10-02 13:13:27.487 2 DEBUG nova.compute.manager [-] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:13:27 np0005466031 nova_compute[235803]: 2025-10-02 13:13:27.487 2 DEBUG nova.network.neutron [-] [instance: 61bad754-8d82-465b-8545-25d700a6e146] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:13:28 np0005466031 nova_compute[235803]: 2025-10-02 13:13:28.590 2 DEBUG nova.network.neutron [-] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:13:28 np0005466031 nova_compute[235803]: 2025-10-02 13:13:28.615 2 INFO nova.compute.manager [-] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Took 1.13 seconds to deallocate network for instance.#033[00m
Oct  2 09:13:28 np0005466031 podman[325662]: 2025-10-02 13:13:28.631644835 +0000 UTC m=+0.059972228 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 09:13:28 np0005466031 podman[325661]: 2025-10-02 13:13:28.641726955 +0000 UTC m=+0.070454289 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:13:28 np0005466031 nova_compute[235803]: 2025-10-02 13:13:28.660 2 DEBUG nova.compute.manager [req-dd44325f-d35e-4b81-b3d8-f19e5820b86c req-8fa04805-93c1-491a-b5aa-868b42988faa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Received event network-vif-deleted-1f77e4ed-6c42-4686-ae66-d3693020fddd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:13:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:28.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:13:28 np0005466031 nova_compute[235803]: 2025-10-02 13:13:28.824 2 INFO nova.compute.manager [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Took 0.21 seconds to detach 1 volumes for instance.#033[00m
Oct  2 09:13:28 np0005466031 nova_compute[235803]: 2025-10-02 13:13:28.885 2 DEBUG oslo_concurrency.lockutils [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:28 np0005466031 nova_compute[235803]: 2025-10-02 13:13:28.886 2 DEBUG oslo_concurrency.lockutils [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:13:28 np0005466031 nova_compute[235803]: 2025-10-02 13:13:28.943 2 DEBUG oslo_concurrency.processutils [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:13:29 np0005466031 nova_compute[235803]: 2025-10-02 13:13:29.031 2 DEBUG nova.compute.manager [req-ed286190-8dbf-40a0-b4df-9226934d8178 req-3d78aedc-ac50-43a3-930c-48f1de85fc2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Received event network-vif-plugged-1f77e4ed-6c42-4686-ae66-d3693020fddd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:13:29 np0005466031 nova_compute[235803]: 2025-10-02 13:13:29.032 2 DEBUG oslo_concurrency.lockutils [req-ed286190-8dbf-40a0-b4df-9226934d8178 req-3d78aedc-ac50-43a3-930c-48f1de85fc2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "61bad754-8d82-465b-8545-25d700a6e146-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:13:29 np0005466031 nova_compute[235803]: 2025-10-02 13:13:29.033 2 DEBUG oslo_concurrency.lockutils [req-ed286190-8dbf-40a0-b4df-9226934d8178 req-3d78aedc-ac50-43a3-930c-48f1de85fc2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:13:29 np0005466031 nova_compute[235803]: 2025-10-02 13:13:29.033 2 DEBUG oslo_concurrency.lockutils [req-ed286190-8dbf-40a0-b4df-9226934d8178 req-3d78aedc-ac50-43a3-930c-48f1de85fc2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:13:29 np0005466031 nova_compute[235803]: 2025-10-02 13:13:29.033 2 DEBUG nova.compute.manager [req-ed286190-8dbf-40a0-b4df-9226934d8178 req-3d78aedc-ac50-43a3-930c-48f1de85fc2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] No waiting events found dispatching network-vif-plugged-1f77e4ed-6c42-4686-ae66-d3693020fddd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:13:29 np0005466031 nova_compute[235803]: 2025-10-02 13:13:29.034 2 WARNING nova.compute.manager [req-ed286190-8dbf-40a0-b4df-9226934d8178 req-3d78aedc-ac50-43a3-930c-48f1de85fc2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Received unexpected event network-vif-plugged-1f77e4ed-6c42-4686-ae66-d3693020fddd for instance with vm_state deleted and task_state None.
Oct  2 09:13:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:29.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/532331378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:29 np0005466031 nova_compute[235803]: 2025-10-02 13:13:29.438 2 DEBUG oslo_concurrency.processutils [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:13:29 np0005466031 nova_compute[235803]: 2025-10-02 13:13:29.444 2 DEBUG nova.compute.provider_tree [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:13:29 np0005466031 nova_compute[235803]: 2025-10-02 13:13:29.459 2 DEBUG nova.scheduler.client.report [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:13:29 np0005466031 nova_compute[235803]: 2025-10-02 13:13:29.481 2 DEBUG oslo_concurrency.lockutils [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:13:29 np0005466031 nova_compute[235803]: 2025-10-02 13:13:29.509 2 INFO nova.scheduler.client.report [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Deleted allocations for instance 61bad754-8d82-465b-8545-25d700a6e146
Oct  2 09:13:29 np0005466031 nova_compute[235803]: 2025-10-02 13:13:29.594 2 DEBUG oslo_concurrency.lockutils [None req-615b8a1f-141a-478b-bfc8-1293e52eeca0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "61bad754-8d82-465b-8545-25d700a6e146" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:13:30 np0005466031 nova_compute[235803]: 2025-10-02 13:13:30.225 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:13:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:30.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:31.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:31 np0005466031 nova_compute[235803]: 2025-10-02 13:13:31.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:13:31 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1278279996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:13:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:13:31 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1278279996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:13:31 np0005466031 nova_compute[235803]: 2025-10-02 13:13:31.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:32 np0005466031 nova_compute[235803]: 2025-10-02 13:13:32.257 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410797.2559452, f07a4381-2291-4a58-a2ca-b04071e65a0a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:13:32 np0005466031 nova_compute[235803]: 2025-10-02 13:13:32.258 2 INFO nova.compute.manager [-] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] VM Stopped (Lifecycle Event)
Oct  2 09:13:32 np0005466031 nova_compute[235803]: 2025-10-02 13:13:32.278 2 DEBUG nova.compute.manager [None req-04cac9d3-7ed0-47ff-bbf1-5d64f06c8c98 - - - - - -] [instance: f07a4381-2291-4a58-a2ca-b04071e65a0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:13:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:13:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:32.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:13:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:33.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:33 np0005466031 nova_compute[235803]: 2025-10-02 13:13:33.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:13:33 np0005466031 nova_compute[235803]: 2025-10-02 13:13:33.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:13:33 np0005466031 nova_compute[235803]: 2025-10-02 13:13:33.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 09:13:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e377 e377: 3 total, 3 up, 3 in
Oct  2 09:13:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:13:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:34.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:13:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:35.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:35 np0005466031 nova_compute[235803]: 2025-10-02 13:13:35.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:36 np0005466031 nova_compute[235803]: 2025-10-02 13:13:36.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:36 np0005466031 nova_compute[235803]: 2025-10-02 13:13:36.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:13:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:36.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:13:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:37.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e378 e378: 3 total, 3 up, 3 in
Oct  2 09:13:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:38.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:39.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e379 e379: 3 total, 3 up, 3 in
Oct  2 09:13:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:40.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e380 e380: 3 total, 3 up, 3 in
Oct  2 09:13:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:41.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:41 np0005466031 nova_compute[235803]: 2025-10-02 13:13:41.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:41 np0005466031 nova_compute[235803]: 2025-10-02 13:13:41.592 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410806.5916624, 61bad754-8d82-465b-8545-25d700a6e146 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:13:41 np0005466031 nova_compute[235803]: 2025-10-02 13:13:41.593 2 INFO nova.compute.manager [-] [instance: 61bad754-8d82-465b-8545-25d700a6e146] VM Stopped (Lifecycle Event)
Oct  2 09:13:41 np0005466031 nova_compute[235803]: 2025-10-02 13:13:41.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:41 np0005466031 nova_compute[235803]: 2025-10-02 13:13:41.623 2 DEBUG nova.compute.manager [None req-fbafa0f9-c85d-4609-802b-3bbf7ee2508d - - - - - -] [instance: 61bad754-8d82-465b-8545-25d700a6e146] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:13:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:42.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:13:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:43.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:13:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:44.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:45.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:46 np0005466031 nova_compute[235803]: 2025-10-02 13:13:46.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:46 np0005466031 nova_compute[235803]: 2025-10-02 13:13:46.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:46.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:46 np0005466031 nova_compute[235803]: 2025-10-02 13:13:46.864 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "f2511cad-2f04-468f-99b3-f7302be124b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:13:46 np0005466031 nova_compute[235803]: 2025-10-02 13:13:46.865 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:13:46 np0005466031 nova_compute[235803]: 2025-10-02 13:13:46.910 2 DEBUG nova.compute.manager [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:13:46 np0005466031 nova_compute[235803]: 2025-10-02 13:13:46.996 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:13:46 np0005466031 nova_compute[235803]: 2025-10-02 13:13:46.997 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.005 2 DEBUG nova.virt.hardware [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.006 2 INFO nova.compute.claims [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Claim successful on node compute-2.ctlplane.example.com
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.118 2 DEBUG oslo_concurrency.processutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:13:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:47.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4100119741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.585 2 DEBUG oslo_concurrency.processutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.591 2 DEBUG nova.compute.provider_tree [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.609 2 DEBUG nova.scheduler.client.report [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.669 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.670 2 DEBUG nova.compute.manager [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.738 2 DEBUG nova.compute.manager [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.739 2 DEBUG nova.network.neutron [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.788 2 INFO nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.833 2 DEBUG nova.compute.manager [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.950 2 DEBUG nova.compute.manager [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.952 2 DEBUG nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.952 2 INFO nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Creating image(s)
Oct  2 09:13:47 np0005466031 nova_compute[235803]: 2025-10-02 13:13:47.976 2 DEBUG nova.storage.rbd_utils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] rbd image f2511cad-2f04-468f-99b3-f7302be124b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.001 2 DEBUG nova.storage.rbd_utils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] rbd image f2511cad-2f04-468f-99b3-f7302be124b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.027 2 DEBUG nova.storage.rbd_utils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] rbd image f2511cad-2f04-468f-99b3-f7302be124b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.030 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "061bafbe80e766b15d75144b1a7ce7b9094c8cfb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.031 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "061bafbe80e766b15d75144b1a7ce7b9094c8cfb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.255 2 DEBUG nova.virt.libvirt.imagebackend [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/23101426-fbb2-4946-b9ce-fdcd0b2d5391/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/23101426-fbb2-4946-b9ce-fdcd0b2d5391/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.323 2 DEBUG nova.virt.libvirt.imagebackend [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/23101426-fbb2-4946-b9ce-fdcd0b2d5391/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.324 2 DEBUG nova.storage.rbd_utils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] cloning images/23101426-fbb2-4946-b9ce-fdcd0b2d5391@snap to None/f2511cad-2f04-468f-99b3-f7302be124b8_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.393 2 DEBUG nova.policy [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '81db307ac1f846188ce19b644ebcc396', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cbaefa5c700c4ed495a5244732eed7e3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.671 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "061bafbe80e766b15d75144b1a7ce7b9094c8cfb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.743952) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828743982, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2453, "num_deletes": 255, "total_data_size": 5697494, "memory_usage": 5781376, "flush_reason": "Manual Compaction"}
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Oct  2 09:13:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:48.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828774652, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 3723786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71564, "largest_seqno": 74012, "table_properties": {"data_size": 3713812, "index_size": 6339, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21332, "raw_average_key_size": 20, "raw_value_size": 3693614, "raw_average_value_size": 3614, "num_data_blocks": 274, "num_entries": 1022, "num_filter_entries": 1022, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410630, "oldest_key_time": 1759410630, "file_creation_time": 1759410828, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 30745 microseconds, and 8080 cpu microseconds.
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.774697) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 3723786 bytes OK
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.774714) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.776105) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.776119) EVENT_LOG_v1 {"time_micros": 1759410828776114, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.776135) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 5686620, prev total WAL file size 5686620, number of live WAL files 2.
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.777376) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(3636KB)], [147(10054KB)]
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828777440, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 14019305, "oldest_snapshot_seqno": -1}
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.824 2 DEBUG nova.objects.instance [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lazy-loading 'migration_context' on Instance uuid f2511cad-2f04-468f-99b3-f7302be124b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.846 2 DEBUG nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.847 2 DEBUG nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Ensure instance console log exists: /var/lib/nova/instances/f2511cad-2f04-468f-99b3-f7302be124b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.847 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.847 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:13:48 np0005466031 nova_compute[235803]: 2025-10-02 13:13:48.848 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 9539 keys, 12075135 bytes, temperature: kUnknown
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828878725, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 12075135, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12013803, "index_size": 36387, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23877, "raw_key_size": 251184, "raw_average_key_size": 26, "raw_value_size": 11846974, "raw_average_value_size": 1241, "num_data_blocks": 1385, "num_entries": 9539, "num_filter_entries": 9539, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759410828, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.878967) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12075135 bytes
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.880908) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.3 rd, 119.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 10069, records dropped: 530 output_compression: NoCompression
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.880927) EVENT_LOG_v1 {"time_micros": 1759410828880917, "job": 94, "event": "compaction_finished", "compaction_time_micros": 101348, "compaction_time_cpu_micros": 28803, "output_level": 6, "num_output_files": 1, "total_output_size": 12075135, "num_input_records": 10069, "num_output_records": 9539, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828881740, "job": 94, "event": "table_file_deletion", "file_number": 149}
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828883790, "job": 94, "event": "table_file_deletion", "file_number": 147}
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.777283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.883930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.883936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.883938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.883940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:48 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:13:48.883942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:49.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:49 np0005466031 nova_compute[235803]: 2025-10-02 13:13:49.384 2 DEBUG nova.network.neutron [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Successfully created port: a0a28341-736a-4a2d-b80f-efe5fb1b2239 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 09:13:50 np0005466031 nova_compute[235803]: 2025-10-02 13:13:50.297 2 DEBUG nova.network.neutron [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Successfully updated port: a0a28341-736a-4a2d-b80f-efe5fb1b2239 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 09:13:50 np0005466031 nova_compute[235803]: 2025-10-02 13:13:50.318 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:13:50 np0005466031 nova_compute[235803]: 2025-10-02 13:13:50.318 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquired lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:13:50 np0005466031 nova_compute[235803]: 2025-10-02 13:13:50.318 2 DEBUG nova.network.neutron [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 09:13:50 np0005466031 nova_compute[235803]: 2025-10-02 13:13:50.398 2 DEBUG nova.compute.manager [req-c5d93da1-92d2-4ea8-8ee2-4e8ae3ef6cf4 req-b348dbd8-2f8a-42f7-a321-b310f18adae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Received event network-changed-a0a28341-736a-4a2d-b80f-efe5fb1b2239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:13:50 np0005466031 nova_compute[235803]: 2025-10-02 13:13:50.398 2 DEBUG nova.compute.manager [req-c5d93da1-92d2-4ea8-8ee2-4e8ae3ef6cf4 req-b348dbd8-2f8a-42f7-a321-b310f18adae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Refreshing instance network info cache due to event network-changed-a0a28341-736a-4a2d-b80f-efe5fb1b2239. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:13:50 np0005466031 nova_compute[235803]: 2025-10-02 13:13:50.398 2 DEBUG oslo_concurrency.lockutils [req-c5d93da1-92d2-4ea8-8ee2-4e8ae3ef6cf4 req-b348dbd8-2f8a-42f7-a321-b310f18adae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:13:50 np0005466031 nova_compute[235803]: 2025-10-02 13:13:50.461 2 DEBUG nova.network.neutron [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 09:13:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:50.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e381 e381: 3 total, 3 up, 3 in
Oct  2 09:13:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:51.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.343 2 DEBUG nova.network.neutron [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Updating instance_info_cache with network_info: [{"id": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "address": "fa:16:3e:5d:7f:92", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a28341-73", "ovs_interfaceid": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.359 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Releasing lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.360 2 DEBUG nova.compute.manager [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Instance network_info: |[{"id": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "address": "fa:16:3e:5d:7f:92", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a28341-73", "ovs_interfaceid": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.360 2 DEBUG oslo_concurrency.lockutils [req-c5d93da1-92d2-4ea8-8ee2-4e8ae3ef6cf4 req-b348dbd8-2f8a-42f7-a321-b310f18adae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.360 2 DEBUG nova.network.neutron [req-c5d93da1-92d2-4ea8-8ee2-4e8ae3ef6cf4 req-b348dbd8-2f8a-42f7-a321-b310f18adae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Refreshing network info cache for port a0a28341-736a-4a2d-b80f-efe5fb1b2239 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.363 2 DEBUG nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Start _get_guest_xml network_info=[{"id": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "address": "fa:16:3e:5d:7f:92", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a28341-73", "ovs_interfaceid": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T13:13:35Z,direct_url=<?>,disk_format='raw',id=23101426-fbb2-4946-b9ce-fdcd0b2d5391,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-825838792',owner='cbaefa5c700c4ed495a5244732eed7e3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T13:13:42Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '23101426-fbb2-4946-b9ce-fdcd0b2d5391'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.366 2 WARNING nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.373 2 DEBUG nova.virt.libvirt.host [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.373 2 DEBUG nova.virt.libvirt.host [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.379 2 DEBUG nova.virt.libvirt.host [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.380 2 DEBUG nova.virt.libvirt.host [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.381 2 DEBUG nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.381 2 DEBUG nova.virt.hardware [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T13:13:35Z,direct_url=<?>,disk_format='raw',id=23101426-fbb2-4946-b9ce-fdcd0b2d5391,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-825838792',owner='cbaefa5c700c4ed495a5244732eed7e3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T13:13:42Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.382 2 DEBUG nova.virt.hardware [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.382 2 DEBUG nova.virt.hardware [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.382 2 DEBUG nova.virt.hardware [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.382 2 DEBUG nova.virt.hardware [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.382 2 DEBUG nova.virt.hardware [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.383 2 DEBUG nova.virt.hardware [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.383 2 DEBUG nova.virt.hardware [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.383 2 DEBUG nova.virt.hardware [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.383 2 DEBUG nova.virt.hardware [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.383 2 DEBUG nova.virt.hardware [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.386 2 DEBUG oslo_concurrency.processutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:51 np0005466031 podman[326006]: 2025-10-02 13:13:51.631407431 +0000 UTC m=+0.058806094 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 09:13:51 np0005466031 podman[326007]: 2025-10-02 13:13:51.664672629 +0000 UTC m=+0.089655643 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct  2 09:13:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:13:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2946218505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.853 2 DEBUG oslo_concurrency.processutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.890 2 DEBUG nova.storage.rbd_utils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] rbd image f2511cad-2f04-468f-99b3-f7302be124b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:13:51 np0005466031 nova_compute[235803]: 2025-10-02 13:13:51.897 2 DEBUG oslo_concurrency.processutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:13:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2028186615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:13:52 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:13:52 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.359 2 DEBUG oslo_concurrency.processutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.361 2 DEBUG nova.virt.libvirt.vif [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1860760860',display_name='tempest-TestStampPattern-server-1860760860',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-teststamppattern-server-1860760860',id=207,image_ref='23101426-fbb2-4946-b9ce-fdcd0b2d5391',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKUbc0knAwx6AjWLxEzN/Myua8DLnB1wbhcbmQ6eEauumE5/uQW0dSqivGfoQK/c14gwHJVzybj68xv4MB1iOou4+ZOgUXCtWGooPy7in3/oc/+fGSq5+qeVZlJgs3yxeQ==',key_name='tempest-TestStampPattern-2020443839',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cbaefa5c700c4ed495a5244732eed7e3',ramdisk_id='',reservation_id='r-nob92iki',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='97dd449c-87a7-4278-a819-3d412f587a4c',image_min_disk='1',image_min_ram='0',image_owner_id='cbaefa5c700c4ed495a5244732eed7e3',image_owner_project_name='tempest-TestStampPattern-1060565162',image_owner_user_name='tempest-TestStampPattern-1060565162-project-member',image_user_id='81db307ac1f846188ce19b644ebcc396',network_allocated='True',owner_project_name='tempest-TestStampPattern-1060565162',owner_user_name='tempest-TestStampPattern-1060565162-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:13:47Z,user_data=None,user_id='81db307ac1f846188ce19b644ebcc396',uuid=f2511cad-2f04-468f-99b3-f7302be124b8,vcpu_
model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "address": "fa:16:3e:5d:7f:92", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a28341-73", "ovs_interfaceid": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.361 2 DEBUG nova.network.os_vif_util [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Converting VIF {"id": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "address": "fa:16:3e:5d:7f:92", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a28341-73", "ovs_interfaceid": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.362 2 DEBUG nova.network.os_vif_util [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=a0a28341-736a-4a2d-b80f-efe5fb1b2239,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a28341-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.364 2 DEBUG nova.objects.instance [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lazy-loading 'pci_devices' on Instance uuid f2511cad-2f04-468f-99b3-f7302be124b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.418 2 DEBUG nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  <uuid>f2511cad-2f04-468f-99b3-f7302be124b8</uuid>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  <name>instance-000000cf</name>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestStampPattern-server-1860760860</nova:name>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:13:51</nova:creationTime>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <nova:user uuid="81db307ac1f846188ce19b644ebcc396">tempest-TestStampPattern-1060565162-project-member</nova:user>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <nova:project uuid="cbaefa5c700c4ed495a5244732eed7e3">tempest-TestStampPattern-1060565162</nova:project>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="23101426-fbb2-4946-b9ce-fdcd0b2d5391"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <nova:port uuid="a0a28341-736a-4a2d-b80f-efe5fb1b2239">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <entry name="serial">f2511cad-2f04-468f-99b3-f7302be124b8</entry>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <entry name="uuid">f2511cad-2f04-468f-99b3-f7302be124b8</entry>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/f2511cad-2f04-468f-99b3-f7302be124b8_disk">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/f2511cad-2f04-468f-99b3-f7302be124b8_disk.config">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:5d:7f:92"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <target dev="tapa0a28341-73"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/f2511cad-2f04-468f-99b3-f7302be124b8/console.log" append="off"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <input type="keyboard" bus="usb"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:13:52 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:13:52 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:13:52 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:13:52 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.420 2 DEBUG nova.compute.manager [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Preparing to wait for external event network-vif-plugged-a0a28341-736a-4a2d-b80f-efe5fb1b2239 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.421 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.421 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.421 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.422 2 DEBUG nova.virt.libvirt.vif [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1860760860',display_name='tempest-TestStampPattern-server-1860760860',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-teststamppattern-server-1860760860',id=207,image_ref='23101426-fbb2-4946-b9ce-fdcd0b2d5391',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKUbc0knAwx6AjWLxEzN/Myua8DLnB1wbhcbmQ6eEauumE5/uQW0dSqivGfoQK/c14gwHJVzybj68xv4MB1iOou4+ZOgUXCtWGooPy7in3/oc/+fGSq5+qeVZlJgs3yxeQ==',key_name='tempest-TestStampPattern-2020443839',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cbaefa5c700c4ed495a5244732eed7e3',ramdisk_id='',reservation_id='r-nob92iki',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='97dd449c-87a7-4278-a819-3d412f587a4c',image_min_disk='1',image_min_ram='0',image_owner_id='cbaefa5c700c4ed495a5244732eed7e3',image_owner_project_name='tempest-TestStampPattern-1060565162',image_owner_user_name='tempest-TestStampPattern-1060565162-project-member',image_user_id='81db307ac1f846188ce19b644ebcc396',network_allocated='True',owner_project_name='tempest-TestStampPattern-1060565162',owner_user_name='tempest-TestStampPattern-1060565162-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:13:47Z,user_data=None,user_id='81db307ac1f846188ce19b644ebcc396',uuid=f2511cad-2f04-468f-99b3-f7302be1
24b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "address": "fa:16:3e:5d:7f:92", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a28341-73", "ovs_interfaceid": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.422 2 DEBUG nova.network.os_vif_util [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Converting VIF {"id": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "address": "fa:16:3e:5d:7f:92", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a28341-73", "ovs_interfaceid": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.423 2 DEBUG nova.network.os_vif_util [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=a0a28341-736a-4a2d-b80f-efe5fb1b2239,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a28341-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.423 2 DEBUG os_vif [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=a0a28341-736a-4a2d-b80f-efe5fb1b2239,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a28341-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.424 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.424 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.427 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0a28341-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.427 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0a28341-73, col_values=(('external_ids', {'iface-id': 'a0a28341-736a-4a2d-b80f-efe5fb1b2239', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:7f:92', 'vm-uuid': 'f2511cad-2f04-468f-99b3-f7302be124b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:52 np0005466031 NetworkManager[44907]: <info>  [1759410832.4296] manager: (tapa0a28341-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.435 2 INFO os_vif [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=a0a28341-736a-4a2d-b80f-efe5fb1b2239,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a28341-73')#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.532 2 DEBUG nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.532 2 DEBUG nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.532 2 DEBUG nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No VIF found with MAC fa:16:3e:5d:7f:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.533 2 INFO nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Using config drive#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.560 2 DEBUG nova.storage.rbd_utils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] rbd image f2511cad-2f04-468f-99b3-f7302be124b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.566 2 DEBUG nova.network.neutron [req-c5d93da1-92d2-4ea8-8ee2-4e8ae3ef6cf4 req-b348dbd8-2f8a-42f7-a321-b310f18adae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Updated VIF entry in instance network info cache for port a0a28341-736a-4a2d-b80f-efe5fb1b2239. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.567 2 DEBUG nova.network.neutron [req-c5d93da1-92d2-4ea8-8ee2-4e8ae3ef6cf4 req-b348dbd8-2f8a-42f7-a321-b310f18adae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Updating instance_info_cache with network_info: [{"id": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "address": "fa:16:3e:5d:7f:92", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a28341-73", "ovs_interfaceid": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.598 2 DEBUG oslo_concurrency.lockutils [req-c5d93da1-92d2-4ea8-8ee2-4e8ae3ef6cf4 req-b348dbd8-2f8a-42f7-a321-b310f18adae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:13:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:13:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:52.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.947 2 INFO nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Creating config drive at /var/lib/nova/instances/f2511cad-2f04-468f-99b3-f7302be124b8/disk.config#033[00m
Oct  2 09:13:52 np0005466031 nova_compute[235803]: 2025-10-02 13:13:52.953 2 DEBUG oslo_concurrency.processutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f2511cad-2f04-468f-99b3-f7302be124b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqfmptr2b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:53 np0005466031 nova_compute[235803]: 2025-10-02 13:13:53.088 2 DEBUG oslo_concurrency.processutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f2511cad-2f04-468f-99b3-f7302be124b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqfmptr2b" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:53 np0005466031 nova_compute[235803]: 2025-10-02 13:13:53.126 2 DEBUG nova.storage.rbd_utils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] rbd image f2511cad-2f04-468f-99b3-f7302be124b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:13:53 np0005466031 nova_compute[235803]: 2025-10-02 13:13:53.132 2 DEBUG oslo_concurrency.processutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f2511cad-2f04-468f-99b3-f7302be124b8/disk.config f2511cad-2f04-468f-99b3-f7302be124b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:53.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:53 np0005466031 nova_compute[235803]: 2025-10-02 13:13:53.581 2 DEBUG oslo_concurrency.processutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f2511cad-2f04-468f-99b3-f7302be124b8/disk.config f2511cad-2f04-468f-99b3-f7302be124b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:53 np0005466031 nova_compute[235803]: 2025-10-02 13:13:53.582 2 INFO nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Deleting local config drive /var/lib/nova/instances/f2511cad-2f04-468f-99b3-f7302be124b8/disk.config because it was imported into RBD.#033[00m
Oct  2 09:13:53 np0005466031 kernel: tapa0a28341-73: entered promiscuous mode
Oct  2 09:13:53 np0005466031 ovn_controller[132413]: 2025-10-02T13:13:53Z|00826|binding|INFO|Claiming lport a0a28341-736a-4a2d-b80f-efe5fb1b2239 for this chassis.
Oct  2 09:13:53 np0005466031 ovn_controller[132413]: 2025-10-02T13:13:53Z|00827|binding|INFO|a0a28341-736a-4a2d-b80f-efe5fb1b2239: Claiming fa:16:3e:5d:7f:92 10.100.0.6
Oct  2 09:13:53 np0005466031 NetworkManager[44907]: <info>  [1759410833.6458] manager: (tapa0a28341-73): new Tun device (/org/freedesktop/NetworkManager/Devices/368)
Oct  2 09:13:53 np0005466031 nova_compute[235803]: 2025-10-02 13:13:53.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:53 np0005466031 nova_compute[235803]: 2025-10-02 13:13:53.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:53 np0005466031 nova_compute[235803]: 2025-10-02 13:13:53.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:53 np0005466031 nova_compute[235803]: 2025-10-02 13:13:53.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:53 np0005466031 NetworkManager[44907]: <info>  [1759410833.6693] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Oct  2 09:13:53 np0005466031 NetworkManager[44907]: <info>  [1759410833.6700] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Oct  2 09:13:53 np0005466031 nova_compute[235803]: 2025-10-02 13:13:53.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.673 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:7f:92 10.100.0.6'], port_security=['fa:16:3e:5d:7f:92 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f2511cad-2f04-468f-99b3-f7302be124b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-059f5861-22ab-45f3-a914-fb801f3c71f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbaefa5c700c4ed495a5244732eed7e3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '858bdb72-cf27-4a78-a9f7-c4548894dc59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a904a34-19d9-4790-850b-39af4c509e92, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=a0a28341-736a-4a2d-b80f-efe5fb1b2239) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.675 141898 INFO neutron.agent.ovn.metadata.agent [-] Port a0a28341-736a-4a2d-b80f-efe5fb1b2239 in datapath 059f5861-22ab-45f3-a914-fb801f3c71f9 bound to our chassis#033[00m
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.676 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 059f5861-22ab-45f3-a914-fb801f3c71f9#033[00m
Oct  2 09:13:53 np0005466031 systemd-machined[192227]: New machine qemu-95-instance-000000cf.
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.694 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6edca839-3fff-4ec7-ac73-2e03d4b57176]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.695 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap059f5861-21 in ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.697 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap059f5861-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.698 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7052f982-73ac-43da-b56e-6e7faeedbf07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.699 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5faedb93-f4df-41bf-be24-7de37f627dba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.715 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[9674f1df-2757-4db4-9228-c10b03098221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:53 np0005466031 systemd[1]: Started Virtual Machine qemu-95-instance-000000cf.
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.739 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[57e042bf-a6ed-430b-b8af-d28ef4fe937c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:53 np0005466031 systemd-udevd[326168]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:13:53 np0005466031 NetworkManager[44907]: <info>  [1759410833.7675] device (tapa0a28341-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:13:53 np0005466031 NetworkManager[44907]: <info>  [1759410833.7683] device (tapa0a28341-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.776 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[c308e864-5525-4475-9989-c6577b1e43f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:53 np0005466031 NetworkManager[44907]: <info>  [1759410833.7888] manager: (tap059f5861-20): new Veth device (/org/freedesktop/NetworkManager/Devices/371)
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.787 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[be223db5-4c0d-48d0-9c99-f65ea8dfaed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.831 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d096ca-f346-4e88-b884-32bc24e825be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.836 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[55daf176-2680-4048-b761-cbb8ae3354e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:53 np0005466031 NetworkManager[44907]: <info>  [1759410833.8651] device (tap059f5861-20): carrier: link connected
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.874 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[24454609-3307-4630-9a43-a55383a010c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.900 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8502e6d8-edb6-4843-b9e7-36cc119034d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap059f5861-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:e7:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 868945, 'reachable_time': 15307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326198, 'error': None, 'target': 'ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.924 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6539a911-f90e-4f68-a140-aed46dd0f0a3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:e72a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 868945, 'tstamp': 868945}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326199, 'error': None, 'target': 'ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:53 np0005466031 nova_compute[235803]: 2025-10-02 13:13:53.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:53.947 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e745decc-84df-47c4-a0df-d2ffb7297f94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap059f5861-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:e7:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 868945, 'reachable_time': 15307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326200, 'error': None, 'target': 'ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:53 np0005466031 nova_compute[235803]: 2025-10-02 13:13:53.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:53 np0005466031 ovn_controller[132413]: 2025-10-02T13:13:53Z|00828|binding|INFO|Setting lport a0a28341-736a-4a2d-b80f-efe5fb1b2239 ovn-installed in OVS
Oct  2 09:13:53 np0005466031 ovn_controller[132413]: 2025-10-02T13:13:53Z|00829|binding|INFO|Setting lport a0a28341-736a-4a2d-b80f-efe5fb1b2239 up in Southbound
Oct  2 09:13:53 np0005466031 nova_compute[235803]: 2025-10-02 13:13:53.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:54.000 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[22cf5222-5e2f-4c35-8d17-b2f9a36a3603]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:54.081 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[81d0f581-6f19-46dd-b10a-293724395917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:54.083 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap059f5861-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:54.084 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:54.084 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap059f5861-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:54 np0005466031 kernel: tap059f5861-20: entered promiscuous mode
Oct  2 09:13:54 np0005466031 NetworkManager[44907]: <info>  [1759410834.0869] manager: (tap059f5861-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:54.092 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap059f5861-20, col_values=(('external_ids', {'iface-id': 'd7b1128a-bc65-448f-ac61-7bb6414ffd02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:54 np0005466031 ovn_controller[132413]: 2025-10-02T13:13:54Z|00830|binding|INFO|Releasing lport d7b1128a-bc65-448f-ac61-7bb6414ffd02 from this chassis (sb_readonly=0)
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:54.097 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/059f5861-22ab-45f3-a914-fb801f3c71f9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/059f5861-22ab-45f3-a914-fb801f3c71f9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:54.098 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ac164f3f-a622-4ecc-a68e-5709d53f147b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:54.099 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-059f5861-22ab-45f3-a914-fb801f3c71f9
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/059f5861-22ab-45f3-a914-fb801f3c71f9.pid.haproxy
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 059f5861-22ab-45f3-a914-fb801f3c71f9
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:54.100 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9', 'env', 'PROCESS_TAG=haproxy-059f5861-22ab-45f3-a914-fb801f3c71f9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/059f5861-22ab-45f3-a914-fb801f3c71f9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:54 np0005466031 podman[326275]: 2025-10-02 13:13:54.53777744 +0000 UTC m=+0.116004812 container create 3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:13:54 np0005466031 podman[326275]: 2025-10-02 13:13:54.447777108 +0000 UTC m=+0.026004500 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:13:54 np0005466031 systemd[1]: Started libpod-conmon-3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327.scope.
Oct  2 09:13:54 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:13:54 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a5c5b0a7f5145e8f466484564af84b9225d422d4f5bab357cd9e2cb7990f9c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:13:54 np0005466031 podman[326275]: 2025-10-02 13:13:54.667718611 +0000 UTC m=+0.245946013 container init 3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.668 2 DEBUG nova.compute.manager [req-d9320e04-651c-492c-b856-02585ad86ec7 req-b2b54115-27bd-437d-9779-5066b278392b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Received event network-vif-plugged-a0a28341-736a-4a2d-b80f-efe5fb1b2239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.668 2 DEBUG oslo_concurrency.lockutils [req-d9320e04-651c-492c-b856-02585ad86ec7 req-b2b54115-27bd-437d-9779-5066b278392b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.669 2 DEBUG oslo_concurrency.lockutils [req-d9320e04-651c-492c-b856-02585ad86ec7 req-b2b54115-27bd-437d-9779-5066b278392b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.669 2 DEBUG oslo_concurrency.lockutils [req-d9320e04-651c-492c-b856-02585ad86ec7 req-b2b54115-27bd-437d-9779-5066b278392b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.669 2 DEBUG nova.compute.manager [req-d9320e04-651c-492c-b856-02585ad86ec7 req-b2b54115-27bd-437d-9779-5066b278392b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Processing event network-vif-plugged-a0a28341-736a-4a2d-b80f-efe5fb1b2239 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:13:54 np0005466031 podman[326275]: 2025-10-02 13:13:54.674232099 +0000 UTC m=+0.252459471 container start 3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:13:54 np0005466031 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[326290]: [NOTICE]   (326294) : New worker (326296) forked
Oct  2 09:13:54 np0005466031 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[326290]: [NOTICE]   (326294) : Loading success.
Oct  2 09:13:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:54.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:54.772 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:13:54 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:54.774 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.858 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410834.8580806, f2511cad-2f04-468f-99b3-f7302be124b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.859 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] VM Started (Lifecycle Event)#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.861 2 DEBUG nova.compute.manager [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.865 2 DEBUG nova.virt.libvirt.driver [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.868 2 INFO nova.virt.libvirt.driver [-] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Instance spawned successfully.#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.868 2 INFO nova.compute.manager [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Took 6.92 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.868 2 DEBUG nova.compute.manager [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.900 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.904 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.933 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.934 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410834.859229, f2511cad-2f04-468f-99b3-f7302be124b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.934 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.950 2 INFO nova.compute.manager [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Took 7.98 seconds to build instance.#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.952 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.957 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410834.8643408, f2511cad-2f04-468f-99b3-f7302be124b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.958 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.973 2 DEBUG oslo_concurrency.lockutils [None req-036a2a0d-c900-4f2d-beae-340509e7e569 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.982 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:13:54 np0005466031 nova_compute[235803]: 2025-10-02 13:13:54.986 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:13:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:13:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:55.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:13:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:56 np0005466031 nova_compute[235803]: 2025-10-02 13:13:56.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:56.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:56 np0005466031 nova_compute[235803]: 2025-10-02 13:13:56.770 2 DEBUG nova.compute.manager [req-e65bf0ac-8192-4ec8-8ee3-7b4cd7dccff8 req-c394cc14-80ef-42d5-825f-910950f67f2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Received event network-vif-plugged-a0a28341-736a-4a2d-b80f-efe5fb1b2239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:56 np0005466031 nova_compute[235803]: 2025-10-02 13:13:56.771 2 DEBUG oslo_concurrency.lockutils [req-e65bf0ac-8192-4ec8-8ee3-7b4cd7dccff8 req-c394cc14-80ef-42d5-825f-910950f67f2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:56 np0005466031 nova_compute[235803]: 2025-10-02 13:13:56.772 2 DEBUG oslo_concurrency.lockutils [req-e65bf0ac-8192-4ec8-8ee3-7b4cd7dccff8 req-c394cc14-80ef-42d5-825f-910950f67f2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:56 np0005466031 nova_compute[235803]: 2025-10-02 13:13:56.772 2 DEBUG oslo_concurrency.lockutils [req-e65bf0ac-8192-4ec8-8ee3-7b4cd7dccff8 req-c394cc14-80ef-42d5-825f-910950f67f2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:56 np0005466031 nova_compute[235803]: 2025-10-02 13:13:56.772 2 DEBUG nova.compute.manager [req-e65bf0ac-8192-4ec8-8ee3-7b4cd7dccff8 req-c394cc14-80ef-42d5-825f-910950f67f2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] No waiting events found dispatching network-vif-plugged-a0a28341-736a-4a2d-b80f-efe5fb1b2239 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:13:56 np0005466031 nova_compute[235803]: 2025-10-02 13:13:56.773 2 WARNING nova.compute.manager [req-e65bf0ac-8192-4ec8-8ee3-7b4cd7dccff8 req-c394cc14-80ef-42d5-825f-910950f67f2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Received unexpected event network-vif-plugged-a0a28341-736a-4a2d-b80f-efe5fb1b2239 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:13:56 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:13:56.776 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:13:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:57.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:13:57 np0005466031 nova_compute[235803]: 2025-10-02 13:13:57.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:13:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:58.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:13:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:13:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:59.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:59 np0005466031 nova_compute[235803]: 2025-10-02 13:13:59.536 2 DEBUG nova.compute.manager [req-57250bd8-7780-40e1-9fa5-b7635fcd9f2f req-a69bb4af-90e2-44c8-bb7a-5e300d8fe1f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Received event network-changed-a0a28341-736a-4a2d-b80f-efe5fb1b2239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:59 np0005466031 nova_compute[235803]: 2025-10-02 13:13:59.537 2 DEBUG nova.compute.manager [req-57250bd8-7780-40e1-9fa5-b7635fcd9f2f req-a69bb4af-90e2-44c8-bb7a-5e300d8fe1f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Refreshing instance network info cache due to event network-changed-a0a28341-736a-4a2d-b80f-efe5fb1b2239. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:13:59 np0005466031 nova_compute[235803]: 2025-10-02 13:13:59.537 2 DEBUG oslo_concurrency.lockutils [req-57250bd8-7780-40e1-9fa5-b7635fcd9f2f req-a69bb4af-90e2-44c8-bb7a-5e300d8fe1f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:13:59 np0005466031 nova_compute[235803]: 2025-10-02 13:13:59.538 2 DEBUG oslo_concurrency.lockutils [req-57250bd8-7780-40e1-9fa5-b7635fcd9f2f req-a69bb4af-90e2-44c8-bb7a-5e300d8fe1f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:13:59 np0005466031 nova_compute[235803]: 2025-10-02 13:13:59.539 2 DEBUG nova.network.neutron [req-57250bd8-7780-40e1-9fa5-b7635fcd9f2f req-a69bb4af-90e2-44c8-bb7a-5e300d8fe1f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Refreshing network info cache for port a0a28341-736a-4a2d-b80f-efe5fb1b2239 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:13:59 np0005466031 podman[326357]: 2025-10-02 13:13:59.647585202 +0000 UTC m=+0.069213394 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:13:59 np0005466031 podman[326358]: 2025-10-02 13:13:59.6523893 +0000 UTC m=+0.071999214 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:14:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:00.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e382 e382: 3 total, 3 up, 3 in
Oct  2 09:14:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:01.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:01 np0005466031 nova_compute[235803]: 2025-10-02 13:14:01.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:01 np0005466031 nova_compute[235803]: 2025-10-02 13:14:01.419 2 DEBUG nova.network.neutron [req-57250bd8-7780-40e1-9fa5-b7635fcd9f2f req-a69bb4af-90e2-44c8-bb7a-5e300d8fe1f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Updated VIF entry in instance network info cache for port a0a28341-736a-4a2d-b80f-efe5fb1b2239. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:14:01 np0005466031 nova_compute[235803]: 2025-10-02 13:14:01.420 2 DEBUG nova.network.neutron [req-57250bd8-7780-40e1-9fa5-b7635fcd9f2f req-a69bb4af-90e2-44c8-bb7a-5e300d8fe1f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Updating instance_info_cache with network_info: [{"id": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "address": "fa:16:3e:5d:7f:92", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a28341-73", "ovs_interfaceid": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:14:01 np0005466031 nova_compute[235803]: 2025-10-02 13:14:01.450 2 DEBUG oslo_concurrency.lockutils [req-57250bd8-7780-40e1-9fa5-b7635fcd9f2f req-a69bb4af-90e2-44c8-bb7a-5e300d8fe1f6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:14:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e383 e383: 3 total, 3 up, 3 in
Oct  2 09:14:02 np0005466031 nova_compute[235803]: 2025-10-02 13:14:02.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:02.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e384 e384: 3 total, 3 up, 3 in
Oct  2 09:14:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:03.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e385 e385: 3 total, 3 up, 3 in
Oct  2 09:14:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:04.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:14:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:05.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:14:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:06 np0005466031 nova_compute[235803]: 2025-10-02 13:14:06.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:14:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:06.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:14:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:07.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:07 np0005466031 nova_compute[235803]: 2025-10-02 13:14:07.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:08.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e386 e386: 3 total, 3 up, 3 in
Oct  2 09:14:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:09.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:10 np0005466031 ovn_controller[132413]: 2025-10-02T13:14:10Z|00094|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.6
Oct  2 09:14:10 np0005466031 ovn_controller[132413]: 2025-10-02T13:14:10Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:5d:7f:92 10.100.0.6
Oct  2 09:14:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:10.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e387 e387: 3 total, 3 up, 3 in
Oct  2 09:14:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:11.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:11 np0005466031 nova_compute[235803]: 2025-10-02 13:14:11.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:12 np0005466031 nova_compute[235803]: 2025-10-02 13:14:12.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:12.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:13.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:14 np0005466031 ovn_controller[132413]: 2025-10-02T13:14:14Z|00096|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.6
Oct  2 09:14:14 np0005466031 ovn_controller[132413]: 2025-10-02T13:14:14Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:5d:7f:92 10.100.0.6
Oct  2 09:14:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:14:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:14.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:14:15 np0005466031 ovn_controller[132413]: 2025-10-02T13:14:15Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:7f:92 10.100.0.6
Oct  2 09:14:15 np0005466031 ovn_controller[132413]: 2025-10-02T13:14:15Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:7f:92 10.100.0.6
Oct  2 09:14:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:15.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:16 np0005466031 nova_compute[235803]: 2025-10-02 13:14:16.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:16 np0005466031 nova_compute[235803]: 2025-10-02 13:14:16.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:16.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:17.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:17 np0005466031 nova_compute[235803]: 2025-10-02 13:14:17.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:18.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:19.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:20 np0005466031 nova_compute[235803]: 2025-10-02 13:14:20.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:14:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:20.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:14:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:21.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:14:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:14:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:14:21 np0005466031 nova_compute[235803]: 2025-10-02 13:14:21.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:22 np0005466031 nova_compute[235803]: 2025-10-02 13:14:22.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:22 np0005466031 podman[326587]: 2025-10-02 13:14:22.630227761 +0000 UTC m=+0.056268951 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:14:22 np0005466031 podman[326588]: 2025-10-02 13:14:22.654587633 +0000 UTC m=+0.081877049 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:14:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:22.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:23.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:24 np0005466031 nova_compute[235803]: 2025-10-02 13:14:24.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:14:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:14:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:25.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:25 np0005466031 nova_compute[235803]: 2025-10-02 13:14:25.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:25 np0005466031 nova_compute[235803]: 2025-10-02 13:14:25.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:14:25 np0005466031 nova_compute[235803]: 2025-10-02 13:14:25.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:14:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:25.882 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:25.882 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:25.883 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:26 np0005466031 nova_compute[235803]: 2025-10-02 13:14:26.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:26.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:14:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:27.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:14:27 np0005466031 nova_compute[235803]: 2025-10-02 13:14:27.355 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:14:27 np0005466031 nova_compute[235803]: 2025-10-02 13:14:27.356 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:14:27 np0005466031 nova_compute[235803]: 2025-10-02 13:14:27.356 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:14:27 np0005466031 nova_compute[235803]: 2025-10-02 13:14:27.356 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f2511cad-2f04-468f-99b3-f7302be124b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:27 np0005466031 nova_compute[235803]: 2025-10-02 13:14:27.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:28.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:29.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:30 np0005466031 podman[326636]: 2025-10-02 13:14:30.621319662 +0000 UTC m=+0.052370769 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3)
Oct  2 09:14:30 np0005466031 podman[326635]: 2025-10-02 13:14:30.62230034 +0000 UTC m=+0.056407545 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2)
Oct  2 09:14:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:14:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:30.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:14:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:31.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.373 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Updating instance_info_cache with network_info: [{"id": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "address": "fa:16:3e:5d:7f:92", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a28341-73", "ovs_interfaceid": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.397 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.398 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.398 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.398 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.398 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.425 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.426 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.426 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.426 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.427 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:14:31 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/420625770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.878 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.964 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:14:31 np0005466031 nova_compute[235803]: 2025-10-02 13:14:31.964 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.090 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.091 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3939MB free_disk=20.888511657714844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.091 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.091 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.245 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance f2511cad-2f04-468f-99b3-f7302be124b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.246 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.246 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.268 2 DEBUG oslo_concurrency.lockutils [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "f2511cad-2f04-468f-99b3-f7302be124b8" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.268 2 DEBUG oslo_concurrency.lockutils [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.287 2 DEBUG nova.objects.instance [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lazy-loading 'flavor' on Instance uuid f2511cad-2f04-468f-99b3-f7302be124b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.342 2 DEBUG oslo_concurrency.lockutils [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.401 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.680 2 DEBUG oslo_concurrency.lockutils [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "f2511cad-2f04-468f-99b3-f7302be124b8" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.680 2 DEBUG oslo_concurrency.lockutils [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.681 2 INFO nova.compute.manager [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Attaching volume 426f9568-783a-4516-9da0-22cf58dd8632 to /dev/vdb#033[00m
Oct  2 09:14:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:14:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:32.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:14:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:14:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3782613241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.842 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.848 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.871 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.925 2 DEBUG os_brick.utils [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.926 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.928 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.928 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.937 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.937 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[a98bfdf8-169a-4189-a8f4-cf572a7292eb]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.939 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.946 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.946 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[6295a51f-c8f1-4f9c-9db1-3b94ae2c9a16]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.948 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.956 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.956 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[df9869da-9540-429a-b590-69eba74eaa0f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.957 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6243ad-9e82-406e-a39e-33533981291e]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.958 2 DEBUG oslo_concurrency.processutils [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.989 2 DEBUG oslo_concurrency.processutils [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.991 2 DEBUG os_brick.initiator.connectors.lightos [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.992 2 DEBUG os_brick.initiator.connectors.lightos [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.992 2 DEBUG os_brick.initiator.connectors.lightos [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.992 2 DEBUG os_brick.utils [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:14:32 np0005466031 nova_compute[235803]: 2025-10-02 13:14:32.993 2 DEBUG nova.virt.block_device [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Updating existing volume attachment record: 509bf0b6-9f7f-4f97-a294-b13da3245f0f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:14:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:33.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:33 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:14:34 np0005466031 nova_compute[235803]: 2025-10-02 13:14:34.132 2 DEBUG nova.objects.instance [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lazy-loading 'flavor' on Instance uuid f2511cad-2f04-468f-99b3-f7302be124b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:34 np0005466031 nova_compute[235803]: 2025-10-02 13:14:34.156 2 DEBUG nova.virt.libvirt.driver [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Attempting to attach volume 426f9568-783a-4516-9da0-22cf58dd8632 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 09:14:34 np0005466031 nova_compute[235803]: 2025-10-02 13:14:34.158 2 DEBUG nova.virt.libvirt.guest [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 09:14:34 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:14:34 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-426f9568-783a-4516-9da0-22cf58dd8632">
Oct  2 09:14:34 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:14:34 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:14:34 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:14:34 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:14:34 np0005466031 nova_compute[235803]:  <auth username="openstack">
Oct  2 09:14:34 np0005466031 nova_compute[235803]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:14:34 np0005466031 nova_compute[235803]:  </auth>
Oct  2 09:14:34 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:14:34 np0005466031 nova_compute[235803]:  <serial>426f9568-783a-4516-9da0-22cf58dd8632</serial>
Oct  2 09:14:34 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:14:34 np0005466031 nova_compute[235803]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 09:14:34 np0005466031 nova_compute[235803]: 2025-10-02 13:14:34.295 2 DEBUG nova.virt.libvirt.driver [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:14:34 np0005466031 nova_compute[235803]: 2025-10-02 13:14:34.295 2 DEBUG nova.virt.libvirt.driver [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:14:34 np0005466031 nova_compute[235803]: 2025-10-02 13:14:34.295 2 DEBUG nova.virt.libvirt.driver [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:14:34 np0005466031 nova_compute[235803]: 2025-10-02 13:14:34.296 2 DEBUG nova.virt.libvirt.driver [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] No VIF found with MAC fa:16:3e:5d:7f:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:14:34 np0005466031 nova_compute[235803]: 2025-10-02 13:14:34.515 2 DEBUG oslo_concurrency.lockutils [None req-e3a9e29a-eb4d-41bf-a070-a7e4d41de188 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:14:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:34.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:35 np0005466031 nova_compute[235803]: 2025-10-02 13:14:35.166 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:35 np0005466031 nova_compute[235803]: 2025-10-02 13:14:35.167 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:14:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:35.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:35 np0005466031 nova_compute[235803]: 2025-10-02 13:14:35.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:36 np0005466031 nova_compute[235803]: 2025-10-02 13:14:36.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:36.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:14:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:37.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:14:37 np0005466031 nova_compute[235803]: 2025-10-02 13:14:37.413 2 DEBUG oslo_concurrency.lockutils [None req-03277557-2dbd-4cda-8265-07bbf08b9702 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "f2511cad-2f04-468f-99b3-f7302be124b8" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:37 np0005466031 nova_compute[235803]: 2025-10-02 13:14:37.414 2 DEBUG oslo_concurrency.lockutils [None req-03277557-2dbd-4cda-8265-07bbf08b9702 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:37 np0005466031 nova_compute[235803]: 2025-10-02 13:14:37.433 2 INFO nova.compute.manager [None req-03277557-2dbd-4cda-8265-07bbf08b9702 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Detaching volume 426f9568-783a-4516-9da0-22cf58dd8632#033[00m
Oct  2 09:14:37 np0005466031 nova_compute[235803]: 2025-10-02 13:14:37.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:37 np0005466031 nova_compute[235803]: 2025-10-02 13:14:37.701 2 INFO nova.virt.block_device [None req-03277557-2dbd-4cda-8265-07bbf08b9702 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Attempting to driver detach volume 426f9568-783a-4516-9da0-22cf58dd8632 from mountpoint /dev/vdb#033[00m
Oct  2 09:14:37 np0005466031 nova_compute[235803]: 2025-10-02 13:14:37.710 2 DEBUG nova.virt.libvirt.driver [None req-03277557-2dbd-4cda-8265-07bbf08b9702 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Attempting to detach device vdb from instance f2511cad-2f04-468f-99b3-f7302be124b8 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 09:14:37 np0005466031 nova_compute[235803]: 2025-10-02 13:14:37.711 2 DEBUG nova.virt.libvirt.guest [None req-03277557-2dbd-4cda-8265-07bbf08b9702 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:14:37 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-426f9568-783a-4516-9da0-22cf58dd8632">
Oct  2 09:14:37 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:  <serial>426f9568-783a-4516-9da0-22cf58dd8632</serial>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:14:37 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:14:37 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:14:37 np0005466031 nova_compute[235803]: 2025-10-02 13:14:37.719 2 INFO nova.virt.libvirt.driver [None req-03277557-2dbd-4cda-8265-07bbf08b9702 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Successfully detached device vdb from instance f2511cad-2f04-468f-99b3-f7302be124b8 from the persistent domain config.#033[00m
Oct  2 09:14:37 np0005466031 nova_compute[235803]: 2025-10-02 13:14:37.720 2 DEBUG nova.virt.libvirt.driver [None req-03277557-2dbd-4cda-8265-07bbf08b9702 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f2511cad-2f04-468f-99b3-f7302be124b8 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 09:14:37 np0005466031 nova_compute[235803]: 2025-10-02 13:14:37.720 2 DEBUG nova.virt.libvirt.guest [None req-03277557-2dbd-4cda-8265-07bbf08b9702 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:14:37 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-426f9568-783a-4516-9da0-22cf58dd8632">
Oct  2 09:14:37 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:  <serial>426f9568-783a-4516-9da0-22cf58dd8632</serial>
Oct  2 09:14:37 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:14:37 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:14:37 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:14:37 np0005466031 nova_compute[235803]: 2025-10-02 13:14:37.911 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Received event <DeviceRemovedEvent: 1759410877.9106364, f2511cad-2f04-468f-99b3-f7302be124b8 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 09:14:37 np0005466031 nova_compute[235803]: 2025-10-02 13:14:37.914 2 DEBUG nova.virt.libvirt.driver [None req-03277557-2dbd-4cda-8265-07bbf08b9702 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f2511cad-2f04-468f-99b3-f7302be124b8 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 09:14:37 np0005466031 nova_compute[235803]: 2025-10-02 13:14:37.917 2 INFO nova.virt.libvirt.driver [None req-03277557-2dbd-4cda-8265-07bbf08b9702 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Successfully detached device vdb from instance f2511cad-2f04-468f-99b3-f7302be124b8 from the live domain config.#033[00m
Oct  2 09:14:38 np0005466031 nova_compute[235803]: 2025-10-02 13:14:38.153 2 DEBUG nova.objects.instance [None req-03277557-2dbd-4cda-8265-07bbf08b9702 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lazy-loading 'flavor' on Instance uuid f2511cad-2f04-468f-99b3-f7302be124b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:38 np0005466031 nova_compute[235803]: 2025-10-02 13:14:38.220 2 DEBUG oslo_concurrency.lockutils [None req-03277557-2dbd-4cda-8265-07bbf08b9702 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:14:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:38.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:14:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:39.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:39 np0005466031 nova_compute[235803]: 2025-10-02 13:14:39.563 2 DEBUG nova.compute.manager [req-3466566f-1792-4548-9bd8-49a3aa6d142f req-55121b58-f14f-4337-a448-6aae779e3ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Received event network-changed-a0a28341-736a-4a2d-b80f-efe5fb1b2239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:39 np0005466031 nova_compute[235803]: 2025-10-02 13:14:39.564 2 DEBUG nova.compute.manager [req-3466566f-1792-4548-9bd8-49a3aa6d142f req-55121b58-f14f-4337-a448-6aae779e3ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Refreshing instance network info cache due to event network-changed-a0a28341-736a-4a2d-b80f-efe5fb1b2239. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:14:39 np0005466031 nova_compute[235803]: 2025-10-02 13:14:39.564 2 DEBUG oslo_concurrency.lockutils [req-3466566f-1792-4548-9bd8-49a3aa6d142f req-55121b58-f14f-4337-a448-6aae779e3ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:14:39 np0005466031 nova_compute[235803]: 2025-10-02 13:14:39.564 2 DEBUG oslo_concurrency.lockutils [req-3466566f-1792-4548-9bd8-49a3aa6d142f req-55121b58-f14f-4337-a448-6aae779e3ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:14:39 np0005466031 nova_compute[235803]: 2025-10-02 13:14:39.564 2 DEBUG nova.network.neutron [req-3466566f-1792-4548-9bd8-49a3aa6d142f req-55121b58-f14f-4337-a448-6aae779e3ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Refreshing network info cache for port a0a28341-736a-4a2d-b80f-efe5fb1b2239 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:14:39 np0005466031 nova_compute[235803]: 2025-10-02 13:14:39.638 2 DEBUG oslo_concurrency.lockutils [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "f2511cad-2f04-468f-99b3-f7302be124b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:39 np0005466031 nova_compute[235803]: 2025-10-02 13:14:39.638 2 DEBUG oslo_concurrency.lockutils [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:39 np0005466031 nova_compute[235803]: 2025-10-02 13:14:39.639 2 DEBUG oslo_concurrency.lockutils [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:39 np0005466031 nova_compute[235803]: 2025-10-02 13:14:39.639 2 DEBUG oslo_concurrency.lockutils [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:39 np0005466031 nova_compute[235803]: 2025-10-02 13:14:39.639 2 DEBUG oslo_concurrency.lockutils [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:39 np0005466031 nova_compute[235803]: 2025-10-02 13:14:39.641 2 INFO nova.compute.manager [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Terminating instance#033[00m
Oct  2 09:14:39 np0005466031 nova_compute[235803]: 2025-10-02 13:14:39.643 2 DEBUG nova.compute.manager [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:14:40 np0005466031 kernel: tapa0a28341-73 (unregistering): left promiscuous mode
Oct  2 09:14:40 np0005466031 NetworkManager[44907]: <info>  [1759410880.1584] device (tapa0a28341-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005466031 ovn_controller[132413]: 2025-10-02T13:14:40Z|00831|binding|INFO|Releasing lport a0a28341-736a-4a2d-b80f-efe5fb1b2239 from this chassis (sb_readonly=0)
Oct  2 09:14:40 np0005466031 ovn_controller[132413]: 2025-10-02T13:14:40Z|00832|binding|INFO|Setting lport a0a28341-736a-4a2d-b80f-efe5fb1b2239 down in Southbound
Oct  2 09:14:40 np0005466031 ovn_controller[132413]: 2025-10-02T13:14:40Z|00833|binding|INFO|Removing iface tapa0a28341-73 ovn-installed in OVS
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.177 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:7f:92 10.100.0.6'], port_security=['fa:16:3e:5d:7f:92 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f2511cad-2f04-468f-99b3-f7302be124b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-059f5861-22ab-45f3-a914-fb801f3c71f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbaefa5c700c4ed495a5244732eed7e3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '858bdb72-cf27-4a78-a9f7-c4548894dc59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a904a34-19d9-4790-850b-39af4c509e92, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=a0a28341-736a-4a2d-b80f-efe5fb1b2239) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.178 141898 INFO neutron.agent.ovn.metadata.agent [-] Port a0a28341-736a-4a2d-b80f-efe5fb1b2239 in datapath 059f5861-22ab-45f3-a914-fb801f3c71f9 unbound from our chassis#033[00m
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.179 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 059f5861-22ab-45f3-a914-fb801f3c71f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.180 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[68589b7b-dc76-4345-97e3-b333da7d5fc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.181 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9 namespace which is not needed anymore#033[00m
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005466031 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000cf.scope: Deactivated successfully.
Oct  2 09:14:40 np0005466031 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000cf.scope: Consumed 16.354s CPU time.
Oct  2 09:14:40 np0005466031 systemd-machined[192227]: Machine qemu-95-instance-000000cf terminated.
Oct  2 09:14:40 np0005466031 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[326290]: [NOTICE]   (326294) : haproxy version is 2.8.14-c23fe91
Oct  2 09:14:40 np0005466031 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[326290]: [NOTICE]   (326294) : path to executable is /usr/sbin/haproxy
Oct  2 09:14:40 np0005466031 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[326290]: [WARNING]  (326294) : Exiting Master process...
Oct  2 09:14:40 np0005466031 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[326290]: [WARNING]  (326294) : Exiting Master process...
Oct  2 09:14:40 np0005466031 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[326290]: [ALERT]    (326294) : Current worker (326296) exited with code 143 (Terminated)
Oct  2 09:14:40 np0005466031 neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9[326290]: [WARNING]  (326294) : All workers exited. Exiting... (0)
Oct  2 09:14:40 np0005466031 systemd[1]: libpod-3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327.scope: Deactivated successfully.
Oct  2 09:14:40 np0005466031 podman[326880]: 2025-10-02 13:14:40.336618779 +0000 UTC m=+0.044334577 container died 3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:14:40 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327-userdata-shm.mount: Deactivated successfully.
Oct  2 09:14:40 np0005466031 systemd[1]: var-lib-containers-storage-overlay-a7a5c5b0a7f5145e8f466484564af84b9225d422d4f5bab357cd9e2cb7990f9c-merged.mount: Deactivated successfully.
Oct  2 09:14:40 np0005466031 podman[326880]: 2025-10-02 13:14:40.370202966 +0000 UTC m=+0.077918734 container cleanup 3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 09:14:40 np0005466031 systemd[1]: libpod-conmon-3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327.scope: Deactivated successfully.
Oct  2 09:14:40 np0005466031 podman[326910]: 2025-10-02 13:14:40.42589339 +0000 UTC m=+0.036177203 container remove 3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.431 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e963dd-07fd-4968-b166-4af78f6f6e53]: (4, ('Thu Oct  2 01:14:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9 (3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327)\n3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327\nThu Oct  2 01:14:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9 (3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327)\n3fb41e4064d049b04b9c6fedb861ebb7cb29caddca1577930ee4ebcb6cba7327\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.433 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[29e12db9-0a95-4d01-bb1c-f3ba3304678e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.434 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap059f5861-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005466031 kernel: tap059f5861-20: left promiscuous mode
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.456 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cc376227-732c-436b-b9e5-530aadd3286a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.487 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a4a303-2e3f-40c7-8901-995bc3ec435d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.489 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7b414b2e-5699-4194-82bb-0fcf4b623ae0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.492 2 INFO nova.virt.libvirt.driver [-] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Instance destroyed successfully.#033[00m
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.493 2 DEBUG nova.objects.instance [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lazy-loading 'resources' on Instance uuid f2511cad-2f04-468f-99b3-f7302be124b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.505 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3b1727f4-c991-4332-aeca-4c0c82e4a97e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 868936, 'reachable_time': 18789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326938, 'error': None, 'target': 'ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005466031 systemd[1]: run-netns-ovnmeta\x2d059f5861\x2d22ab\x2d45f3\x2da914\x2dfb801f3c71f9.mount: Deactivated successfully.
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.509 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-059f5861-22ab-45f3-a914-fb801f3c71f9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:14:40 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:14:40.509 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[ea24b65a-9f1a-4397-853c-9f235f97ebfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.597 2 DEBUG nova.virt.libvirt.vif [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1860760860',display_name='tempest-TestStampPattern-server-1860760860',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-teststamppattern-server-1860760860',id=207,image_ref='23101426-fbb2-4946-b9ce-fdcd0b2d5391',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKUbc0knAwx6AjWLxEzN/Myua8DLnB1wbhcbmQ6eEauumE5/uQW0dSqivGfoQK/c14gwHJVzybj68xv4MB1iOou4+ZOgUXCtWGooPy7in3/oc/+fGSq5+qeVZlJgs3yxeQ==',key_name='tempest-TestStampPattern-2020443839',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:13:54Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cbaefa5c700c4ed495a5244732eed7e3',ramdisk_id='',reservation_id='r-nob92iki',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='97dd449c-87a7-4278-a819-3d412f587a4c',image_min_disk='1',image_min_ram='0',image_owner_id='cbaefa5c700c4ed495a5244732eed7e3',image_owner_project_name='tempest-TestStampPattern-1060565162',image_owner_user_name='tempest-TestStampPattern-1060565162-project-member',image_user_id='81db307ac1f846188ce19b644ebcc396',owner_project_name='tempest-TestStampPattern-1060565162',owner_user_name='tempest-TestStampPattern-1060565162-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:13:54Z,user_data=None,user_id='81db307ac1f846188ce19b644ebcc396',uuid=f2511cad-2f04-468f-99b3-f7302be124b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state
='active') vif={"id": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "address": "fa:16:3e:5d:7f:92", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a28341-73", "ovs_interfaceid": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.598 2 DEBUG nova.network.os_vif_util [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Converting VIF {"id": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "address": "fa:16:3e:5d:7f:92", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a28341-73", "ovs_interfaceid": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.598 2 DEBUG nova.network.os_vif_util [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=a0a28341-736a-4a2d-b80f-efe5fb1b2239,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a28341-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.599 2 DEBUG os_vif [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=a0a28341-736a-4a2d-b80f-efe5fb1b2239,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a28341-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0a28341-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:14:40 np0005466031 nova_compute[235803]: 2025-10-02 13:14:40.607 2 INFO os_vif [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=a0a28341-736a-4a2d-b80f-efe5fb1b2239,network=Network(059f5861-22ab-45f3-a914-fb801f3c71f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a28341-73')#033[00m
Oct  2 09:14:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:40.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.119 2 DEBUG nova.compute.manager [req-e1be960e-8183-42f5-8895-90f2f99d100d req-d8e38c08-892b-4077-921b-d3ef9a56698d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Received event network-vif-unplugged-a0a28341-736a-4a2d-b80f-efe5fb1b2239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.120 2 DEBUG oslo_concurrency.lockutils [req-e1be960e-8183-42f5-8895-90f2f99d100d req-d8e38c08-892b-4077-921b-d3ef9a56698d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.120 2 DEBUG oslo_concurrency.lockutils [req-e1be960e-8183-42f5-8895-90f2f99d100d req-d8e38c08-892b-4077-921b-d3ef9a56698d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.120 2 DEBUG oslo_concurrency.lockutils [req-e1be960e-8183-42f5-8895-90f2f99d100d req-d8e38c08-892b-4077-921b-d3ef9a56698d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.120 2 DEBUG nova.compute.manager [req-e1be960e-8183-42f5-8895-90f2f99d100d req-d8e38c08-892b-4077-921b-d3ef9a56698d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] No waiting events found dispatching network-vif-unplugged-a0a28341-736a-4a2d-b80f-efe5fb1b2239 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.120 2 DEBUG nova.compute.manager [req-e1be960e-8183-42f5-8895-90f2f99d100d req-d8e38c08-892b-4077-921b-d3ef9a56698d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Received event network-vif-unplugged-a0a28341-736a-4a2d-b80f-efe5fb1b2239 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.121 2 DEBUG nova.compute.manager [req-e1be960e-8183-42f5-8895-90f2f99d100d req-d8e38c08-892b-4077-921b-d3ef9a56698d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Received event network-vif-plugged-a0a28341-736a-4a2d-b80f-efe5fb1b2239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.121 2 DEBUG oslo_concurrency.lockutils [req-e1be960e-8183-42f5-8895-90f2f99d100d req-d8e38c08-892b-4077-921b-d3ef9a56698d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.121 2 DEBUG oslo_concurrency.lockutils [req-e1be960e-8183-42f5-8895-90f2f99d100d req-d8e38c08-892b-4077-921b-d3ef9a56698d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.121 2 DEBUG oslo_concurrency.lockutils [req-e1be960e-8183-42f5-8895-90f2f99d100d req-d8e38c08-892b-4077-921b-d3ef9a56698d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.121 2 DEBUG nova.compute.manager [req-e1be960e-8183-42f5-8895-90f2f99d100d req-d8e38c08-892b-4077-921b-d3ef9a56698d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] No waiting events found dispatching network-vif-plugged-a0a28341-736a-4a2d-b80f-efe5fb1b2239 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.122 2 WARNING nova.compute.manager [req-e1be960e-8183-42f5-8895-90f2f99d100d req-d8e38c08-892b-4077-921b-d3ef9a56698d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Received unexpected event network-vif-plugged-a0a28341-736a-4a2d-b80f-efe5fb1b2239 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:14:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:41.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.394 2 DEBUG nova.network.neutron [req-3466566f-1792-4548-9bd8-49a3aa6d142f req-55121b58-f14f-4337-a448-6aae779e3ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Updated VIF entry in instance network info cache for port a0a28341-736a-4a2d-b80f-efe5fb1b2239. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.395 2 DEBUG nova.network.neutron [req-3466566f-1792-4548-9bd8-49a3aa6d142f req-55121b58-f14f-4337-a448-6aae779e3ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Updating instance_info_cache with network_info: [{"id": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "address": "fa:16:3e:5d:7f:92", "network": {"id": "059f5861-22ab-45f3-a914-fb801f3c71f9", "bridge": "br-int", "label": "tempest-TestStampPattern-323103354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbaefa5c700c4ed495a5244732eed7e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a28341-73", "ovs_interfaceid": "a0a28341-736a-4a2d-b80f-efe5fb1b2239", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:14:41 np0005466031 nova_compute[235803]: 2025-10-02 13:14:41.424 2 DEBUG oslo_concurrency.lockutils [req-3466566f-1792-4548-9bd8-49a3aa6d142f req-55121b58-f14f-4337-a448-6aae779e3ec2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f2511cad-2f04-468f-99b3-f7302be124b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:14:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:42.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:43.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:44.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:45.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:45 np0005466031 nova_compute[235803]: 2025-10-02 13:14:45.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:45 np0005466031 nova_compute[235803]: 2025-10-02 13:14:45.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:46 np0005466031 nova_compute[235803]: 2025-10-02 13:14:46.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:46.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:47.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:48.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:14:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:49.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:14:49 np0005466031 nova_compute[235803]: 2025-10-02 13:14:49.443 2 INFO nova.virt.libvirt.driver [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Deleting instance files /var/lib/nova/instances/f2511cad-2f04-468f-99b3-f7302be124b8_del#033[00m
Oct  2 09:14:49 np0005466031 nova_compute[235803]: 2025-10-02 13:14:49.444 2 INFO nova.virt.libvirt.driver [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Deletion of /var/lib/nova/instances/f2511cad-2f04-468f-99b3-f7302be124b8_del complete#033[00m
Oct  2 09:14:49 np0005466031 nova_compute[235803]: 2025-10-02 13:14:49.507 2 INFO nova.compute.manager [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Took 9.86 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:14:49 np0005466031 nova_compute[235803]: 2025-10-02 13:14:49.508 2 DEBUG oslo.service.loopingcall [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:14:49 np0005466031 nova_compute[235803]: 2025-10-02 13:14:49.508 2 DEBUG nova.compute.manager [-] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:14:49 np0005466031 nova_compute[235803]: 2025-10-02 13:14:49.508 2 DEBUG nova.network.neutron [-] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:14:50 np0005466031 nova_compute[235803]: 2025-10-02 13:14:50.423 2 DEBUG nova.network.neutron [-] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:14:50 np0005466031 nova_compute[235803]: 2025-10-02 13:14:50.449 2 INFO nova.compute.manager [-] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Took 0.94 seconds to deallocate network for instance.#033[00m
Oct  2 09:14:50 np0005466031 nova_compute[235803]: 2025-10-02 13:14:50.512 2 DEBUG oslo_concurrency.lockutils [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:50 np0005466031 nova_compute[235803]: 2025-10-02 13:14:50.513 2 DEBUG oslo_concurrency.lockutils [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:50 np0005466031 nova_compute[235803]: 2025-10-02 13:14:50.540 2 DEBUG nova.compute.manager [req-d1534493-f390-4e32-be6a-8ff10fc275e4 req-98a1cc38-e2f9-44fd-84a4-c075fdd9ce6e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Received event network-vif-deleted-a0a28341-736a-4a2d-b80f-efe5fb1b2239 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:50 np0005466031 nova_compute[235803]: 2025-10-02 13:14:50.583 2 DEBUG oslo_concurrency.processutils [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:50 np0005466031 nova_compute[235803]: 2025-10-02 13:14:50.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:50.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:14:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3137431195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:14:51 np0005466031 nova_compute[235803]: 2025-10-02 13:14:51.051 2 DEBUG oslo_concurrency.processutils [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:51 np0005466031 nova_compute[235803]: 2025-10-02 13:14:51.057 2 DEBUG nova.compute.provider_tree [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:14:51 np0005466031 nova_compute[235803]: 2025-10-02 13:14:51.073 2 DEBUG nova.scheduler.client.report [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:14:51 np0005466031 nova_compute[235803]: 2025-10-02 13:14:51.090 2 DEBUG oslo_concurrency.lockutils [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:51 np0005466031 nova_compute[235803]: 2025-10-02 13:14:51.170 2 INFO nova.scheduler.client.report [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Deleted allocations for instance f2511cad-2f04-468f-99b3-f7302be124b8#033[00m
Oct  2 09:14:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:51.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:51 np0005466031 nova_compute[235803]: 2025-10-02 13:14:51.378 2 DEBUG oslo_concurrency.lockutils [None req-6716a446-a602-463f-afc1-de351d19ee00 81db307ac1f846188ce19b644ebcc396 cbaefa5c700c4ed495a5244732eed7e3 - - default default] Lock "f2511cad-2f04-468f-99b3-f7302be124b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:51 np0005466031 nova_compute[235803]: 2025-10-02 13:14:51.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:14:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:52.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:14:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:14:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:53.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:14:53 np0005466031 podman[326987]: 2025-10-02 13:14:53.63540829 +0000 UTC m=+0.060659038 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:14:53 np0005466031 podman[326988]: 2025-10-02 13:14:53.678725777 +0000 UTC m=+0.105435027 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 09:14:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:14:53 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1480746353' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:14:53 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:14:53 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1480746353' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:14:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:14:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:54.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:14:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e388 e388: 3 total, 3 up, 3 in
Oct  2 09:14:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:55.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:55 np0005466031 nova_compute[235803]: 2025-10-02 13:14:55.492 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410880.48947, f2511cad-2f04-468f-99b3-f7302be124b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:14:55 np0005466031 nova_compute[235803]: 2025-10-02 13:14:55.492 2 INFO nova.compute.manager [-] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:14:55 np0005466031 nova_compute[235803]: 2025-10-02 13:14:55.526 2 DEBUG nova.compute.manager [None req-31aecf88-0222-4807-a4b6-c1debdead6c8 - - - - - -] [instance: f2511cad-2f04-468f-99b3-f7302be124b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:55 np0005466031 nova_compute[235803]: 2025-10-02 13:14:55.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:56 np0005466031 nova_compute[235803]: 2025-10-02 13:14:56.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:56.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:14:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:57.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:14:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e389 e389: 3 total, 3 up, 3 in
Oct  2 09:14:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:58.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:14:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:14:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:59.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:15:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:00.405 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:15:00 np0005466031 nova_compute[235803]: 2025-10-02 13:15:00.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:00.406 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:15:00 np0005466031 nova_compute[235803]: 2025-10-02 13:15:00.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:00.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e390 e390: 3 total, 3 up, 3 in
Oct  2 09:15:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:01.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:01 np0005466031 nova_compute[235803]: 2025-10-02 13:15:01.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:01 np0005466031 podman[327089]: 2025-10-02 13:15:01.620628671 +0000 UTC m=+0.046575342 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:15:01 np0005466031 podman[327088]: 2025-10-02 13:15:01.625686546 +0000 UTC m=+0.052753160 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:15:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:02.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:03.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:03.407 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:04 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e391 e391: 3 total, 3 up, 3 in
Oct  2 09:15:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:15:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:04.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:15:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:05.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:05 np0005466031 nova_compute[235803]: 2025-10-02 13:15:05.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e392 e392: 3 total, 3 up, 3 in
Oct  2 09:15:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e393 e393: 3 total, 3 up, 3 in
Oct  2 09:15:06 np0005466031 nova_compute[235803]: 2025-10-02 13:15:06.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:06.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:07.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:08.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:09.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:10 np0005466031 nova_compute[235803]: 2025-10-02 13:15:10.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:10.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:10 np0005466031 nova_compute[235803]: 2025-10-02 13:15:10.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e394 e394: 3 total, 3 up, 3 in
Oct  2 09:15:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:11 np0005466031 nova_compute[235803]: 2025-10-02 13:15:11.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:11.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:11 np0005466031 nova_compute[235803]: 2025-10-02 13:15:11.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:15:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:12.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:15:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:13.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:15:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:14.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:15:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:15.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:15 np0005466031 nova_compute[235803]: 2025-10-02 13:15:15.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 e395: 3 total, 3 up, 3 in
Oct  2 09:15:16 np0005466031 nova_compute[235803]: 2025-10-02 13:15:16.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:16 np0005466031 nova_compute[235803]: 2025-10-02 13:15:16.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:15:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:16.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:15:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:17.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:18.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:19.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:20 np0005466031 nova_compute[235803]: 2025-10-02 13:15:20.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:20 np0005466031 nova_compute[235803]: 2025-10-02 13:15:20.630 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:20.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:15:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:21.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:15:21 np0005466031 nova_compute[235803]: 2025-10-02 13:15:21.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:22.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:23.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:24 np0005466031 podman[327192]: 2025-10-02 13:15:24.637009274 +0000 UTC m=+0.068257717 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:15:24 np0005466031 podman[327193]: 2025-10-02 13:15:24.66745274 +0000 UTC m=+0.095688096 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:15:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:24.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:25.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:25 np0005466031 nova_compute[235803]: 2025-10-02 13:15:25.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:25 np0005466031 nova_compute[235803]: 2025-10-02 13:15:25.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:25 np0005466031 nova_compute[235803]: 2025-10-02 13:15:25.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:15:25 np0005466031 nova_compute[235803]: 2025-10-02 13:15:25.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:15:25 np0005466031 nova_compute[235803]: 2025-10-02 13:15:25.660 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:15:25 np0005466031 nova_compute[235803]: 2025-10-02 13:15:25.661 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:25.883 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:25.884 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:25.884 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:26 np0005466031 nova_compute[235803]: 2025-10-02 13:15:26.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:26.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:27.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:27 np0005466031 nova_compute[235803]: 2025-10-02 13:15:27.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:27 np0005466031 nova_compute[235803]: 2025-10-02 13:15:27.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:27 np0005466031 nova_compute[235803]: 2025-10-02 13:15:27.663 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:27 np0005466031 nova_compute[235803]: 2025-10-02 13:15:27.663 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:27 np0005466031 nova_compute[235803]: 2025-10-02 13:15:27.664 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:27 np0005466031 nova_compute[235803]: 2025-10-02 13:15:27.664 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:15:27 np0005466031 nova_compute[235803]: 2025-10-02 13:15:27.664 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:15:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2661508965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:15:28 np0005466031 nova_compute[235803]: 2025-10-02 13:15:28.113 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:28 np0005466031 nova_compute[235803]: 2025-10-02 13:15:28.276 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:15:28 np0005466031 nova_compute[235803]: 2025-10-02 13:15:28.277 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4168MB free_disk=20.942684173583984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:15:28 np0005466031 nova_compute[235803]: 2025-10-02 13:15:28.278 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:28 np0005466031 nova_compute[235803]: 2025-10-02 13:15:28.278 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:28 np0005466031 nova_compute[235803]: 2025-10-02 13:15:28.328 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:15:28 np0005466031 nova_compute[235803]: 2025-10-02 13:15:28.328 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:15:28 np0005466031 nova_compute[235803]: 2025-10-02 13:15:28.346 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:15:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3669883439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:15:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:28.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:28 np0005466031 nova_compute[235803]: 2025-10-02 13:15:28.876 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:28 np0005466031 nova_compute[235803]: 2025-10-02 13:15:28.885 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:15:28 np0005466031 nova_compute[235803]: 2025-10-02 13:15:28.907 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:15:28 np0005466031 nova_compute[235803]: 2025-10-02 13:15:28.944 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:15:28 np0005466031 nova_compute[235803]: 2025-10-02 13:15:28.945 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:29.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:15:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1886680935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:15:30 np0005466031 nova_compute[235803]: 2025-10-02 13:15:30.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:15:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:30.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:15:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:31.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:31 np0005466031 nova_compute[235803]: 2025-10-02 13:15:31.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:31 np0005466031 nova_compute[235803]: 2025-10-02 13:15:31.945 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:32 np0005466031 podman[327285]: 2025-10-02 13:15:32.622563693 +0000 UTC m=+0.056265041 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:15:32 np0005466031 podman[327286]: 2025-10-02 13:15:32.623142999 +0000 UTC m=+0.055191930 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:15:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:32.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:15:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:33.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:15:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:15:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:34.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:15:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:15:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:15:35 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:15:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:35.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:35 np0005466031 nova_compute[235803]: 2025-10-02 13:15:35.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:35 np0005466031 nova_compute[235803]: 2025-10-02 13:15:35.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:35 np0005466031 nova_compute[235803]: 2025-10-02 13:15:35.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:35 np0005466031 nova_compute[235803]: 2025-10-02 13:15:35.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:15:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:36 np0005466031 nova_compute[235803]: 2025-10-02 13:15:36.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:36.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:37.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:38.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:15:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:39.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:15:40 np0005466031 nova_compute[235803]: 2025-10-02 13:15:40.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:40.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:41.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:41 np0005466031 nova_compute[235803]: 2025-10-02 13:15:41.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:15:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:15:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:42.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:43.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:44 np0005466031 nova_compute[235803]: 2025-10-02 13:15:44.778 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "ada17de8-afd7-427c-a0c2-43de01a22a93" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:44 np0005466031 nova_compute[235803]: 2025-10-02 13:15:44.779 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:44 np0005466031 nova_compute[235803]: 2025-10-02 13:15:44.808 2 DEBUG nova.compute.manager [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:15:44 np0005466031 nova_compute[235803]: 2025-10-02 13:15:44.870 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:44 np0005466031 nova_compute[235803]: 2025-10-02 13:15:44.871 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:44 np0005466031 nova_compute[235803]: 2025-10-02 13:15:44.877 2 DEBUG nova.virt.hardware [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:15:44 np0005466031 nova_compute[235803]: 2025-10-02 13:15:44.878 2 INFO nova.compute.claims [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:15:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:15:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:44.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:15:44 np0005466031 nova_compute[235803]: 2025-10-02 13:15:44.956 2 DEBUG nova.scheduler.client.report [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:15:44 np0005466031 nova_compute[235803]: 2025-10-02 13:15:44.980 2 DEBUG nova.scheduler.client.report [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:15:44 np0005466031 nova_compute[235803]: 2025-10-02 13:15:44.981 2 DEBUG nova.compute.provider_tree [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.014 2 DEBUG nova.scheduler.client.report [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.037 2 DEBUG nova.scheduler.client.report [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.073 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:45.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:15:45 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/112591635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.544 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.551 2 DEBUG nova.compute.provider_tree [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.569 2 DEBUG nova.scheduler.client.report [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.595 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.596 2 DEBUG nova.compute.manager [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.641 2 DEBUG nova.compute.manager [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.642 2 DEBUG nova.network.neutron [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.671 2 INFO nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.687 2 DEBUG nova.compute.manager [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.804 2 DEBUG nova.compute.manager [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.805 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.806 2 INFO nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Creating image(s)#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.828 2 DEBUG nova.storage.rbd_utils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image ada17de8-afd7-427c-a0c2-43de01a22a93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.854 2 DEBUG nova.storage.rbd_utils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image ada17de8-afd7-427c-a0c2-43de01a22a93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.878 2 DEBUG nova.storage.rbd_utils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image ada17de8-afd7-427c-a0c2-43de01a22a93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.882 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.946 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.947 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.948 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.949 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.977 2 DEBUG nova.storage.rbd_utils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image ada17de8-afd7-427c-a0c2-43de01a22a93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:45 np0005466031 nova_compute[235803]: 2025-10-02 13:15:45.981 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ada17de8-afd7-427c-a0c2-43de01a22a93_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:46 np0005466031 nova_compute[235803]: 2025-10-02 13:15:46.440 2 DEBUG nova.policy [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37083e5fd56c447cb409b86d6394dd43', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f5376733aec4630998da8d11db76561', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:15:46 np0005466031 nova_compute[235803]: 2025-10-02 13:15:46.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:46.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:47.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:48 np0005466031 nova_compute[235803]: 2025-10-02 13:15:48.117 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ada17de8-afd7-427c-a0c2-43de01a22a93_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:48 np0005466031 nova_compute[235803]: 2025-10-02 13:15:48.205 2 DEBUG nova.storage.rbd_utils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] resizing rbd image ada17de8-afd7-427c-a0c2-43de01a22a93_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:15:48 np0005466031 nova_compute[235803]: 2025-10-02 13:15:48.577 2 DEBUG nova.objects.instance [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'migration_context' on Instance uuid ada17de8-afd7-427c-a0c2-43de01a22a93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:15:48 np0005466031 nova_compute[235803]: 2025-10-02 13:15:48.594 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:15:48 np0005466031 nova_compute[235803]: 2025-10-02 13:15:48.595 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Ensure instance console log exists: /var/lib/nova/instances/ada17de8-afd7-427c-a0c2-43de01a22a93/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:15:48 np0005466031 nova_compute[235803]: 2025-10-02 13:15:48.596 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:48 np0005466031 nova_compute[235803]: 2025-10-02 13:15:48.596 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:48 np0005466031 nova_compute[235803]: 2025-10-02 13:15:48.596 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:48 np0005466031 nova_compute[235803]: 2025-10-02 13:15:48.778 2 DEBUG nova.network.neutron [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Successfully created port: 18de8da1-d885-4bcb-b4c7-2051e441b61c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:15:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:48.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:49.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:49 np0005466031 nova_compute[235803]: 2025-10-02 13:15:49.488 2 DEBUG nova.network.neutron [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Successfully updated port: 18de8da1-d885-4bcb-b4c7-2051e441b61c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:15:49 np0005466031 nova_compute[235803]: 2025-10-02 13:15:49.508 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "refresh_cache-ada17de8-afd7-427c-a0c2-43de01a22a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:15:49 np0005466031 nova_compute[235803]: 2025-10-02 13:15:49.509 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquired lock "refresh_cache-ada17de8-afd7-427c-a0c2-43de01a22a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:15:49 np0005466031 nova_compute[235803]: 2025-10-02 13:15:49.509 2 DEBUG nova.network.neutron [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:15:49 np0005466031 nova_compute[235803]: 2025-10-02 13:15:49.570 2 DEBUG nova.compute.manager [req-56c08225-7f95-4755-8637-bd746d87884f req-378d6519-7d20-478d-ab03-e87f5305dd7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Received event network-changed-18de8da1-d885-4bcb-b4c7-2051e441b61c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:49 np0005466031 nova_compute[235803]: 2025-10-02 13:15:49.570 2 DEBUG nova.compute.manager [req-56c08225-7f95-4755-8637-bd746d87884f req-378d6519-7d20-478d-ab03-e87f5305dd7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Refreshing instance network info cache due to event network-changed-18de8da1-d885-4bcb-b4c7-2051e441b61c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:15:49 np0005466031 nova_compute[235803]: 2025-10-02 13:15:49.571 2 DEBUG oslo_concurrency.lockutils [req-56c08225-7f95-4755-8637-bd746d87884f req-378d6519-7d20-478d-ab03-e87f5305dd7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ada17de8-afd7-427c-a0c2-43de01a22a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:15:49 np0005466031 nova_compute[235803]: 2025-10-02 13:15:49.647 2 DEBUG nova.network.neutron [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.443 2 DEBUG nova.network.neutron [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Updating instance_info_cache with network_info: [{"id": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "address": "fa:16:3e:fb:13:8c", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18de8da1-d8", "ovs_interfaceid": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.473 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Releasing lock "refresh_cache-ada17de8-afd7-427c-a0c2-43de01a22a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.473 2 DEBUG nova.compute.manager [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Instance network_info: |[{"id": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "address": "fa:16:3e:fb:13:8c", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18de8da1-d8", "ovs_interfaceid": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.474 2 DEBUG oslo_concurrency.lockutils [req-56c08225-7f95-4755-8637-bd746d87884f req-378d6519-7d20-478d-ab03-e87f5305dd7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ada17de8-afd7-427c-a0c2-43de01a22a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.474 2 DEBUG nova.network.neutron [req-56c08225-7f95-4755-8637-bd746d87884f req-378d6519-7d20-478d-ab03-e87f5305dd7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Refreshing network info cache for port 18de8da1-d885-4bcb-b4c7-2051e441b61c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.476 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Start _get_guest_xml network_info=[{"id": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "address": "fa:16:3e:fb:13:8c", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18de8da1-d8", "ovs_interfaceid": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.480 2 WARNING nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.484 2 DEBUG nova.virt.libvirt.host [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.484 2 DEBUG nova.virt.libvirt.host [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.487 2 DEBUG nova.virt.libvirt.host [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.488 2 DEBUG nova.virt.libvirt.host [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.489 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.490 2 DEBUG nova.virt.hardware [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.490 2 DEBUG nova.virt.hardware [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.490 2 DEBUG nova.virt.hardware [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.490 2 DEBUG nova.virt.hardware [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.491 2 DEBUG nova.virt.hardware [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.491 2 DEBUG nova.virt.hardware [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.491 2 DEBUG nova.virt.hardware [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.491 2 DEBUG nova.virt.hardware [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.491 2 DEBUG nova.virt.hardware [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.492 2 DEBUG nova.virt.hardware [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.492 2 DEBUG nova.virt.hardware [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.494 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:50.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:15:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1942650274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.930 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.955 2 DEBUG nova.storage.rbd_utils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image ada17de8-afd7-427c-a0c2-43de01a22a93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:50 np0005466031 nova_compute[235803]: 2025-10-02 13:15:50.959 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:15:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2329846800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.405 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.406 2 DEBUG nova.virt.libvirt.vif [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:15:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-295367902',display_name='tempest-AttachVolumeNegativeTest-server-295367902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-295367902',id=211,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCPvPkGvbokc/bTgOTHJ5oY2onDNwhF1qi0j6l73K3yGuYkc7yuTqLwiNnp29MXhZ/wOzH4yBBIkBjkn83ksF9F7CFU02MtNEnqjTlSqDEbGK/xa5JKU4qJlfDVRPCNIHA==',key_name='tempest-keypair-1212110004',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-n2k6qlhk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:15:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=ada17de8-afd7-427c-a0c2-43de01a22a93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "address": "fa:16:3e:fb:13:8c", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18de8da1-d8", "ovs_interfaceid": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.407 2 DEBUG nova.network.os_vif_util [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "address": "fa:16:3e:fb:13:8c", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18de8da1-d8", "ovs_interfaceid": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.407 2 DEBUG nova.network.os_vif_util [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:13:8c,bridge_name='br-int',has_traffic_filtering=True,id=18de8da1-d885-4bcb-b4c7-2051e441b61c,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18de8da1-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.409 2 DEBUG nova.objects.instance [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'pci_devices' on Instance uuid ada17de8-afd7-427c-a0c2-43de01a22a93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.426 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  <uuid>ada17de8-afd7-427c-a0c2-43de01a22a93</uuid>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  <name>instance-000000d3</name>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <nova:name>tempest-AttachVolumeNegativeTest-server-295367902</nova:name>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:15:50</nova:creationTime>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <nova:user uuid="37083e5fd56c447cb409b86d6394dd43">tempest-AttachVolumeNegativeTest-1084646737-project-member</nova:user>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <nova:project uuid="7f5376733aec4630998da8d11db76561">tempest-AttachVolumeNegativeTest-1084646737</nova:project>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <nova:port uuid="18de8da1-d885-4bcb-b4c7-2051e441b61c">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <entry name="serial">ada17de8-afd7-427c-a0c2-43de01a22a93</entry>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <entry name="uuid">ada17de8-afd7-427c-a0c2-43de01a22a93</entry>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/ada17de8-afd7-427c-a0c2-43de01a22a93_disk">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/ada17de8-afd7-427c-a0c2-43de01a22a93_disk.config">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:fb:13:8c"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <target dev="tap18de8da1-d8"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/ada17de8-afd7-427c-a0c2-43de01a22a93/console.log" append="off"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:15:51 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:15:51 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:15:51 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:15:51 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.427 2 DEBUG nova.compute.manager [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Preparing to wait for external event network-vif-plugged-18de8da1-d885-4bcb-b4c7-2051e441b61c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.428 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.428 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.428 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.429 2 DEBUG nova.virt.libvirt.vif [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:15:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-295367902',display_name='tempest-AttachVolumeNegativeTest-server-295367902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-295367902',id=211,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCPvPkGvbokc/bTgOTHJ5oY2onDNwhF1qi0j6l73K3yGuYkc7yuTqLwiNnp29MXhZ/wOzH4yBBIkBjkn83ksF9F7CFU02MtNEnqjTlSqDEbGK/xa5JKU4qJlfDVRPCNIHA==',key_name='tempest-keypair-1212110004',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-n2k6qlhk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:15:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=ada17de8-afd7-427c-a0c2-43de01a22a93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "address": "fa:16:3e:fb:13:8c", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18de8da1-d8", "ovs_interfaceid": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.429 2 DEBUG nova.network.os_vif_util [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "address": "fa:16:3e:fb:13:8c", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18de8da1-d8", "ovs_interfaceid": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.430 2 DEBUG nova.network.os_vif_util [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:13:8c,bridge_name='br-int',has_traffic_filtering=True,id=18de8da1-d885-4bcb-b4c7-2051e441b61c,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18de8da1-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.430 2 DEBUG os_vif [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:13:8c,bridge_name='br-int',has_traffic_filtering=True,id=18de8da1-d885-4bcb-b4c7-2051e441b61c,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18de8da1-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.431 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.431 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:15:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:15:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:51.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18de8da1-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.437 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18de8da1-d8, col_values=(('external_ids', {'iface-id': '18de8da1-d885-4bcb-b4c7-2051e441b61c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:13:8c', 'vm-uuid': 'ada17de8-afd7-427c-a0c2-43de01a22a93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:51 np0005466031 NetworkManager[44907]: <info>  [1759410951.5043] manager: (tap18de8da1-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.511 2 INFO os_vif [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:13:8c,bridge_name='br-int',has_traffic_filtering=True,id=18de8da1-d885-4bcb-b4c7-2051e441b61c,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18de8da1-d8')#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.555 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.556 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.556 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No VIF found with MAC fa:16:3e:fb:13:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.556 2 INFO nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Using config drive#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.580 2 DEBUG nova.storage.rbd_utils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image ada17de8-afd7-427c-a0c2-43de01a22a93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.884 2 DEBUG nova.network.neutron [req-56c08225-7f95-4755-8637-bd746d87884f req-378d6519-7d20-478d-ab03-e87f5305dd7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Updated VIF entry in instance network info cache for port 18de8da1-d885-4bcb-b4c7-2051e441b61c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.885 2 DEBUG nova.network.neutron [req-56c08225-7f95-4755-8637-bd746d87884f req-378d6519-7d20-478d-ab03-e87f5305dd7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Updating instance_info_cache with network_info: [{"id": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "address": "fa:16:3e:fb:13:8c", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18de8da1-d8", "ovs_interfaceid": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.911 2 DEBUG oslo_concurrency.lockutils [req-56c08225-7f95-4755-8637-bd746d87884f req-378d6519-7d20-478d-ab03-e87f5305dd7c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ada17de8-afd7-427c-a0c2-43de01a22a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.970 2 INFO nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Creating config drive at /var/lib/nova/instances/ada17de8-afd7-427c-a0c2-43de01a22a93/disk.config#033[00m
Oct  2 09:15:51 np0005466031 nova_compute[235803]: 2025-10-02 13:15:51.976 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ada17de8-afd7-427c-a0c2-43de01a22a93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdy32715 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:52 np0005466031 nova_compute[235803]: 2025-10-02 13:15:52.111 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ada17de8-afd7-427c-a0c2-43de01a22a93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdy32715" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:52 np0005466031 nova_compute[235803]: 2025-10-02 13:15:52.141 2 DEBUG nova.storage.rbd_utils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image ada17de8-afd7-427c-a0c2-43de01a22a93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:52 np0005466031 nova_compute[235803]: 2025-10-02 13:15:52.144 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ada17de8-afd7-427c-a0c2-43de01a22a93/disk.config ada17de8-afd7-427c-a0c2-43de01a22a93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:52 np0005466031 nova_compute[235803]: 2025-10-02 13:15:52.604 2 DEBUG oslo_concurrency.processutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ada17de8-afd7-427c-a0c2-43de01a22a93/disk.config ada17de8-afd7-427c-a0c2-43de01a22a93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:52 np0005466031 nova_compute[235803]: 2025-10-02 13:15:52.605 2 INFO nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Deleting local config drive /var/lib/nova/instances/ada17de8-afd7-427c-a0c2-43de01a22a93/disk.config because it was imported into RBD.#033[00m
Oct  2 09:15:52 np0005466031 kernel: tap18de8da1-d8: entered promiscuous mode
Oct  2 09:15:52 np0005466031 NetworkManager[44907]: <info>  [1759410952.6592] manager: (tap18de8da1-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Oct  2 09:15:52 np0005466031 ovn_controller[132413]: 2025-10-02T13:15:52Z|00834|binding|INFO|Claiming lport 18de8da1-d885-4bcb-b4c7-2051e441b61c for this chassis.
Oct  2 09:15:52 np0005466031 ovn_controller[132413]: 2025-10-02T13:15:52Z|00835|binding|INFO|18de8da1-d885-4bcb-b4c7-2051e441b61c: Claiming fa:16:3e:fb:13:8c 10.100.0.13
Oct  2 09:15:52 np0005466031 nova_compute[235803]: 2025-10-02 13:15:52.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:52 np0005466031 nova_compute[235803]: 2025-10-02 13:15:52.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.675 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:13:8c 10.100.0.13'], port_security=['fa:16:3e:fb:13:8c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ada17de8-afd7-427c-a0c2-43de01a22a93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f5376733aec4630998da8d11db76561', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'da30c9e4-c393-4863-9fd9-1f0ed86bd3bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9548860-2222-48ea-9270-42ff9a0246f7, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=18de8da1-d885-4bcb-b4c7-2051e441b61c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.676 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 18de8da1-d885-4bcb-b4c7-2051e441b61c in datapath bc02aa54-d19f-4274-8d92-cbabe7917dd9 bound to our chassis#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.677 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc02aa54-d19f-4274-8d92-cbabe7917dd9#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.687 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[12b8c1ac-8cca-4092-824b-f95813ff1935]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.688 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbc02aa54-d1 in ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.689 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbc02aa54-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.690 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef89c78-0855-48ec-bd90-5107b2c2deea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.690 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0f741e34-139b-4c8c-878c-c288758b6ba9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 systemd-machined[192227]: New machine qemu-96-instance-000000d3.
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.705 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[328cdc26-1175-4bb6-9b12-4d62cb474c03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 nova_compute[235803]: 2025-10-02 13:15:52.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:52 np0005466031 systemd[1]: Started Virtual Machine qemu-96-instance-000000d3.
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.729 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[84f303d4-f000-41fd-9adb-179998b92c0b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 ovn_controller[132413]: 2025-10-02T13:15:52Z|00836|binding|INFO|Setting lport 18de8da1-d885-4bcb-b4c7-2051e441b61c ovn-installed in OVS
Oct  2 09:15:52 np0005466031 ovn_controller[132413]: 2025-10-02T13:15:52Z|00837|binding|INFO|Setting lport 18de8da1-d885-4bcb-b4c7-2051e441b61c up in Southbound
Oct  2 09:15:52 np0005466031 nova_compute[235803]: 2025-10-02 13:15:52.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:52 np0005466031 systemd-udevd[327888]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:15:52 np0005466031 NetworkManager[44907]: <info>  [1759410952.7530] device (tap18de8da1-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:15:52 np0005466031 NetworkManager[44907]: <info>  [1759410952.7541] device (tap18de8da1-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.759 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[a41d17ab-4e5d-45b2-930b-02c77e366cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.764 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[254a290a-9700-43dd-9f54-5e2db0e8062f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 NetworkManager[44907]: <info>  [1759410952.7656] manager: (tapbc02aa54-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/375)
Oct  2 09:15:52 np0005466031 systemd-udevd[327892]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.806 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d312ca8f-ace9-494e-87f1-cd97511c4d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.809 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[5adc970a-03fd-4118-888e-4218e722ea18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 NetworkManager[44907]: <info>  [1759410952.8307] device (tapbc02aa54-d0): carrier: link connected
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.836 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[b08a8852-930f-43d3-a1d4-2cc840ce14eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.853 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b824f2-59df-4831-9743-857ccf60571e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc02aa54-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fc:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880842, 'reachable_time': 35948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327918, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.871 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3be220cb-95c3-4e2d-a2e0-617da7f29869]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feba:fc0a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 880842, 'tstamp': 880842}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327919, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.889 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[770aaffc-bca3-4ae0-a93f-8783e2d7cb14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc02aa54-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fc:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880842, 'reachable_time': 35948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327920, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:15:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:52.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.922 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b828be-26ee-4677-871d-889907332ae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.980 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[057a1614-f661-468f-be5b-e464d3d86563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.981 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc02aa54-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.981 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.981 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc02aa54-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:52 np0005466031 kernel: tapbc02aa54-d0: entered promiscuous mode
Oct  2 09:15:52 np0005466031 NetworkManager[44907]: <info>  [1759410952.9841] manager: (tapbc02aa54-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Oct  2 09:15:52 np0005466031 nova_compute[235803]: 2025-10-02 13:15:52.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:52.986 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc02aa54-d0, col_values=(('external_ids', {'iface-id': '05da4d4e-44a6-4aa1-b470-d9ad03ff2e45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:52 np0005466031 ovn_controller[132413]: 2025-10-02T13:15:52Z|00838|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct  2 09:15:52 np0005466031 nova_compute[235803]: 2025-10-02 13:15:52.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:53.002 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bc02aa54-d19f-4274-8d92-cbabe7917dd9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bc02aa54-d19f-4274-8d92-cbabe7917dd9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:53.003 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[37ed2549-018e-4c13-ac00-11a523697f63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:53.003 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-bc02aa54-d19f-4274-8d92-cbabe7917dd9
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/bc02aa54-d19f-4274-8d92-cbabe7917dd9.pid.haproxy
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID bc02aa54-d19f-4274-8d92-cbabe7917dd9
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:15:53 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:15:53.004 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'env', 'PROCESS_TAG=haproxy-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bc02aa54-d19f-4274-8d92-cbabe7917dd9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:15:53 np0005466031 podman[327994]: 2025-10-02 13:15:53.383009787 +0000 UTC m=+0.045992205 container create 7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 09:15:53 np0005466031 systemd[1]: Started libpod-conmon-7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef.scope.
Oct  2 09:15:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:53.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:53 np0005466031 podman[327994]: 2025-10-02 13:15:53.360144139 +0000 UTC m=+0.023126577 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:15:53 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:15:53 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b7783496da5acececf032ec3866b577fddf5249a480cc75be375923234cdda/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:15:53 np0005466031 podman[327994]: 2025-10-02 13:15:53.474050899 +0000 UTC m=+0.137033337 container init 7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:15:53 np0005466031 podman[327994]: 2025-10-02 13:15:53.479477175 +0000 UTC m=+0.142459583 container start 7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:15:53 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[328009]: [NOTICE]   (328013) : New worker (328015) forked
Oct  2 09:15:53 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[328009]: [NOTICE]   (328013) : Loading success.
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.591 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410953.590864, ada17de8-afd7-427c-a0c2-43de01a22a93 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.591 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] VM Started (Lifecycle Event)#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.623 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.629 2 DEBUG nova.compute.manager [req-bc830c06-cc49-4942-89ef-60ceb539644b req-ab6dea00-3674-4e40-bc05-eb2d60c08a50 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Received event network-vif-plugged-18de8da1-d885-4bcb-b4c7-2051e441b61c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.630 2 DEBUG oslo_concurrency.lockutils [req-bc830c06-cc49-4942-89ef-60ceb539644b req-ab6dea00-3674-4e40-bc05-eb2d60c08a50 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.630 2 DEBUG oslo_concurrency.lockutils [req-bc830c06-cc49-4942-89ef-60ceb539644b req-ab6dea00-3674-4e40-bc05-eb2d60c08a50 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.631 2 DEBUG oslo_concurrency.lockutils [req-bc830c06-cc49-4942-89ef-60ceb539644b req-ab6dea00-3674-4e40-bc05-eb2d60c08a50 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.631 2 DEBUG nova.compute.manager [req-bc830c06-cc49-4942-89ef-60ceb539644b req-ab6dea00-3674-4e40-bc05-eb2d60c08a50 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Processing event network-vif-plugged-18de8da1-d885-4bcb-b4c7-2051e441b61c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.632 2 DEBUG nova.compute.manager [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.634 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410953.5916593, ada17de8-afd7-427c-a0c2-43de01a22a93 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.635 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.643 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.646 2 INFO nova.virt.libvirt.driver [-] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Instance spawned successfully.#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.646 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.655 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.657 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759410953.6359408, ada17de8-afd7-427c-a0c2-43de01a22a93 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.657 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.665 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.666 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.666 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.666 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.667 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.667 2 DEBUG nova.virt.libvirt.driver [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.673 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.676 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.729 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.765 2 INFO nova.compute.manager [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Took 7.96 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.766 2 DEBUG nova.compute.manager [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.848 2 INFO nova.compute.manager [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Took 9.00 seconds to build instance.#033[00m
Oct  2 09:15:53 np0005466031 nova_compute[235803]: 2025-10-02 13:15:53.869 2 DEBUG oslo_concurrency.lockutils [None req-32e228cd-8dbd-43b8-a551-313a1e2b575f 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:54.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:55.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:55 np0005466031 podman[328025]: 2025-10-02 13:15:55.634483696 +0000 UTC m=+0.063303493 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:15:55 np0005466031 podman[328026]: 2025-10-02 13:15:55.655174122 +0000 UTC m=+0.083021151 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller)
Oct  2 09:15:55 np0005466031 nova_compute[235803]: 2025-10-02 13:15:55.928 2 DEBUG nova.compute.manager [req-eefd3463-b09c-48a3-92c4-ed35271c4ca2 req-0303744e-8197-4a8e-9f3b-f01a4c31129c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Received event network-vif-plugged-18de8da1-d885-4bcb-b4c7-2051e441b61c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:55 np0005466031 nova_compute[235803]: 2025-10-02 13:15:55.929 2 DEBUG oslo_concurrency.lockutils [req-eefd3463-b09c-48a3-92c4-ed35271c4ca2 req-0303744e-8197-4a8e-9f3b-f01a4c31129c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:55 np0005466031 nova_compute[235803]: 2025-10-02 13:15:55.929 2 DEBUG oslo_concurrency.lockutils [req-eefd3463-b09c-48a3-92c4-ed35271c4ca2 req-0303744e-8197-4a8e-9f3b-f01a4c31129c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:55 np0005466031 nova_compute[235803]: 2025-10-02 13:15:55.929 2 DEBUG oslo_concurrency.lockutils [req-eefd3463-b09c-48a3-92c4-ed35271c4ca2 req-0303744e-8197-4a8e-9f3b-f01a4c31129c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:55 np0005466031 nova_compute[235803]: 2025-10-02 13:15:55.930 2 DEBUG nova.compute.manager [req-eefd3463-b09c-48a3-92c4-ed35271c4ca2 req-0303744e-8197-4a8e-9f3b-f01a4c31129c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] No waiting events found dispatching network-vif-plugged-18de8da1-d885-4bcb-b4c7-2051e441b61c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:15:55 np0005466031 nova_compute[235803]: 2025-10-02 13:15:55.930 2 WARNING nova.compute.manager [req-eefd3463-b09c-48a3-92c4-ed35271c4ca2 req-0303744e-8197-4a8e-9f3b-f01a4c31129c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Received unexpected event network-vif-plugged-18de8da1-d885-4bcb-b4c7-2051e441b61c for instance with vm_state active and task_state None.#033[00m
Oct  2 09:15:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:56 np0005466031 nova_compute[235803]: 2025-10-02 13:15:56.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:56 np0005466031 nova_compute[235803]: 2025-10-02 13:15:56.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:15:56 np0005466031 ovn_controller[132413]: 2025-10-02T13:15:56Z|00839|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct  2 09:15:56 np0005466031 NetworkManager[44907]: <info>  [1759410956.8457] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Oct  2 09:15:56 np0005466031 NetworkManager[44907]: <info>  [1759410956.8476] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Oct  2 09:15:56 np0005466031 nova_compute[235803]: 2025-10-02 13:15:56.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:56 np0005466031 ovn_controller[132413]: 2025-10-02T13:15:56Z|00840|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct  2 09:15:56 np0005466031 nova_compute[235803]: 2025-10-02 13:15:56.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:56 np0005466031 nova_compute[235803]: 2025-10-02 13:15:56.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:56.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:57 np0005466031 nova_compute[235803]: 2025-10-02 13:15:57.183 2 DEBUG nova.compute.manager [req-4cc6f097-ba68-48c2-bd71-6af23a5988d4 req-0d0dcbe5-2d7c-45c6-9c6e-8eaaa1d66bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Received event network-changed-18de8da1-d885-4bcb-b4c7-2051e441b61c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:57 np0005466031 nova_compute[235803]: 2025-10-02 13:15:57.184 2 DEBUG nova.compute.manager [req-4cc6f097-ba68-48c2-bd71-6af23a5988d4 req-0d0dcbe5-2d7c-45c6-9c6e-8eaaa1d66bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Refreshing instance network info cache due to event network-changed-18de8da1-d885-4bcb-b4c7-2051e441b61c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:15:57 np0005466031 nova_compute[235803]: 2025-10-02 13:15:57.184 2 DEBUG oslo_concurrency.lockutils [req-4cc6f097-ba68-48c2-bd71-6af23a5988d4 req-0d0dcbe5-2d7c-45c6-9c6e-8eaaa1d66bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ada17de8-afd7-427c-a0c2-43de01a22a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:15:57 np0005466031 nova_compute[235803]: 2025-10-02 13:15:57.184 2 DEBUG oslo_concurrency.lockutils [req-4cc6f097-ba68-48c2-bd71-6af23a5988d4 req-0d0dcbe5-2d7c-45c6-9c6e-8eaaa1d66bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ada17de8-afd7-427c-a0c2-43de01a22a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:15:57 np0005466031 nova_compute[235803]: 2025-10-02 13:15:57.185 2 DEBUG nova.network.neutron [req-4cc6f097-ba68-48c2-bd71-6af23a5988d4 req-0d0dcbe5-2d7c-45c6-9c6e-8eaaa1d66bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Refreshing network info cache for port 18de8da1-d885-4bcb-b4c7-2051e441b61c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:15:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:57.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:58 np0005466031 nova_compute[235803]: 2025-10-02 13:15:58.444 2 DEBUG nova.network.neutron [req-4cc6f097-ba68-48c2-bd71-6af23a5988d4 req-0d0dcbe5-2d7c-45c6-9c6e-8eaaa1d66bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Updated VIF entry in instance network info cache for port 18de8da1-d885-4bcb-b4c7-2051e441b61c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:15:58 np0005466031 nova_compute[235803]: 2025-10-02 13:15:58.445 2 DEBUG nova.network.neutron [req-4cc6f097-ba68-48c2-bd71-6af23a5988d4 req-0d0dcbe5-2d7c-45c6-9c6e-8eaaa1d66bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Updating instance_info_cache with network_info: [{"id": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "address": "fa:16:3e:fb:13:8c", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18de8da1-d8", "ovs_interfaceid": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:15:58 np0005466031 nova_compute[235803]: 2025-10-02 13:15:58.466 2 DEBUG oslo_concurrency.lockutils [req-4cc6f097-ba68-48c2-bd71-6af23a5988d4 req-0d0dcbe5-2d7c-45c6-9c6e-8eaaa1d66bf7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ada17de8-afd7-427c-a0c2-43de01a22a93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:15:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:15:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:58.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:15:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:15:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:59.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:00.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:01.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:01 np0005466031 nova_compute[235803]: 2025-10-02 13:16:01.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:16:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:02.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:16:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:03.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:03 np0005466031 podman[328126]: 2025-10-02 13:16:03.651555354 +0000 UTC m=+0.066473715 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:16:03 np0005466031 podman[328125]: 2025-10-02 13:16:03.67018905 +0000 UTC m=+0.091127705 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:16:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:04.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:05.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:06 np0005466031 nova_compute[235803]: 2025-10-02 13:16:06.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:06 np0005466031 nova_compute[235803]: 2025-10-02 13:16:06.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:06 np0005466031 ovn_controller[132413]: 2025-10-02T13:16:06Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:13:8c 10.100.0.13
Oct  2 09:16:06 np0005466031 ovn_controller[132413]: 2025-10-02T13:16:06Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:13:8c 10.100.0.13
Oct  2 09:16:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:16:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:06.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:16:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:07.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:16:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 15K writes, 75K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1621 writes, 7773 keys, 1621 commit groups, 1.0 writes per commit group, ingest: 16.12 MB, 0.03 MB/s#012Interval WAL: 1621 writes, 1621 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     68.7      1.34              0.26        47    0.028       0      0       0.0       0.0#012  L6      1/0   11.52 MB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   5.1    119.3    101.7      4.56              1.30        46    0.099    327K    24K       0.0       0.0#012 Sum      1/0   11.52 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.1     92.2     94.2      5.90              1.56        93    0.063    327K    24K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.4    126.9    129.6      0.49              0.17        10    0.049     48K   2580       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   0.0    119.3    101.7      4.56              1.30        46    0.099    327K    24K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     68.8      1.33              0.26        46    0.029       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.090, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.54 GB write, 0.10 MB/s write, 0.53 GB read, 0.10 MB/s read, 5.9 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5570d4fad1f0#2 capacity: 304.00 MB usage: 59.99 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000373 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(3460,57.59 MB,18.9449%) FilterBlock(93,913.05 KB,0.293305%) IndexBlock(93,1.51 MB,0.495318%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 09:16:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:08.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:09.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:10.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.160786) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971160836, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1816, "num_deletes": 262, "total_data_size": 4074853, "memory_usage": 4128144, "flush_reason": "Manual Compaction"}
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971185682, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 2675865, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74017, "largest_seqno": 75828, "table_properties": {"data_size": 2668131, "index_size": 4611, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16622, "raw_average_key_size": 20, "raw_value_size": 2652457, "raw_average_value_size": 3282, "num_data_blocks": 202, "num_entries": 808, "num_filter_entries": 808, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410829, "oldest_key_time": 1759410829, "file_creation_time": 1759410971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 24942 microseconds, and 6260 cpu microseconds.
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.185729) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 2675865 bytes OK
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.185753) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.187223) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.187239) EVENT_LOG_v1 {"time_micros": 1759410971187233, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.187256) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 4066487, prev total WAL file size 4066487, number of live WAL files 2.
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.188562) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373637' seq:72057594037927935, type:22 .. '6C6F676D0033303139' seq:0, type:0; will stop at (end)
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(2613KB)], [150(11MB)]
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971188588, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 14751000, "oldest_snapshot_seqno": -1}
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 9808 keys, 14611423 bytes, temperature: kUnknown
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971293160, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 14611423, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14545290, "index_size": 40484, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 258049, "raw_average_key_size": 26, "raw_value_size": 14370795, "raw_average_value_size": 1465, "num_data_blocks": 1556, "num_entries": 9808, "num_filter_entries": 9808, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759410971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.293411) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 14611423 bytes
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.296413) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.0 rd, 139.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 11.5 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(11.0) write-amplify(5.5) OK, records in: 10347, records dropped: 539 output_compression: NoCompression
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.296432) EVENT_LOG_v1 {"time_micros": 1759410971296423, "job": 96, "event": "compaction_finished", "compaction_time_micros": 104643, "compaction_time_cpu_micros": 32611, "output_level": 6, "num_output_files": 1, "total_output_size": 14611423, "num_input_records": 10347, "num_output_records": 9808, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971297066, "job": 96, "event": "table_file_deletion", "file_number": 152}
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971299089, "job": 96, "event": "table_file_deletion", "file_number": 150}
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.188456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.299122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.299126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.299127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.299129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:16:11 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:16:11.299130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:16:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:11.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:11 np0005466031 nova_compute[235803]: 2025-10-02 13:16:11.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:11 np0005466031 nova_compute[235803]: 2025-10-02 13:16:11.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:12.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:13.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:14 np0005466031 nova_compute[235803]: 2025-10-02 13:16:14.604 2 DEBUG oslo_concurrency.lockutils [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "ada17de8-afd7-427c-a0c2-43de01a22a93" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:14 np0005466031 nova_compute[235803]: 2025-10-02 13:16:14.605 2 DEBUG oslo_concurrency.lockutils [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:14 np0005466031 nova_compute[235803]: 2025-10-02 13:16:14.619 2 DEBUG nova.objects.instance [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'flavor' on Instance uuid ada17de8-afd7-427c-a0c2-43de01a22a93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:16:14 np0005466031 nova_compute[235803]: 2025-10-02 13:16:14.654 2 DEBUG oslo_concurrency.lockutils [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:14 np0005466031 nova_compute[235803]: 2025-10-02 13:16:14.876 2 DEBUG oslo_concurrency.lockutils [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "ada17de8-afd7-427c-a0c2-43de01a22a93" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:14 np0005466031 nova_compute[235803]: 2025-10-02 13:16:14.877 2 DEBUG oslo_concurrency.lockutils [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:14 np0005466031 nova_compute[235803]: 2025-10-02 13:16:14.877 2 INFO nova.compute.manager [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Attaching volume e2a88252-30d2-4853-bab0-09a8623bb9af to /dev/vdb#033[00m
Oct  2 09:16:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:14.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.042 2 DEBUG os_brick.utils [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.044 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.056 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.056 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[04306249-a44c-485b-ab02-4312a8239cbb]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.057 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.064 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.064 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[ff24c669-a0d8-437a-a03b-246b35cd78d5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.066 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.073 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.074 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[cb68fc2c-d2cc-4b01-ac5f-1df350357c32]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.075 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[29985dce-0286-487e-8da6-9df7c2158350]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.075 2 DEBUG oslo_concurrency.processutils [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.103 2 DEBUG oslo_concurrency.processutils [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.105 2 DEBUG os_brick.initiator.connectors.lightos [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.106 2 DEBUG os_brick.initiator.connectors.lightos [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.106 2 DEBUG os_brick.initiator.connectors.lightos [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.106 2 DEBUG os_brick.utils [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] <== get_connector_properties: return (62ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.106 2 DEBUG nova.virt.block_device [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Updating existing volume attachment record: b292bc0c-9ab5-495c-9b23-3659729eb221 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:16:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:15.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.763 2 DEBUG nova.objects.instance [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'flavor' on Instance uuid ada17de8-afd7-427c-a0c2-43de01a22a93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.791 2 DEBUG nova.virt.libvirt.driver [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Attempting to attach volume e2a88252-30d2-4853-bab0-09a8623bb9af with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.794 2 DEBUG nova.virt.libvirt.guest [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 09:16:15 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:16:15 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-e2a88252-30d2-4853-bab0-09a8623bb9af">
Oct  2 09:16:15 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:16:15 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:16:15 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:16:15 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:16:15 np0005466031 nova_compute[235803]:  <auth username="openstack">
Oct  2 09:16:15 np0005466031 nova_compute[235803]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:16:15 np0005466031 nova_compute[235803]:  </auth>
Oct  2 09:16:15 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:16:15 np0005466031 nova_compute[235803]:  <serial>e2a88252-30d2-4853-bab0-09a8623bb9af</serial>
Oct  2 09:16:15 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:16:15 np0005466031 nova_compute[235803]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.909 2 DEBUG nova.virt.libvirt.driver [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.910 2 DEBUG nova.virt.libvirt.driver [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.910 2 DEBUG nova.virt.libvirt.driver [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:16:15 np0005466031 nova_compute[235803]: 2025-10-02 13:16:15.910 2 DEBUG nova.virt.libvirt.driver [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No VIF found with MAC fa:16:3e:fb:13:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:16:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:16 np0005466031 nova_compute[235803]: 2025-10-02 13:16:16.138 2 DEBUG oslo_concurrency.lockutils [None req-09b131ce-e4cf-4cc7-ae1a-2804cc87ac55 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:16 np0005466031 nova_compute[235803]: 2025-10-02 13:16:16.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:16:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:16.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:16:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:16:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:17.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:16:17 np0005466031 nova_compute[235803]: 2025-10-02 13:16:17.638 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.058 2 DEBUG oslo_concurrency.lockutils [None req-b6feb46c-01e6-44e5-ac6f-dad5f8399a42 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "ada17de8-afd7-427c-a0c2-43de01a22a93" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.059 2 DEBUG oslo_concurrency.lockutils [None req-b6feb46c-01e6-44e5-ac6f-dad5f8399a42 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.073 2 INFO nova.compute.manager [None req-b6feb46c-01e6-44e5-ac6f-dad5f8399a42 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Detaching volume e2a88252-30d2-4853-bab0-09a8623bb9af#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.217 2 INFO nova.virt.block_device [None req-b6feb46c-01e6-44e5-ac6f-dad5f8399a42 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Attempting to driver detach volume e2a88252-30d2-4853-bab0-09a8623bb9af from mountpoint /dev/vdb#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.225 2 DEBUG nova.virt.libvirt.driver [None req-b6feb46c-01e6-44e5-ac6f-dad5f8399a42 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Attempting to detach device vdb from instance ada17de8-afd7-427c-a0c2-43de01a22a93 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.225 2 DEBUG nova.virt.libvirt.guest [None req-b6feb46c-01e6-44e5-ac6f-dad5f8399a42 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:16:18 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-e2a88252-30d2-4853-bab0-09a8623bb9af">
Oct  2 09:16:18 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:  <serial>e2a88252-30d2-4853-bab0-09a8623bb9af</serial>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:16:18 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:16:18 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.231 2 INFO nova.virt.libvirt.driver [None req-b6feb46c-01e6-44e5-ac6f-dad5f8399a42 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully detached device vdb from instance ada17de8-afd7-427c-a0c2-43de01a22a93 from the persistent domain config.#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.231 2 DEBUG nova.virt.libvirt.driver [None req-b6feb46c-01e6-44e5-ac6f-dad5f8399a42 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance ada17de8-afd7-427c-a0c2-43de01a22a93 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.232 2 DEBUG nova.virt.libvirt.guest [None req-b6feb46c-01e6-44e5-ac6f-dad5f8399a42 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:16:18 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-e2a88252-30d2-4853-bab0-09a8623bb9af">
Oct  2 09:16:18 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:  <serial>e2a88252-30d2-4853-bab0-09a8623bb9af</serial>
Oct  2 09:16:18 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:16:18 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:16:18 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.340 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Received event <DeviceRemovedEvent: 1759410978.3401396, ada17de8-afd7-427c-a0c2-43de01a22a93 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.342 2 DEBUG nova.virt.libvirt.driver [None req-b6feb46c-01e6-44e5-ac6f-dad5f8399a42 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance ada17de8-afd7-427c-a0c2-43de01a22a93 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.344 2 INFO nova.virt.libvirt.driver [None req-b6feb46c-01e6-44e5-ac6f-dad5f8399a42 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully detached device vdb from instance ada17de8-afd7-427c-a0c2-43de01a22a93 from the live domain config.#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.520 2 DEBUG nova.objects.instance [None req-b6feb46c-01e6-44e5-ac6f-dad5f8399a42 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'flavor' on Instance uuid ada17de8-afd7-427c-a0c2-43de01a22a93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:16:18 np0005466031 nova_compute[235803]: 2025-10-02 13:16:18.703 2 DEBUG oslo_concurrency.lockutils [None req-b6feb46c-01e6-44e5-ac6f-dad5f8399a42 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:18.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:19.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.499 2 DEBUG oslo_concurrency.lockutils [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "ada17de8-afd7-427c-a0c2-43de01a22a93" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.500 2 DEBUG oslo_concurrency.lockutils [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.500 2 DEBUG oslo_concurrency.lockutils [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.500 2 DEBUG oslo_concurrency.lockutils [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.500 2 DEBUG oslo_concurrency.lockutils [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.501 2 INFO nova.compute.manager [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Terminating instance#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.502 2 DEBUG nova.compute.manager [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:16:19 np0005466031 kernel: tap18de8da1-d8 (unregistering): left promiscuous mode
Oct  2 09:16:19 np0005466031 NetworkManager[44907]: <info>  [1759410979.5579] device (tap18de8da1-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:19 np0005466031 ovn_controller[132413]: 2025-10-02T13:16:19Z|00841|binding|INFO|Releasing lport 18de8da1-d885-4bcb-b4c7-2051e441b61c from this chassis (sb_readonly=0)
Oct  2 09:16:19 np0005466031 ovn_controller[132413]: 2025-10-02T13:16:19Z|00842|binding|INFO|Setting lport 18de8da1-d885-4bcb-b4c7-2051e441b61c down in Southbound
Oct  2 09:16:19 np0005466031 ovn_controller[132413]: 2025-10-02T13:16:19Z|00843|binding|INFO|Removing iface tap18de8da1-d8 ovn-installed in OVS
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.577 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:13:8c 10.100.0.13'], port_security=['fa:16:3e:fb:13:8c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ada17de8-afd7-427c-a0c2-43de01a22a93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f5376733aec4630998da8d11db76561', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'da30c9e4-c393-4863-9fd9-1f0ed86bd3bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9548860-2222-48ea-9270-42ff9a0246f7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=18de8da1-d885-4bcb-b4c7-2051e441b61c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.578 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 18de8da1-d885-4bcb-b4c7-2051e441b61c in datapath bc02aa54-d19f-4274-8d92-cbabe7917dd9 unbound from our chassis#033[00m
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.579 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bc02aa54-d19f-4274-8d92-cbabe7917dd9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.580 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb11e81-42af-4565-a034-e04d96294ee5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.581 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 namespace which is not needed anymore#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:19 np0005466031 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000d3.scope: Deactivated successfully.
Oct  2 09:16:19 np0005466031 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000d3.scope: Consumed 13.517s CPU time.
Oct  2 09:16:19 np0005466031 systemd-machined[192227]: Machine qemu-96-instance-000000d3 terminated.
Oct  2 09:16:19 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[328009]: [NOTICE]   (328013) : haproxy version is 2.8.14-c23fe91
Oct  2 09:16:19 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[328009]: [NOTICE]   (328013) : path to executable is /usr/sbin/haproxy
Oct  2 09:16:19 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[328009]: [WARNING]  (328013) : Exiting Master process...
Oct  2 09:16:19 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[328009]: [ALERT]    (328013) : Current worker (328015) exited with code 143 (Terminated)
Oct  2 09:16:19 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[328009]: [WARNING]  (328013) : All workers exited. Exiting... (0)
Oct  2 09:16:19 np0005466031 systemd[1]: libpod-7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef.scope: Deactivated successfully.
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:19 np0005466031 podman[328279]: 2025-10-02 13:16:19.728963475 +0000 UTC m=+0.045904853 container died 7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.743 2 INFO nova.virt.libvirt.driver [-] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Instance destroyed successfully.#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.743 2 DEBUG nova.objects.instance [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'resources' on Instance uuid ada17de8-afd7-427c-a0c2-43de01a22a93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.765 2 DEBUG nova.virt.libvirt.vif [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:15:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-295367902',display_name='tempest-AttachVolumeNegativeTest-server-295367902',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-295367902',id=211,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCPvPkGvbokc/bTgOTHJ5oY2onDNwhF1qi0j6l73K3yGuYkc7yuTqLwiNnp29MXhZ/wOzH4yBBIkBjkn83ksF9F7CFU02MtNEnqjTlSqDEbGK/xa5JKU4qJlfDVRPCNIHA==',key_name='tempest-keypair-1212110004',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:15:53Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-n2k6qlhk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=ada17de8-afd7-427c-a0c2-43de01a22a93,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "address": "fa:16:3e:fb:13:8c", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18de8da1-d8", "ovs_interfaceid": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.765 2 DEBUG nova.network.os_vif_util [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "address": "fa:16:3e:fb:13:8c", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18de8da1-d8", "ovs_interfaceid": "18de8da1-d885-4bcb-b4c7-2051e441b61c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.766 2 DEBUG nova.network.os_vif_util [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:13:8c,bridge_name='br-int',has_traffic_filtering=True,id=18de8da1-d885-4bcb-b4c7-2051e441b61c,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18de8da1-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.767 2 DEBUG os_vif [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:13:8c,bridge_name='br-int',has_traffic_filtering=True,id=18de8da1-d885-4bcb-b4c7-2051e441b61c,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18de8da1-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:16:19 np0005466031 systemd[1]: var-lib-containers-storage-overlay-00b7783496da5acececf032ec3866b577fddf5249a480cc75be375923234cdda-merged.mount: Deactivated successfully.
Oct  2 09:16:19 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef-userdata-shm.mount: Deactivated successfully.
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.775 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18de8da1-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.781 2 INFO os_vif [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:13:8c,bridge_name='br-int',has_traffic_filtering=True,id=18de8da1-d885-4bcb-b4c7-2051e441b61c,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18de8da1-d8')#033[00m
Oct  2 09:16:19 np0005466031 podman[328279]: 2025-10-02 13:16:19.799079144 +0000 UTC m=+0.116020522 container cleanup 7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:16:19 np0005466031 systemd[1]: libpod-conmon-7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef.scope: Deactivated successfully.
Oct  2 09:16:19 np0005466031 podman[328333]: 2025-10-02 13:16:19.859430742 +0000 UTC m=+0.038476209 container remove 7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.865 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0748e871-4ab5-4adf-adcc-60c79e767925]: (4, ('Thu Oct  2 01:16:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 (7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef)\n7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef\nThu Oct  2 01:16:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 (7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef)\n7af6d037c5042a286f7758491b4ac0958c919eb3c90f697883ada867cd56e8ef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.866 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[49c50f2a-7447-462f-bb4e-d65a901b67fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.867 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc02aa54-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:19 np0005466031 kernel: tapbc02aa54-d0: left promiscuous mode
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.872 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7371be07-a6d5-4c77-ab37-47c61aa0902d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:19 np0005466031 nova_compute[235803]: 2025-10-02 13:16:19.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.897 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3078d9ce-2144-4005-985e-0ff90c1e224f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.897 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9ae2c2a6-1b53-44e6-a492-f135bdc859e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.913 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c364c54a-32fa-4639-a940-f937bda91e00]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880834, 'reachable_time': 23518, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328351, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.915 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:16:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:19.915 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[61ae73c7-fd43-43a3-a594-a8b25239a983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:19 np0005466031 systemd[1]: run-netns-ovnmeta\x2dbc02aa54\x2dd19f\x2d4274\x2d8d92\x2dcbabe7917dd9.mount: Deactivated successfully.
Oct  2 09:16:20 np0005466031 nova_compute[235803]: 2025-10-02 13:16:20.177 2 DEBUG nova.compute.manager [req-ad7753b0-3e54-4176-a4d7-45a455e1e0da req-a6c0ee56-812a-4c46-8181-ab3ab38eb602 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Received event network-vif-unplugged-18de8da1-d885-4bcb-b4c7-2051e441b61c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:16:20 np0005466031 nova_compute[235803]: 2025-10-02 13:16:20.177 2 DEBUG oslo_concurrency.lockutils [req-ad7753b0-3e54-4176-a4d7-45a455e1e0da req-a6c0ee56-812a-4c46-8181-ab3ab38eb602 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:20 np0005466031 nova_compute[235803]: 2025-10-02 13:16:20.177 2 DEBUG oslo_concurrency.lockutils [req-ad7753b0-3e54-4176-a4d7-45a455e1e0da req-a6c0ee56-812a-4c46-8181-ab3ab38eb602 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:20 np0005466031 nova_compute[235803]: 2025-10-02 13:16:20.178 2 DEBUG oslo_concurrency.lockutils [req-ad7753b0-3e54-4176-a4d7-45a455e1e0da req-a6c0ee56-812a-4c46-8181-ab3ab38eb602 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:20 np0005466031 nova_compute[235803]: 2025-10-02 13:16:20.178 2 DEBUG nova.compute.manager [req-ad7753b0-3e54-4176-a4d7-45a455e1e0da req-a6c0ee56-812a-4c46-8181-ab3ab38eb602 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] No waiting events found dispatching network-vif-unplugged-18de8da1-d885-4bcb-b4c7-2051e441b61c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:16:20 np0005466031 nova_compute[235803]: 2025-10-02 13:16:20.178 2 DEBUG nova.compute.manager [req-ad7753b0-3e54-4176-a4d7-45a455e1e0da req-a6c0ee56-812a-4c46-8181-ab3ab38eb602 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Received event network-vif-unplugged-18de8da1-d885-4bcb-b4c7-2051e441b61c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:16:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:20.298 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:16:20 np0005466031 nova_compute[235803]: 2025-10-02 13:16:20.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:20.299 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:16:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:20.299 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:20 np0005466031 nova_compute[235803]: 2025-10-02 13:16:20.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:20.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:21 np0005466031 nova_compute[235803]: 2025-10-02 13:16:21.063 2 INFO nova.virt.libvirt.driver [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Deleting instance files /var/lib/nova/instances/ada17de8-afd7-427c-a0c2-43de01a22a93_del#033[00m
Oct  2 09:16:21 np0005466031 nova_compute[235803]: 2025-10-02 13:16:21.064 2 INFO nova.virt.libvirt.driver [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Deletion of /var/lib/nova/instances/ada17de8-afd7-427c-a0c2-43de01a22a93_del complete#033[00m
Oct  2 09:16:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:21 np0005466031 nova_compute[235803]: 2025-10-02 13:16:21.126 2 INFO nova.compute.manager [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Took 1.62 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:16:21 np0005466031 nova_compute[235803]: 2025-10-02 13:16:21.126 2 DEBUG oslo.service.loopingcall [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:16:21 np0005466031 nova_compute[235803]: 2025-10-02 13:16:21.127 2 DEBUG nova.compute.manager [-] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:16:21 np0005466031 nova_compute[235803]: 2025-10-02 13:16:21.127 2 DEBUG nova.network.neutron [-] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:16:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:21.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:21 np0005466031 nova_compute[235803]: 2025-10-02 13:16:21.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.107 2 DEBUG nova.network.neutron [-] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.125 2 INFO nova.compute.manager [-] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Took 1.00 seconds to deallocate network for instance.#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.208 2 DEBUG nova.compute.manager [req-717a5bec-a209-49d7-923d-b87e43566e82 req-90bbe719-4259-40da-8b1d-bc5d6bcffa24 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Received event network-vif-deleted-18de8da1-d885-4bcb-b4c7-2051e441b61c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.215 2 DEBUG oslo_concurrency.lockutils [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.215 2 DEBUG oslo_concurrency.lockutils [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.266 2 DEBUG nova.compute.manager [req-5d3ee22f-c96b-4bf6-b6ca-d48fae5e7cdf req-4079485e-52ed-46ee-95e4-ee5f6062683e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Received event network-vif-plugged-18de8da1-d885-4bcb-b4c7-2051e441b61c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.266 2 DEBUG oslo_concurrency.lockutils [req-5d3ee22f-c96b-4bf6-b6ca-d48fae5e7cdf req-4079485e-52ed-46ee-95e4-ee5f6062683e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.267 2 DEBUG oslo_concurrency.lockutils [req-5d3ee22f-c96b-4bf6-b6ca-d48fae5e7cdf req-4079485e-52ed-46ee-95e4-ee5f6062683e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.267 2 DEBUG oslo_concurrency.lockutils [req-5d3ee22f-c96b-4bf6-b6ca-d48fae5e7cdf req-4079485e-52ed-46ee-95e4-ee5f6062683e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.267 2 DEBUG nova.compute.manager [req-5d3ee22f-c96b-4bf6-b6ca-d48fae5e7cdf req-4079485e-52ed-46ee-95e4-ee5f6062683e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] No waiting events found dispatching network-vif-plugged-18de8da1-d885-4bcb-b4c7-2051e441b61c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.268 2 WARNING nova.compute.manager [req-5d3ee22f-c96b-4bf6-b6ca-d48fae5e7cdf req-4079485e-52ed-46ee-95e4-ee5f6062683e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Received unexpected event network-vif-plugged-18de8da1-d885-4bcb-b4c7-2051e441b61c for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.271 2 DEBUG oslo_concurrency.processutils [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:16:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3683690967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.692 2 DEBUG oslo_concurrency.processutils [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.699 2 DEBUG nova.compute.provider_tree [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.715 2 DEBUG nova.scheduler.client.report [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.736 2 DEBUG oslo_concurrency.lockutils [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.763 2 INFO nova.scheduler.client.report [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Deleted allocations for instance ada17de8-afd7-427c-a0c2-43de01a22a93#033[00m
Oct  2 09:16:22 np0005466031 nova_compute[235803]: 2025-10-02 13:16:22.875 2 DEBUG oslo_concurrency.lockutils [None req-1c99ce0e-4d16-4d5e-a33c-6d5917caf264 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "ada17de8-afd7-427c-a0c2-43de01a22a93" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:16:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:22.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:16:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:23.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:24 np0005466031 nova_compute[235803]: 2025-10-02 13:16:24.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:24.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:25.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:25 np0005466031 nova_compute[235803]: 2025-10-02 13:16:25.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:25.885 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:25.886 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:25.886 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:26 np0005466031 nova_compute[235803]: 2025-10-02 13:16:26.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:26 np0005466031 podman[328380]: 2025-10-02 13:16:26.617957871 +0000 UTC m=+0.049615109 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 09:16:26 np0005466031 nova_compute[235803]: 2025-10-02 13:16:26.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:26 np0005466031 nova_compute[235803]: 2025-10-02 13:16:26.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:16:26 np0005466031 nova_compute[235803]: 2025-10-02 13:16:26.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:16:26 np0005466031 nova_compute[235803]: 2025-10-02 13:16:26.653 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:16:26 np0005466031 podman[328381]: 2025-10-02 13:16:26.654492973 +0000 UTC m=+0.085038219 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:16:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:16:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:26.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:16:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:27.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:27 np0005466031 nova_compute[235803]: 2025-10-02 13:16:27.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:28 np0005466031 nova_compute[235803]: 2025-10-02 13:16:28.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:28 np0005466031 nova_compute[235803]: 2025-10-02 13:16:28.666 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:28 np0005466031 nova_compute[235803]: 2025-10-02 13:16:28.666 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:28 np0005466031 nova_compute[235803]: 2025-10-02 13:16:28.667 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:28 np0005466031 nova_compute[235803]: 2025-10-02 13:16:28.667 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:16:28 np0005466031 nova_compute[235803]: 2025-10-02 13:16:28.667 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:28.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:16:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/256858337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.109 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.285 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.286 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4121MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.287 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.287 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.404 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.405 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.421 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:29.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:16:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1660275554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.912 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.918 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.935 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.957 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:16:29 np0005466031 nova_compute[235803]: 2025-10-02 13:16:29.958 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:30.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:31.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:31 np0005466031 nova_compute[235803]: 2025-10-02 13:16:31.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:31 np0005466031 nova_compute[235803]: 2025-10-02 13:16:31.957 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:32.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:33.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.301 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "af020c32-e373-4276-a7f7-9cef906e2887" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.301 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.314 2 DEBUG nova.compute.manager [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.378 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.379 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.383 2 DEBUG nova.virt.hardware [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.384 2 INFO nova.compute.claims [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.476 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:34 np0005466031 systemd[1]: Starting dnf makecache...
Oct  2 09:16:34 np0005466031 podman[328475]: 2025-10-02 13:16:34.639808274 +0000 UTC m=+0.067859256 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 09:16:34 np0005466031 podman[328476]: 2025-10-02 13:16:34.666606345 +0000 UTC m=+0.093354529 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.742 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410979.7403572, ada17de8-afd7-427c-a0c2-43de01a22a93 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.742 2 INFO nova.compute.manager [-] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.764 2 DEBUG nova.compute.manager [None req-e1487091-ae80-425a-b784-5c91b9ddf3ca - - - - - -] [instance: ada17de8-afd7-427c-a0c2-43de01a22a93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:16:34 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/207042207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:16:34 np0005466031 dnf[328477]: Metadata cache refreshed recently.
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.917 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.926 2 DEBUG nova.compute.provider_tree [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:16:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:16:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:34.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:16:34 np0005466031 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  2 09:16:34 np0005466031 systemd[1]: Finished dnf makecache.
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.946 2 DEBUG nova.scheduler.client.report [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.968 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:34 np0005466031 nova_compute[235803]: 2025-10-02 13:16:34.969 2 DEBUG nova.compute.manager [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.008 2 DEBUG nova.compute.manager [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.008 2 DEBUG nova.network.neutron [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.028 2 INFO nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.050 2 DEBUG nova.compute.manager [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.168 2 DEBUG nova.compute.manager [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.169 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.170 2 INFO nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Creating image(s)#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.204 2 DEBUG nova.storage.rbd_utils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image af020c32-e373-4276-a7f7-9cef906e2887_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.233 2 DEBUG nova.storage.rbd_utils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image af020c32-e373-4276-a7f7-9cef906e2887_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.255 2 DEBUG nova.storage.rbd_utils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image af020c32-e373-4276-a7f7-9cef906e2887_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.258 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.305 2 DEBUG nova.policy [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37083e5fd56c447cb409b86d6394dd43', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f5376733aec4630998da8d11db76561', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.353 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.354 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.354 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.355 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.377 2 DEBUG nova.storage.rbd_utils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image af020c32-e373-4276-a7f7-9cef906e2887_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.381 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 af020c32-e373-4276-a7f7-9cef906e2887_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:16:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:35.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.892 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 af020c32-e373-4276-a7f7-9cef906e2887_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:35 np0005466031 nova_compute[235803]: 2025-10-02 13:16:35.986 2 DEBUG nova.storage.rbd_utils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] resizing rbd image af020c32-e373-4276-a7f7-9cef906e2887_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:16:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:36 np0005466031 nova_compute[235803]: 2025-10-02 13:16:36.181 2 DEBUG nova.network.neutron [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Successfully created port: 3f2008ce-4441-4c3d-ab11-7167851f2421 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:16:36 np0005466031 nova_compute[235803]: 2025-10-02 13:16:36.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:36 np0005466031 nova_compute[235803]: 2025-10-02 13:16:36.630 2 DEBUG nova.objects.instance [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'migration_context' on Instance uuid af020c32-e373-4276-a7f7-9cef906e2887 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:16:36 np0005466031 nova_compute[235803]: 2025-10-02 13:16:36.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:36 np0005466031 nova_compute[235803]: 2025-10-02 13:16:36.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:16:36 np0005466031 nova_compute[235803]: 2025-10-02 13:16:36.644 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:16:36 np0005466031 nova_compute[235803]: 2025-10-02 13:16:36.645 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Ensure instance console log exists: /var/lib/nova/instances/af020c32-e373-4276-a7f7-9cef906e2887/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:16:36 np0005466031 nova_compute[235803]: 2025-10-02 13:16:36.645 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:36 np0005466031 nova_compute[235803]: 2025-10-02 13:16:36.646 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:36 np0005466031 nova_compute[235803]: 2025-10-02 13:16:36.646 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:16:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:36.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:16:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:37.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:37 np0005466031 nova_compute[235803]: 2025-10-02 13:16:37.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:38.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:39 np0005466031 nova_compute[235803]: 2025-10-02 13:16:39.112 2 DEBUG nova.network.neutron [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Successfully updated port: 3f2008ce-4441-4c3d-ab11-7167851f2421 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:16:39 np0005466031 nova_compute[235803]: 2025-10-02 13:16:39.132 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "refresh_cache-af020c32-e373-4276-a7f7-9cef906e2887" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:16:39 np0005466031 nova_compute[235803]: 2025-10-02 13:16:39.132 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquired lock "refresh_cache-af020c32-e373-4276-a7f7-9cef906e2887" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:16:39 np0005466031 nova_compute[235803]: 2025-10-02 13:16:39.133 2 DEBUG nova.network.neutron [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:16:39 np0005466031 nova_compute[235803]: 2025-10-02 13:16:39.222 2 DEBUG nova.compute.manager [req-a463a549-20d2-48ba-8795-af07fe973c29 req-4ddf7ac7-1be0-469a-92f7-83a2bd883f0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Received event network-changed-3f2008ce-4441-4c3d-ab11-7167851f2421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:16:39 np0005466031 nova_compute[235803]: 2025-10-02 13:16:39.223 2 DEBUG nova.compute.manager [req-a463a549-20d2-48ba-8795-af07fe973c29 req-4ddf7ac7-1be0-469a-92f7-83a2bd883f0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Refreshing instance network info cache due to event network-changed-3f2008ce-4441-4c3d-ab11-7167851f2421. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:16:39 np0005466031 nova_compute[235803]: 2025-10-02 13:16:39.223 2 DEBUG oslo_concurrency.lockutils [req-a463a549-20d2-48ba-8795-af07fe973c29 req-4ddf7ac7-1be0-469a-92f7-83a2bd883f0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-af020c32-e373-4276-a7f7-9cef906e2887" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:16:39 np0005466031 nova_compute[235803]: 2025-10-02 13:16:39.329 2 DEBUG nova.network.neutron [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:16:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:16:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:39.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:16:39 np0005466031 nova_compute[235803]: 2025-10-02 13:16:39.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.169 2 DEBUG nova.network.neutron [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Updating instance_info_cache with network_info: [{"id": "3f2008ce-4441-4c3d-ab11-7167851f2421", "address": "fa:16:3e:bc:d9:48", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f2008ce-44", "ovs_interfaceid": "3f2008ce-4441-4c3d-ab11-7167851f2421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.188 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Releasing lock "refresh_cache-af020c32-e373-4276-a7f7-9cef906e2887" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.189 2 DEBUG nova.compute.manager [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Instance network_info: |[{"id": "3f2008ce-4441-4c3d-ab11-7167851f2421", "address": "fa:16:3e:bc:d9:48", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f2008ce-44", "ovs_interfaceid": "3f2008ce-4441-4c3d-ab11-7167851f2421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.189 2 DEBUG oslo_concurrency.lockutils [req-a463a549-20d2-48ba-8795-af07fe973c29 req-4ddf7ac7-1be0-469a-92f7-83a2bd883f0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-af020c32-e373-4276-a7f7-9cef906e2887" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.189 2 DEBUG nova.network.neutron [req-a463a549-20d2-48ba-8795-af07fe973c29 req-4ddf7ac7-1be0-469a-92f7-83a2bd883f0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Refreshing network info cache for port 3f2008ce-4441-4c3d-ab11-7167851f2421 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.192 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Start _get_guest_xml network_info=[{"id": "3f2008ce-4441-4c3d-ab11-7167851f2421", "address": "fa:16:3e:bc:d9:48", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f2008ce-44", "ovs_interfaceid": "3f2008ce-4441-4c3d-ab11-7167851f2421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.196 2 WARNING nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.201 2 DEBUG nova.virt.libvirt.host [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.202 2 DEBUG nova.virt.libvirt.host [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.207 2 DEBUG nova.virt.libvirt.host [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.208 2 DEBUG nova.virt.libvirt.host [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.209 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.209 2 DEBUG nova.virt.hardware [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.209 2 DEBUG nova.virt.hardware [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.210 2 DEBUG nova.virt.hardware [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.210 2 DEBUG nova.virt.hardware [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.210 2 DEBUG nova.virt.hardware [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.210 2 DEBUG nova.virt.hardware [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.211 2 DEBUG nova.virt.hardware [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.211 2 DEBUG nova.virt.hardware [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.211 2 DEBUG nova.virt.hardware [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.211 2 DEBUG nova.virt.hardware [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.212 2 DEBUG nova.virt.hardware [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.214 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:16:40 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/637609681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.668 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.692 2 DEBUG nova.storage.rbd_utils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image af020c32-e373-4276-a7f7-9cef906e2887_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:16:40 np0005466031 nova_compute[235803]: 2025-10-02 13:16:40.696 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:40.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:16:41 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2286483105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.128 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.130 2 DEBUG nova.virt.libvirt.vif [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:16:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1884209442',display_name='tempest-AttachVolumeNegativeTest-server-1884209442',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1884209442',id=212,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIhf2wHfqgtvFDTorTVcHBDxtewojCSMrxfDL/1/FVGT3dbHqIbuW6RQ66nsEV/sxD5b5Cbv5NB5Dxw8GhCgP4Lx+iX/EgMA3WUfIsAb2HlP+CoFd/N8SguqRWaaFi+xCA==',key_name='tempest-keypair-121463875',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-4zeys47s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:16:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=af020c32-e373-4276-a7f7-9cef906e2887,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f2008ce-4441-4c3d-ab11-7167851f2421", "address": "fa:16:3e:bc:d9:48", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f2008ce-44", "ovs_interfaceid": "3f2008ce-4441-4c3d-ab11-7167851f2421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.130 2 DEBUG nova.network.os_vif_util [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "3f2008ce-4441-4c3d-ab11-7167851f2421", "address": "fa:16:3e:bc:d9:48", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f2008ce-44", "ovs_interfaceid": "3f2008ce-4441-4c3d-ab11-7167851f2421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.131 2 DEBUG nova.network.os_vif_util [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:d9:48,bridge_name='br-int',has_traffic_filtering=True,id=3f2008ce-4441-4c3d-ab11-7167851f2421,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f2008ce-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.133 2 DEBUG nova.objects.instance [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'pci_devices' on Instance uuid af020c32-e373-4276-a7f7-9cef906e2887 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.150 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  <uuid>af020c32-e373-4276-a7f7-9cef906e2887</uuid>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  <name>instance-000000d4</name>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <nova:name>tempest-AttachVolumeNegativeTest-server-1884209442</nova:name>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:16:40</nova:creationTime>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <nova:user uuid="37083e5fd56c447cb409b86d6394dd43">tempest-AttachVolumeNegativeTest-1084646737-project-member</nova:user>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <nova:project uuid="7f5376733aec4630998da8d11db76561">tempest-AttachVolumeNegativeTest-1084646737</nova:project>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <nova:port uuid="3f2008ce-4441-4c3d-ab11-7167851f2421">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <entry name="serial">af020c32-e373-4276-a7f7-9cef906e2887</entry>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <entry name="uuid">af020c32-e373-4276-a7f7-9cef906e2887</entry>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/af020c32-e373-4276-a7f7-9cef906e2887_disk">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/af020c32-e373-4276-a7f7-9cef906e2887_disk.config">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:bc:d9:48"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <target dev="tap3f2008ce-44"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/af020c32-e373-4276-a7f7-9cef906e2887/console.log" append="off"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:16:41 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:16:41 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:16:41 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:16:41 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.151 2 DEBUG nova.compute.manager [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Preparing to wait for external event network-vif-plugged-3f2008ce-4441-4c3d-ab11-7167851f2421 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.152 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "af020c32-e373-4276-a7f7-9cef906e2887-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.152 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.152 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.153 2 DEBUG nova.virt.libvirt.vif [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:16:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1884209442',display_name='tempest-AttachVolumeNegativeTest-server-1884209442',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1884209442',id=212,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIhf2wHfqgtvFDTorTVcHBDxtewojCSMrxfDL/1/FVGT3dbHqIbuW6RQ66nsEV/sxD5b5Cbv5NB5Dxw8GhCgP4Lx+iX/EgMA3WUfIsAb2HlP+CoFd/N8SguqRWaaFi+xCA==',key_name='tempest-keypair-121463875',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-4zeys47s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:16:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=af020c32-e373-4276-a7f7-9cef906e2887,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f2008ce-4441-4c3d-ab11-7167851f2421", "address": "fa:16:3e:bc:d9:48", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f2008ce-44", "ovs_interfaceid": "3f2008ce-4441-4c3d-ab11-7167851f2421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.153 2 DEBUG nova.network.os_vif_util [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "3f2008ce-4441-4c3d-ab11-7167851f2421", "address": "fa:16:3e:bc:d9:48", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f2008ce-44", "ovs_interfaceid": "3f2008ce-4441-4c3d-ab11-7167851f2421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.154 2 DEBUG nova.network.os_vif_util [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:d9:48,bridge_name='br-int',has_traffic_filtering=True,id=3f2008ce-4441-4c3d-ab11-7167851f2421,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f2008ce-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.155 2 DEBUG os_vif [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:d9:48,bridge_name='br-int',has_traffic_filtering=True,id=3f2008ce-4441-4c3d-ab11-7167851f2421,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f2008ce-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.156 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.156 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.159 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f2008ce-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.160 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3f2008ce-44, col_values=(('external_ids', {'iface-id': '3f2008ce-4441-4c3d-ab11-7167851f2421', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:d9:48', 'vm-uuid': 'af020c32-e373-4276-a7f7-9cef906e2887'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:41 np0005466031 NetworkManager[44907]: <info>  [1759411001.1621] manager: (tap3f2008ce-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.168 2 INFO os_vif [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:d9:48,bridge_name='br-int',has_traffic_filtering=True,id=3f2008ce-4441-4c3d-ab11-7167851f2421,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f2008ce-44')#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.215 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.216 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.217 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No VIF found with MAC fa:16:3e:bc:d9:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.218 2 INFO nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Using config drive#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.240 2 DEBUG nova.storage.rbd_utils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image af020c32-e373-4276-a7f7-9cef906e2887_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:16:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:41.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.600 2 INFO nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Creating config drive at /var/lib/nova/instances/af020c32-e373-4276-a7f7-9cef906e2887/disk.config#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.609 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af020c32-e373-4276-a7f7-9cef906e2887/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyt5_tbzg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.752 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af020c32-e373-4276-a7f7-9cef906e2887/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyt5_tbzg" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.779 2 DEBUG nova.storage.rbd_utils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image af020c32-e373-4276-a7f7-9cef906e2887_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.784 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af020c32-e373-4276-a7f7-9cef906e2887/disk.config af020c32-e373-4276-a7f7-9cef906e2887_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.941 2 DEBUG oslo_concurrency.processutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af020c32-e373-4276-a7f7-9cef906e2887/disk.config af020c32-e373-4276-a7f7-9cef906e2887_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:41 np0005466031 nova_compute[235803]: 2025-10-02 13:16:41.942 2 INFO nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Deleting local config drive /var/lib/nova/instances/af020c32-e373-4276-a7f7-9cef906e2887/disk.config because it was imported into RBD.#033[00m
Oct  2 09:16:41 np0005466031 kernel: tap3f2008ce-44: entered promiscuous mode
Oct  2 09:16:42 np0005466031 NetworkManager[44907]: <info>  [1759411001.9991] manager: (tap3f2008ce-44): new Tun device (/org/freedesktop/NetworkManager/Devices/380)
Oct  2 09:16:42 np0005466031 ovn_controller[132413]: 2025-10-02T13:16:42Z|00844|binding|INFO|Claiming lport 3f2008ce-4441-4c3d-ab11-7167851f2421 for this chassis.
Oct  2 09:16:42 np0005466031 ovn_controller[132413]: 2025-10-02T13:16:42Z|00845|binding|INFO|3f2008ce-4441-4c3d-ab11-7167851f2421: Claiming fa:16:3e:bc:d9:48 10.100.0.5
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.041 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:d9:48 10.100.0.5'], port_security=['fa:16:3e:bc:d9:48 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'af020c32-e373-4276-a7f7-9cef906e2887', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f5376733aec4630998da8d11db76561', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'db8bf1c8-4916-40de-a9c2-cac66d40b035', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9548860-2222-48ea-9270-42ff9a0246f7, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=3f2008ce-4441-4c3d-ab11-7167851f2421) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.042 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 3f2008ce-4441-4c3d-ab11-7167851f2421 in datapath bc02aa54-d19f-4274-8d92-cbabe7917dd9 bound to our chassis#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.042 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc02aa54-d19f-4274-8d92-cbabe7917dd9#033[00m
Oct  2 09:16:42 np0005466031 systemd-udevd[328885]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:16:42 np0005466031 ovn_controller[132413]: 2025-10-02T13:16:42Z|00846|binding|INFO|Setting lport 3f2008ce-4441-4c3d-ab11-7167851f2421 ovn-installed in OVS
Oct  2 09:16:42 np0005466031 ovn_controller[132413]: 2025-10-02T13:16:42Z|00847|binding|INFO|Setting lport 3f2008ce-4441-4c3d-ab11-7167851f2421 up in Southbound
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.057 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4293ef70-1be8-46ea-b428-ce74eee23bc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.057 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbc02aa54-d1 in ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.060 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbc02aa54-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.060 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e6992c47-fe17-494b-acb2-cd1689906f4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.061 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[52067996-f59d-4bb0-b3f8-4f6fcbd5d467]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 NetworkManager[44907]: <info>  [1759411002.0632] device (tap3f2008ce-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:16:42 np0005466031 NetworkManager[44907]: <info>  [1759411002.0641] device (tap3f2008ce-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:16:42 np0005466031 systemd-machined[192227]: New machine qemu-97-instance-000000d4.
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.076 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[32049a9b-943f-416d-9e77-4cdbc2502cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 systemd[1]: Started Virtual Machine qemu-97-instance-000000d4.
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.100 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9fd36f-811f-4c21-9ebd-ca169fafcc8d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.133 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ecded22e-206d-460c-9f15-55334922b21c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 NetworkManager[44907]: <info>  [1759411002.1406] manager: (tapbc02aa54-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/381)
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.139 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5f3a2e-5d0e-4b47-b136-3fbb88473184]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 systemd-udevd[328900]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.167 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d00dc091-7fa3-4901-9c51-b82229000662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.171 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[208b156f-d8dd-4dbe-b5a4-7dc9b8a5ae0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 NetworkManager[44907]: <info>  [1759411002.1978] device (tapbc02aa54-d0): carrier: link connected
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.203 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[398649b6-5961-4331-9014-db1c2c056de0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.222 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[09318481-8e9b-443f-834f-e7d0f8cae776]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc02aa54-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fc:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 885778, 'reachable_time': 22172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328994, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.243 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5b27b850-9164-4164-95d5-4c1f315916e4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feba:fc0a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 885778, 'tstamp': 885778}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328996, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.264 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[385813ec-3889-45a7-9506-691998def75f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc02aa54-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fc:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 885778, 'reachable_time': 22172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329004, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.299 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[684088fb-fdad-4f15-b09f-83391385e039]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.364 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f1bc1dbb-5f43-4743-8e00-7ac849f60522]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.365 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc02aa54-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.365 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.366 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc02aa54-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:42 np0005466031 NetworkManager[44907]: <info>  [1759411002.3689] manager: (tapbc02aa54-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Oct  2 09:16:42 np0005466031 kernel: tapbc02aa54-d0: entered promiscuous mode
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.371 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc02aa54-d0, col_values=(('external_ids', {'iface-id': '05da4d4e-44a6-4aa1-b470-d9ad03ff2e45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:42 np0005466031 ovn_controller[132413]: 2025-10-02T13:16:42Z|00848|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.391 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bc02aa54-d19f-4274-8d92-cbabe7917dd9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bc02aa54-d19f-4274-8d92-cbabe7917dd9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.392 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[11d73317-b2e0-4bde-b67f-0a6b88013b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.393 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-bc02aa54-d19f-4274-8d92-cbabe7917dd9
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/bc02aa54-d19f-4274-8d92-cbabe7917dd9.pid.haproxy
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID bc02aa54-d19f-4274-8d92-cbabe7917dd9
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:16:42 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:16:42.396 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'env', 'PROCESS_TAG=haproxy-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bc02aa54-d19f-4274-8d92-cbabe7917dd9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.464 2 DEBUG nova.compute.manager [req-59bd0bda-811f-425d-b7db-8b9c1a0b69ac req-04b182c1-566b-4803-9cdd-37d098e74a81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Received event network-vif-plugged-3f2008ce-4441-4c3d-ab11-7167851f2421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.464 2 DEBUG oslo_concurrency.lockutils [req-59bd0bda-811f-425d-b7db-8b9c1a0b69ac req-04b182c1-566b-4803-9cdd-37d098e74a81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "af020c32-e373-4276-a7f7-9cef906e2887-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.465 2 DEBUG oslo_concurrency.lockutils [req-59bd0bda-811f-425d-b7db-8b9c1a0b69ac req-04b182c1-566b-4803-9cdd-37d098e74a81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.465 2 DEBUG oslo_concurrency.lockutils [req-59bd0bda-811f-425d-b7db-8b9c1a0b69ac req-04b182c1-566b-4803-9cdd-37d098e74a81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.465 2 DEBUG nova.compute.manager [req-59bd0bda-811f-425d-b7db-8b9c1a0b69ac req-04b182c1-566b-4803-9cdd-37d098e74a81 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Processing event network-vif-plugged-3f2008ce-4441-4c3d-ab11-7167851f2421 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:16:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:16:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 72K writes, 294K keys, 72K commit groups, 1.0 writes per commit group, ingest: 0.30 GB, 0.06 MB/s#012Cumulative WAL: 72K writes, 26K syncs, 2.73 writes per sync, written: 0.30 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7949 writes, 30K keys, 7949 commit groups, 1.0 writes per commit group, ingest: 32.65 MB, 0.05 MB/s#012Interval WAL: 7949 writes, 3033 syncs, 2.62 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.596 2 DEBUG nova.network.neutron [req-a463a549-20d2-48ba-8795-af07fe973c29 req-4ddf7ac7-1be0-469a-92f7-83a2bd883f0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Updated VIF entry in instance network info cache for port 3f2008ce-4441-4c3d-ab11-7167851f2421. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.597 2 DEBUG nova.network.neutron [req-a463a549-20d2-48ba-8795-af07fe973c29 req-4ddf7ac7-1be0-469a-92f7-83a2bd883f0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Updating instance_info_cache with network_info: [{"id": "3f2008ce-4441-4c3d-ab11-7167851f2421", "address": "fa:16:3e:bc:d9:48", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f2008ce-44", "ovs_interfaceid": "3f2008ce-4441-4c3d-ab11-7167851f2421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:16:42 np0005466031 nova_compute[235803]: 2025-10-02 13:16:42.616 2 DEBUG oslo_concurrency.lockutils [req-a463a549-20d2-48ba-8795-af07fe973c29 req-4ddf7ac7-1be0-469a-92f7-83a2bd883f0f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-af020c32-e373-4276-a7f7-9cef906e2887" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:16:42 np0005466031 podman[329165]: 2025-10-02 13:16:42.776630108 +0000 UTC m=+0.045002147 container create 407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:16:42 np0005466031 podman[329152]: 2025-10-02 13:16:42.791933679 +0000 UTC m=+0.074049234 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 09:16:42 np0005466031 systemd[1]: Started libpod-conmon-407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00.scope.
Oct  2 09:16:42 np0005466031 podman[329165]: 2025-10-02 13:16:42.755706865 +0000 UTC m=+0.024078924 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:16:42 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:16:42 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b8c0500ac45f7a805e71f17a1ef487d8fd57df749936bfd64e689518d5e133d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:16:42 np0005466031 podman[329165]: 2025-10-02 13:16:42.871314024 +0000 UTC m=+0.139686083 container init 407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 09:16:42 np0005466031 podman[329165]: 2025-10-02 13:16:42.876756621 +0000 UTC m=+0.145128670 container start 407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 09:16:42 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[329195]: [NOTICE]   (329199) : New worker (329201) forked
Oct  2 09:16:42 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[329195]: [NOTICE]   (329199) : Loading success.
Oct  2 09:16:42 np0005466031 podman[329152]: 2025-10-02 13:16:42.911838711 +0000 UTC m=+0.193954276 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:16:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:42.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.245 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411003.2451901, af020c32-e373-4276-a7f7-9cef906e2887 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.246 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: af020c32-e373-4276-a7f7-9cef906e2887] VM Started (Lifecycle Event)#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.248 2 DEBUG nova.compute.manager [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.251 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.254 2 INFO nova.virt.libvirt.driver [-] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Instance spawned successfully.#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.255 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.270 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.277 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.278 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.279 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.279 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.280 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.281 2 DEBUG nova.virt.libvirt.driver [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.286 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.332 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: af020c32-e373-4276-a7f7-9cef906e2887] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.333 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411003.2454388, af020c32-e373-4276-a7f7-9cef906e2887 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.333 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: af020c32-e373-4276-a7f7-9cef906e2887] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.357 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.362 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411003.2508855, af020c32-e373-4276-a7f7-9cef906e2887 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.363 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: af020c32-e373-4276-a7f7-9cef906e2887] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.373 2 INFO nova.compute.manager [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Took 8.20 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.374 2 DEBUG nova.compute.manager [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.384 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.390 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.411 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: af020c32-e373-4276-a7f7-9cef906e2887] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.449 2 INFO nova.compute.manager [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Took 9.09 seconds to build instance.#033[00m
Oct  2 09:16:43 np0005466031 podman[329326]: 2025-10-02 13:16:43.463810335 +0000 UTC m=+0.065886688 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 09:16:43 np0005466031 nova_compute[235803]: 2025-10-02 13:16:43.476 2 DEBUG oslo_concurrency.lockutils [None req-ce1baa30-dcfa-43a9-9870-2bf082eda570 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:43 np0005466031 podman[329326]: 2025-10-02 13:16:43.485992014 +0000 UTC m=+0.088068357 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 09:16:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:43.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:43 np0005466031 ceph-mgr[76697]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 09:16:43 np0005466031 podman[329389]: 2025-10-02 13:16:43.720691292 +0000 UTC m=+0.059665679 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=keepalived for Ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.buildah.version=1.28.2, name=keepalived, version=2.2.4)
Oct  2 09:16:43 np0005466031 podman[329389]: 2025-10-02 13:16:43.733787389 +0000 UTC m=+0.072761776 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., description=keepalived for Ceph, release=1793, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.component=keepalived-container)
Oct  2 09:16:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:44 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:44 np0005466031 nova_compute[235803]: 2025-10-02 13:16:44.556 2 DEBUG nova.compute.manager [req-f3a52de2-00fc-4450-9105-4b8bfe201b8c req-7be1292a-4246-4b4a-9769-140efb3bf005 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Received event network-vif-plugged-3f2008ce-4441-4c3d-ab11-7167851f2421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:16:44 np0005466031 nova_compute[235803]: 2025-10-02 13:16:44.557 2 DEBUG oslo_concurrency.lockutils [req-f3a52de2-00fc-4450-9105-4b8bfe201b8c req-7be1292a-4246-4b4a-9769-140efb3bf005 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "af020c32-e373-4276-a7f7-9cef906e2887-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:44 np0005466031 nova_compute[235803]: 2025-10-02 13:16:44.557 2 DEBUG oslo_concurrency.lockutils [req-f3a52de2-00fc-4450-9105-4b8bfe201b8c req-7be1292a-4246-4b4a-9769-140efb3bf005 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:44 np0005466031 nova_compute[235803]: 2025-10-02 13:16:44.557 2 DEBUG oslo_concurrency.lockutils [req-f3a52de2-00fc-4450-9105-4b8bfe201b8c req-7be1292a-4246-4b4a-9769-140efb3bf005 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:44 np0005466031 nova_compute[235803]: 2025-10-02 13:16:44.558 2 DEBUG nova.compute.manager [req-f3a52de2-00fc-4450-9105-4b8bfe201b8c req-7be1292a-4246-4b4a-9769-140efb3bf005 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] No waiting events found dispatching network-vif-plugged-3f2008ce-4441-4c3d-ab11-7167851f2421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:16:44 np0005466031 nova_compute[235803]: 2025-10-02 13:16:44.558 2 WARNING nova.compute.manager [req-f3a52de2-00fc-4450-9105-4b8bfe201b8c req-7be1292a-4246-4b4a-9769-140efb3bf005 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Received unexpected event network-vif-plugged-3f2008ce-4441-4c3d-ab11-7167851f2421 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:16:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:44.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:16:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:45 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:16:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:45.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:46 np0005466031 nova_compute[235803]: 2025-10-02 13:16:46.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:46 np0005466031 nova_compute[235803]: 2025-10-02 13:16:46.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:46.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:47.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:48 np0005466031 nova_compute[235803]: 2025-10-02 13:16:48.782 2 DEBUG nova.compute.manager [req-f0223d6a-5ddb-4b33-ada7-fc65da682c13 req-fa806005-28bd-492e-80ac-31ebdd3878c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Received event network-changed-3f2008ce-4441-4c3d-ab11-7167851f2421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:16:48 np0005466031 nova_compute[235803]: 2025-10-02 13:16:48.783 2 DEBUG nova.compute.manager [req-f0223d6a-5ddb-4b33-ada7-fc65da682c13 req-fa806005-28bd-492e-80ac-31ebdd3878c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Refreshing instance network info cache due to event network-changed-3f2008ce-4441-4c3d-ab11-7167851f2421. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:16:48 np0005466031 nova_compute[235803]: 2025-10-02 13:16:48.783 2 DEBUG oslo_concurrency.lockutils [req-f0223d6a-5ddb-4b33-ada7-fc65da682c13 req-fa806005-28bd-492e-80ac-31ebdd3878c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-af020c32-e373-4276-a7f7-9cef906e2887" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:16:48 np0005466031 nova_compute[235803]: 2025-10-02 13:16:48.783 2 DEBUG oslo_concurrency.lockutils [req-f0223d6a-5ddb-4b33-ada7-fc65da682c13 req-fa806005-28bd-492e-80ac-31ebdd3878c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-af020c32-e373-4276-a7f7-9cef906e2887" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:16:48 np0005466031 nova_compute[235803]: 2025-10-02 13:16:48.783 2 DEBUG nova.network.neutron [req-f0223d6a-5ddb-4b33-ada7-fc65da682c13 req-fa806005-28bd-492e-80ac-31ebdd3878c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Refreshing network info cache for port 3f2008ce-4441-4c3d-ab11-7167851f2421 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:16:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:16:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:48.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:16:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:16:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:49.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:16:50 np0005466031 nova_compute[235803]: 2025-10-02 13:16:50.460 2 DEBUG nova.network.neutron [req-f0223d6a-5ddb-4b33-ada7-fc65da682c13 req-fa806005-28bd-492e-80ac-31ebdd3878c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Updated VIF entry in instance network info cache for port 3f2008ce-4441-4c3d-ab11-7167851f2421. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:16:50 np0005466031 nova_compute[235803]: 2025-10-02 13:16:50.462 2 DEBUG nova.network.neutron [req-f0223d6a-5ddb-4b33-ada7-fc65da682c13 req-fa806005-28bd-492e-80ac-31ebdd3878c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Updating instance_info_cache with network_info: [{"id": "3f2008ce-4441-4c3d-ab11-7167851f2421", "address": "fa:16:3e:bc:d9:48", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f2008ce-44", "ovs_interfaceid": "3f2008ce-4441-4c3d-ab11-7167851f2421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:16:50 np0005466031 nova_compute[235803]: 2025-10-02 13:16:50.480 2 DEBUG oslo_concurrency.lockutils [req-f0223d6a-5ddb-4b33-ada7-fc65da682c13 req-fa806005-28bd-492e-80ac-31ebdd3878c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-af020c32-e373-4276-a7f7-9cef906e2887" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:16:50 np0005466031 nova_compute[235803]: 2025-10-02 13:16:50.646 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:50.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:51 np0005466031 nova_compute[235803]: 2025-10-02 13:16:51.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:51.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:51 np0005466031 nova_compute[235803]: 2025-10-02 13:16:51.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:52.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:53.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:54.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:16:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:55.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:16:56 np0005466031 ovn_controller[132413]: 2025-10-02T13:16:56Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:d9:48 10.100.0.5
Oct  2 09:16:56 np0005466031 ovn_controller[132413]: 2025-10-02T13:16:56Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:d9:48 10.100.0.5
Oct  2 09:16:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:56 np0005466031 nova_compute[235803]: 2025-10-02 13:16:56.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:56 np0005466031 nova_compute[235803]: 2025-10-02 13:16:56.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:56.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:57.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:57 np0005466031 podman[329675]: 2025-10-02 13:16:57.643449198 +0000 UTC m=+0.064503208 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:16:57 np0005466031 podman[329676]: 2025-10-02 13:16:57.685441397 +0000 UTC m=+0.100506045 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:16:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:16:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:58.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:16:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:16:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:59.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:00.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:01 np0005466031 nova_compute[235803]: 2025-10-02 13:17:01.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:01.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:01 np0005466031 nova_compute[235803]: 2025-10-02 13:17:01.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:01 np0005466031 nova_compute[235803]: 2025-10-02 13:17:01.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:17:01 np0005466031 nova_compute[235803]: 2025-10-02 13:17:01.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:02.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:03.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:04.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:05.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:05 np0005466031 podman[329724]: 2025-10-02 13:17:05.643858495 +0000 UTC m=+0.067678360 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:17:05 np0005466031 podman[329723]: 2025-10-02 13:17:05.649649202 +0000 UTC m=+0.072991583 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:17:05 np0005466031 nova_compute[235803]: 2025-10-02 13:17:05.832 2 DEBUG oslo_concurrency.lockutils [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "af020c32-e373-4276-a7f7-9cef906e2887" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:05 np0005466031 nova_compute[235803]: 2025-10-02 13:17:05.832 2 DEBUG oslo_concurrency.lockutils [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:05 np0005466031 nova_compute[235803]: 2025-10-02 13:17:05.847 2 DEBUG nova.objects.instance [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'flavor' on Instance uuid af020c32-e373-4276-a7f7-9cef906e2887 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:17:05 np0005466031 nova_compute[235803]: 2025-10-02 13:17:05.882 2 DEBUG oslo_concurrency.lockutils [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.075 2 DEBUG oslo_concurrency.lockutils [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "af020c32-e373-4276-a7f7-9cef906e2887" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.076 2 DEBUG oslo_concurrency.lockutils [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.076 2 INFO nova.compute.manager [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Attaching volume 60bad5ae-8a46-4e2e-abef-fbab46d4d0c9 to /dev/vdb#033[00m
Oct  2 09:17:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.215 2 DEBUG os_brick.utils [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.216 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.230 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.230 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[6f598115-4221-42bf-bdbf-d2895f94f237]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.232 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.243 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.243 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[3d09c1e1-1053-4713-8e96-0b224559c81c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.244 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.256 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.256 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[182f9b37-89ff-467d-90bd-0df5256e9e5c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.257 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[8305ffcc-d821-4da0-aabe-e7dd99220da7]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.257 2 DEBUG oslo_concurrency.processutils [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.289 2 DEBUG oslo_concurrency.processutils [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.292 2 DEBUG os_brick.initiator.connectors.lightos [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.292 2 DEBUG os_brick.initiator.connectors.lightos [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.292 2 DEBUG os_brick.initiator.connectors.lightos [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.292 2 DEBUG os_brick.utils [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.293 2 DEBUG nova.virt.block_device [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Updating existing volume attachment record: 539e91b3-9c50-4c72-af75-a7fd852145f8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.655 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.655 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.674 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:17:06 np0005466031 nova_compute[235803]: 2025-10-02 13:17:06.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:17:06 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4015710855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:17:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:06.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:07 np0005466031 nova_compute[235803]: 2025-10-02 13:17:07.002 2 DEBUG nova.objects.instance [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'flavor' on Instance uuid af020c32-e373-4276-a7f7-9cef906e2887 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:17:07 np0005466031 nova_compute[235803]: 2025-10-02 13:17:07.022 2 DEBUG nova.virt.libvirt.driver [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Attempting to attach volume 60bad5ae-8a46-4e2e-abef-fbab46d4d0c9 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 09:17:07 np0005466031 nova_compute[235803]: 2025-10-02 13:17:07.025 2 DEBUG nova.virt.libvirt.guest [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 09:17:07 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:17:07 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-60bad5ae-8a46-4e2e-abef-fbab46d4d0c9">
Oct  2 09:17:07 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:17:07 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:17:07 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:17:07 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:17:07 np0005466031 nova_compute[235803]:  <auth username="openstack">
Oct  2 09:17:07 np0005466031 nova_compute[235803]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:17:07 np0005466031 nova_compute[235803]:  </auth>
Oct  2 09:17:07 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:17:07 np0005466031 nova_compute[235803]:  <serial>60bad5ae-8a46-4e2e-abef-fbab46d4d0c9</serial>
Oct  2 09:17:07 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:17:07 np0005466031 nova_compute[235803]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 09:17:07 np0005466031 nova_compute[235803]: 2025-10-02 13:17:07.176 2 DEBUG nova.virt.libvirt.driver [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:17:07 np0005466031 nova_compute[235803]: 2025-10-02 13:17:07.177 2 DEBUG nova.virt.libvirt.driver [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:17:07 np0005466031 nova_compute[235803]: 2025-10-02 13:17:07.177 2 DEBUG nova.virt.libvirt.driver [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:17:07 np0005466031 nova_compute[235803]: 2025-10-02 13:17:07.177 2 DEBUG nova.virt.libvirt.driver [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No VIF found with MAC fa:16:3e:bc:d9:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:17:07 np0005466031 nova_compute[235803]: 2025-10-02 13:17:07.355 2 DEBUG oslo_concurrency.lockutils [None req-fb9c7cdc-ca27-4eb1-9f92-fae0198bebb9 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:07.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:08.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:08 np0005466031 nova_compute[235803]: 2025-10-02 13:17:08.992 2 DEBUG oslo_concurrency.lockutils [None req-d91ce10b-1182-4949-8405-26bef1c8f6bc 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "af020c32-e373-4276-a7f7-9cef906e2887" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:08 np0005466031 nova_compute[235803]: 2025-10-02 13:17:08.992 2 DEBUG oslo_concurrency.lockutils [None req-d91ce10b-1182-4949-8405-26bef1c8f6bc 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:09 np0005466031 nova_compute[235803]: 2025-10-02 13:17:09.013 2 INFO nova.compute.manager [None req-d91ce10b-1182-4949-8405-26bef1c8f6bc 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Detaching volume 60bad5ae-8a46-4e2e-abef-fbab46d4d0c9#033[00m
Oct  2 09:17:09 np0005466031 nova_compute[235803]: 2025-10-02 13:17:09.141 2 INFO nova.virt.block_device [None req-d91ce10b-1182-4949-8405-26bef1c8f6bc 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Attempting to driver detach volume 60bad5ae-8a46-4e2e-abef-fbab46d4d0c9 from mountpoint /dev/vdb#033[00m
Oct  2 09:17:09 np0005466031 nova_compute[235803]: 2025-10-02 13:17:09.149 2 DEBUG nova.virt.libvirt.driver [None req-d91ce10b-1182-4949-8405-26bef1c8f6bc 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Attempting to detach device vdb from instance af020c32-e373-4276-a7f7-9cef906e2887 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 09:17:09 np0005466031 nova_compute[235803]: 2025-10-02 13:17:09.149 2 DEBUG nova.virt.libvirt.guest [None req-d91ce10b-1182-4949-8405-26bef1c8f6bc 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:17:09 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-60bad5ae-8a46-4e2e-abef-fbab46d4d0c9">
Oct  2 09:17:09 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:  <serial>60bad5ae-8a46-4e2e-abef-fbab46d4d0c9</serial>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:17:09 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:17:09 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:17:09 np0005466031 nova_compute[235803]: 2025-10-02 13:17:09.160 2 INFO nova.virt.libvirt.driver [None req-d91ce10b-1182-4949-8405-26bef1c8f6bc 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully detached device vdb from instance af020c32-e373-4276-a7f7-9cef906e2887 from the persistent domain config.#033[00m
Oct  2 09:17:09 np0005466031 nova_compute[235803]: 2025-10-02 13:17:09.161 2 DEBUG nova.virt.libvirt.driver [None req-d91ce10b-1182-4949-8405-26bef1c8f6bc 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance af020c32-e373-4276-a7f7-9cef906e2887 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 09:17:09 np0005466031 nova_compute[235803]: 2025-10-02 13:17:09.161 2 DEBUG nova.virt.libvirt.guest [None req-d91ce10b-1182-4949-8405-26bef1c8f6bc 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:17:09 np0005466031 nova_compute[235803]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:  <source protocol="rbd" name="volumes/volume-60bad5ae-8a46-4e2e-abef-fbab46d4d0c9">
Oct  2 09:17:09 np0005466031 nova_compute[235803]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:  </source>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:  <serial>60bad5ae-8a46-4e2e-abef-fbab46d4d0c9</serial>
Oct  2 09:17:09 np0005466031 nova_compute[235803]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:17:09 np0005466031 nova_compute[235803]: </disk>
Oct  2 09:17:09 np0005466031 nova_compute[235803]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:17:09 np0005466031 nova_compute[235803]: 2025-10-02 13:17:09.300 2 DEBUG nova.virt.libvirt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Received event <DeviceRemovedEvent: 1759411029.2988217, af020c32-e373-4276-a7f7-9cef906e2887 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 09:17:09 np0005466031 nova_compute[235803]: 2025-10-02 13:17:09.301 2 DEBUG nova.virt.libvirt.driver [None req-d91ce10b-1182-4949-8405-26bef1c8f6bc 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance af020c32-e373-4276-a7f7-9cef906e2887 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 09:17:09 np0005466031 nova_compute[235803]: 2025-10-02 13:17:09.303 2 INFO nova.virt.libvirt.driver [None req-d91ce10b-1182-4949-8405-26bef1c8f6bc 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully detached device vdb from instance af020c32-e373-4276-a7f7-9cef906e2887 from the live domain config.#033[00m
Oct  2 09:17:09 np0005466031 nova_compute[235803]: 2025-10-02 13:17:09.466 2 DEBUG nova.objects.instance [None req-d91ce10b-1182-4949-8405-26bef1c8f6bc 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'flavor' on Instance uuid af020c32-e373-4276-a7f7-9cef906e2887 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:17:09 np0005466031 nova_compute[235803]: 2025-10-02 13:17:09.500 2 DEBUG oslo_concurrency.lockutils [None req-d91ce10b-1182-4949-8405-26bef1c8f6bc 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:09.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:10 np0005466031 nova_compute[235803]: 2025-10-02 13:17:10.433 2 DEBUG oslo_concurrency.lockutils [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "af020c32-e373-4276-a7f7-9cef906e2887" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:10 np0005466031 nova_compute[235803]: 2025-10-02 13:17:10.434 2 DEBUG oslo_concurrency.lockutils [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:10 np0005466031 nova_compute[235803]: 2025-10-02 13:17:10.434 2 DEBUG oslo_concurrency.lockutils [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "af020c32-e373-4276-a7f7-9cef906e2887-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:10 np0005466031 nova_compute[235803]: 2025-10-02 13:17:10.435 2 DEBUG oslo_concurrency.lockutils [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:10 np0005466031 nova_compute[235803]: 2025-10-02 13:17:10.435 2 DEBUG oslo_concurrency.lockutils [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:10 np0005466031 nova_compute[235803]: 2025-10-02 13:17:10.437 2 INFO nova.compute.manager [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Terminating instance#033[00m
Oct  2 09:17:10 np0005466031 nova_compute[235803]: 2025-10-02 13:17:10.438 2 DEBUG nova.compute.manager [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:17:10 np0005466031 kernel: tap3f2008ce-44 (unregistering): left promiscuous mode
Oct  2 09:17:10 np0005466031 NetworkManager[44907]: <info>  [1759411030.8132] device (tap3f2008ce-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:17:10 np0005466031 nova_compute[235803]: 2025-10-02 13:17:10.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:10 np0005466031 ovn_controller[132413]: 2025-10-02T13:17:10Z|00849|binding|INFO|Releasing lport 3f2008ce-4441-4c3d-ab11-7167851f2421 from this chassis (sb_readonly=0)
Oct  2 09:17:10 np0005466031 ovn_controller[132413]: 2025-10-02T13:17:10Z|00850|binding|INFO|Setting lport 3f2008ce-4441-4c3d-ab11-7167851f2421 down in Southbound
Oct  2 09:17:10 np0005466031 ovn_controller[132413]: 2025-10-02T13:17:10Z|00851|binding|INFO|Removing iface tap3f2008ce-44 ovn-installed in OVS
Oct  2 09:17:10 np0005466031 nova_compute[235803]: 2025-10-02 13:17:10.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:10.832 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:d9:48 10.100.0.5'], port_security=['fa:16:3e:bc:d9:48 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'af020c32-e373-4276-a7f7-9cef906e2887', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f5376733aec4630998da8d11db76561', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'db8bf1c8-4916-40de-a9c2-cac66d40b035', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9548860-2222-48ea-9270-42ff9a0246f7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=3f2008ce-4441-4c3d-ab11-7167851f2421) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:17:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:10.834 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 3f2008ce-4441-4c3d-ab11-7167851f2421 in datapath bc02aa54-d19f-4274-8d92-cbabe7917dd9 unbound from our chassis#033[00m
Oct  2 09:17:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:10.834 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bc02aa54-d19f-4274-8d92-cbabe7917dd9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:17:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:10.836 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7240af3b-23e3-4d52-b4f9-6badc37bccee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:17:10 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:10.836 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 namespace which is not needed anymore#033[00m
Oct  2 09:17:10 np0005466031 nova_compute[235803]: 2025-10-02 13:17:10.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:10 np0005466031 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000d4.scope: Deactivated successfully.
Oct  2 09:17:10 np0005466031 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000d4.scope: Consumed 14.208s CPU time.
Oct  2 09:17:10 np0005466031 systemd-machined[192227]: Machine qemu-97-instance-000000d4 terminated.
Oct  2 09:17:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:10.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:10 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[329195]: [NOTICE]   (329199) : haproxy version is 2.8.14-c23fe91
Oct  2 09:17:10 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[329195]: [NOTICE]   (329199) : path to executable is /usr/sbin/haproxy
Oct  2 09:17:10 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[329195]: [WARNING]  (329199) : Exiting Master process...
Oct  2 09:17:10 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[329195]: [ALERT]    (329199) : Current worker (329201) exited with code 143 (Terminated)
Oct  2 09:17:10 np0005466031 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[329195]: [WARNING]  (329199) : All workers exited. Exiting... (0)
Oct  2 09:17:10 np0005466031 systemd[1]: libpod-407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00.scope: Deactivated successfully.
Oct  2 09:17:10 np0005466031 podman[329817]: 2025-10-02 13:17:10.99441222 +0000 UTC m=+0.070832291 container died 407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.076 2 INFO nova.virt.libvirt.driver [-] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Instance destroyed successfully.#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.076 2 DEBUG nova.objects.instance [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'resources' on Instance uuid af020c32-e373-4276-a7f7-9cef906e2887 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.091 2 DEBUG nova.virt.libvirt.vif [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:16:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1884209442',display_name='tempest-AttachVolumeNegativeTest-server-1884209442',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1884209442',id=212,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIhf2wHfqgtvFDTorTVcHBDxtewojCSMrxfDL/1/FVGT3dbHqIbuW6RQ66nsEV/sxD5b5Cbv5NB5Dxw8GhCgP4Lx+iX/EgMA3WUfIsAb2HlP+CoFd/N8SguqRWaaFi+xCA==',key_name='tempest-keypair-121463875',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:16:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-4zeys47s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:16:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=af020c32-e373-4276-a7f7-9cef906e2887,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3f2008ce-4441-4c3d-ab11-7167851f2421", "address": "fa:16:3e:bc:d9:48", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f2008ce-44", "ovs_interfaceid": "3f2008ce-4441-4c3d-ab11-7167851f2421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.094 2 DEBUG nova.network.os_vif_util [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "3f2008ce-4441-4c3d-ab11-7167851f2421", "address": "fa:16:3e:bc:d9:48", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f2008ce-44", "ovs_interfaceid": "3f2008ce-4441-4c3d-ab11-7167851f2421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.096 2 DEBUG nova.network.os_vif_util [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bc:d9:48,bridge_name='br-int',has_traffic_filtering=True,id=3f2008ce-4441-4c3d-ab11-7167851f2421,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f2008ce-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.096 2 DEBUG os_vif [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:d9:48,bridge_name='br-int',has_traffic_filtering=True,id=3f2008ce-4441-4c3d-ab11-7167851f2421,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f2008ce-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.099 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f2008ce-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.105 2 INFO os_vif [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:d9:48,bridge_name='br-int',has_traffic_filtering=True,id=3f2008ce-4441-4c3d-ab11-7167851f2421,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f2008ce-44')#033[00m
Oct  2 09:17:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:11 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00-userdata-shm.mount: Deactivated successfully.
Oct  2 09:17:11 np0005466031 systemd[1]: var-lib-containers-storage-overlay-2b8c0500ac45f7a805e71f17a1ef487d8fd57df749936bfd64e689518d5e133d-merged.mount: Deactivated successfully.
Oct  2 09:17:11 np0005466031 podman[329817]: 2025-10-02 13:17:11.174490925 +0000 UTC m=+0.250911006 container cleanup 407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:17:11 np0005466031 systemd[1]: libpod-conmon-407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00.scope: Deactivated successfully.
Oct  2 09:17:11 np0005466031 podman[329874]: 2025-10-02 13:17:11.338020094 +0000 UTC m=+0.140599040 container remove 407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:17:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:11.343 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[aba672ca-4449-443c-a37d-aeab3426dcb3]: (4, ('Thu Oct  2 01:17:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 (407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00)\n407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00\nThu Oct  2 01:17:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 (407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00)\n407d55724555187c2d24c4863ab47d24b3babdf587c8b62777a652080019cd00\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:17:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:11.345 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5176a0a8-49c7-480e-b08f-4f1c43e50b37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:17:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:11.346 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc02aa54-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:17:11 np0005466031 kernel: tapbc02aa54-d0: left promiscuous mode
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:11.355 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b0299b2e-1163-4cda-804e-edd80c00234d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:11.392 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f2abd7b0-8107-4edc-9d57-860b87ac8d46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:17:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:11.393 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ff0497-a620-4fff-8e5d-d4028801b565]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:17:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:11.407 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a3840866-dbb5-4775-97ad-e7ca79cdc506]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 885771, 'reachable_time': 36107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329889, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:17:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:11.409 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:17:11 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:11.410 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[d00d10c7-6fd9-4bbf-a3ea-c145c009517d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:17:11 np0005466031 systemd[1]: run-netns-ovnmeta\x2dbc02aa54\x2dd19f\x2d4274\x2d8d92\x2dcbabe7917dd9.mount: Deactivated successfully.
Oct  2 09:17:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:11.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.681 2 DEBUG nova.compute.manager [req-4d51b15c-1890-4e2d-85e0-d4fb46646ea4 req-1a0e8813-b118-407b-9ab8-074038c76c3d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Received event network-vif-unplugged-3f2008ce-4441-4c3d-ab11-7167851f2421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.682 2 DEBUG oslo_concurrency.lockutils [req-4d51b15c-1890-4e2d-85e0-d4fb46646ea4 req-1a0e8813-b118-407b-9ab8-074038c76c3d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "af020c32-e373-4276-a7f7-9cef906e2887-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.682 2 DEBUG oslo_concurrency.lockutils [req-4d51b15c-1890-4e2d-85e0-d4fb46646ea4 req-1a0e8813-b118-407b-9ab8-074038c76c3d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.682 2 DEBUG oslo_concurrency.lockutils [req-4d51b15c-1890-4e2d-85e0-d4fb46646ea4 req-1a0e8813-b118-407b-9ab8-074038c76c3d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.682 2 DEBUG nova.compute.manager [req-4d51b15c-1890-4e2d-85e0-d4fb46646ea4 req-1a0e8813-b118-407b-9ab8-074038c76c3d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] No waiting events found dispatching network-vif-unplugged-3f2008ce-4441-4c3d-ab11-7167851f2421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.682 2 DEBUG nova.compute.manager [req-4d51b15c-1890-4e2d-85e0-d4fb46646ea4 req-1a0e8813-b118-407b-9ab8-074038c76c3d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Received event network-vif-unplugged-3f2008ce-4441-4c3d-ab11-7167851f2421 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:17:11 np0005466031 nova_compute[235803]: 2025-10-02 13:17:11.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:12 np0005466031 nova_compute[235803]: 2025-10-02 13:17:12.623 2 INFO nova.virt.libvirt.driver [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Deleting instance files /var/lib/nova/instances/af020c32-e373-4276-a7f7-9cef906e2887_del#033[00m
Oct  2 09:17:12 np0005466031 nova_compute[235803]: 2025-10-02 13:17:12.624 2 INFO nova.virt.libvirt.driver [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Deletion of /var/lib/nova/instances/af020c32-e373-4276-a7f7-9cef906e2887_del complete#033[00m
Oct  2 09:17:12 np0005466031 nova_compute[235803]: 2025-10-02 13:17:12.683 2 INFO nova.compute.manager [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Took 2.24 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:17:12 np0005466031 nova_compute[235803]: 2025-10-02 13:17:12.684 2 DEBUG oslo.service.loopingcall [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:17:12 np0005466031 nova_compute[235803]: 2025-10-02 13:17:12.684 2 DEBUG nova.compute.manager [-] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:17:12 np0005466031 nova_compute[235803]: 2025-10-02 13:17:12.684 2 DEBUG nova.network.neutron [-] [instance: af020c32-e373-4276-a7f7-9cef906e2887] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:17:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:12.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.181153) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033181185, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 899, "num_deletes": 251, "total_data_size": 1845751, "memory_usage": 1878352, "flush_reason": "Manual Compaction"}
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033208964, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 1207876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75833, "largest_seqno": 76727, "table_properties": {"data_size": 1203607, "index_size": 1984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9581, "raw_average_key_size": 19, "raw_value_size": 1195082, "raw_average_value_size": 2484, "num_data_blocks": 85, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410972, "oldest_key_time": 1759410972, "file_creation_time": 1759411033, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 27866 microseconds, and 3597 cpu microseconds.
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.209013) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 1207876 bytes OK
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.209038) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.229477) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.229517) EVENT_LOG_v1 {"time_micros": 1759411033229509, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.229566) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 1841163, prev total WAL file size 1841163, number of live WAL files 2.
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.230212) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(1179KB)], [153(13MB)]
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033230239, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 15819299, "oldest_snapshot_seqno": -1}
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 9770 keys, 13945932 bytes, temperature: kUnknown
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033392120, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 13945932, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13880660, "index_size": 39751, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24453, "raw_key_size": 257952, "raw_average_key_size": 26, "raw_value_size": 13707406, "raw_average_value_size": 1403, "num_data_blocks": 1520, "num_entries": 9770, "num_filter_entries": 9770, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759411033, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.392388) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 13945932 bytes
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.403973) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.7 rd, 86.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 13.9 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(24.6) write-amplify(11.5) OK, records in: 10289, records dropped: 519 output_compression: NoCompression
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.404005) EVENT_LOG_v1 {"time_micros": 1759411033403993, "job": 98, "event": "compaction_finished", "compaction_time_micros": 161962, "compaction_time_cpu_micros": 30122, "output_level": 6, "num_output_files": 1, "total_output_size": 13945932, "num_input_records": 10289, "num_output_records": 9770, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033404379, "job": 98, "event": "table_file_deletion", "file_number": 155}
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033407028, "job": 98, "event": "table_file_deletion", "file_number": 153}
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.230166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.407166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.407172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.407175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.407177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:13.407179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:13.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:13 np0005466031 nova_compute[235803]: 2025-10-02 13:17:13.793 2 DEBUG nova.compute.manager [req-ef5c00c3-13f8-44fd-8782-7cb52740a0b3 req-6a254607-9fbb-49f8-be91-4f4151eb02cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Received event network-vif-plugged-3f2008ce-4441-4c3d-ab11-7167851f2421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:17:13 np0005466031 nova_compute[235803]: 2025-10-02 13:17:13.793 2 DEBUG oslo_concurrency.lockutils [req-ef5c00c3-13f8-44fd-8782-7cb52740a0b3 req-6a254607-9fbb-49f8-be91-4f4151eb02cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "af020c32-e373-4276-a7f7-9cef906e2887-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:13 np0005466031 nova_compute[235803]: 2025-10-02 13:17:13.793 2 DEBUG oslo_concurrency.lockutils [req-ef5c00c3-13f8-44fd-8782-7cb52740a0b3 req-6a254607-9fbb-49f8-be91-4f4151eb02cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:13 np0005466031 nova_compute[235803]: 2025-10-02 13:17:13.794 2 DEBUG oslo_concurrency.lockutils [req-ef5c00c3-13f8-44fd-8782-7cb52740a0b3 req-6a254607-9fbb-49f8-be91-4f4151eb02cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:13 np0005466031 nova_compute[235803]: 2025-10-02 13:17:13.794 2 DEBUG nova.compute.manager [req-ef5c00c3-13f8-44fd-8782-7cb52740a0b3 req-6a254607-9fbb-49f8-be91-4f4151eb02cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] No waiting events found dispatching network-vif-plugged-3f2008ce-4441-4c3d-ab11-7167851f2421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:17:13 np0005466031 nova_compute[235803]: 2025-10-02 13:17:13.794 2 WARNING nova.compute.manager [req-ef5c00c3-13f8-44fd-8782-7cb52740a0b3 req-6a254607-9fbb-49f8-be91-4f4151eb02cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Received unexpected event network-vif-plugged-3f2008ce-4441-4c3d-ab11-7167851f2421 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:17:14 np0005466031 nova_compute[235803]: 2025-10-02 13:17:14.073 2 DEBUG nova.network.neutron [-] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:17:14 np0005466031 nova_compute[235803]: 2025-10-02 13:17:14.088 2 INFO nova.compute.manager [-] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Took 1.40 seconds to deallocate network for instance.#033[00m
Oct  2 09:17:14 np0005466031 nova_compute[235803]: 2025-10-02 13:17:14.122 2 DEBUG oslo_concurrency.lockutils [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:14 np0005466031 nova_compute[235803]: 2025-10-02 13:17:14.122 2 DEBUG oslo_concurrency.lockutils [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:14 np0005466031 nova_compute[235803]: 2025-10-02 13:17:14.171 2 DEBUG nova.compute.manager [req-b11308e4-2f6f-46c0-9837-353b10b8d802 req-cfe6af09-5257-4654-8b50-8f6b2853fe09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Received event network-vif-deleted-3f2008ce-4441-4c3d-ab11-7167851f2421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:17:14 np0005466031 nova_compute[235803]: 2025-10-02 13:17:14.174 2 DEBUG oslo_concurrency.processutils [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:17:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:17:14 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3499506024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:17:14 np0005466031 nova_compute[235803]: 2025-10-02 13:17:14.772 2 DEBUG oslo_concurrency.processutils [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:17:14 np0005466031 nova_compute[235803]: 2025-10-02 13:17:14.779 2 DEBUG nova.compute.provider_tree [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:17:14 np0005466031 nova_compute[235803]: 2025-10-02 13:17:14.797 2 DEBUG nova.scheduler.client.report [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:17:14 np0005466031 nova_compute[235803]: 2025-10-02 13:17:14.820 2 DEBUG oslo_concurrency.lockutils [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:14 np0005466031 nova_compute[235803]: 2025-10-02 13:17:14.849 2 INFO nova.scheduler.client.report [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Deleted allocations for instance af020c32-e373-4276-a7f7-9cef906e2887#033[00m
Oct  2 09:17:14 np0005466031 nova_compute[235803]: 2025-10-02 13:17:14.909 2 DEBUG oslo_concurrency.lockutils [None req-7caab7b3-b1c0-4ed5-8445-9b2d946b07a4 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "af020c32-e373-4276-a7f7-9cef906e2887" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:14.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:15.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:16 np0005466031 nova_compute[235803]: 2025-10-02 13:17:16.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.378865) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036378909, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 283, "num_deletes": 251, "total_data_size": 76467, "memory_usage": 81968, "flush_reason": "Manual Compaction"}
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036380925, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 49349, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76732, "largest_seqno": 77010, "table_properties": {"data_size": 47462, "index_size": 115, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5451, "raw_average_key_size": 20, "raw_value_size": 43715, "raw_average_value_size": 162, "num_data_blocks": 5, "num_entries": 269, "num_filter_entries": 269, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411034, "oldest_key_time": 1759411034, "file_creation_time": 1759411036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 2279 microseconds, and 753 cpu microseconds.
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.381151) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 49349 bytes OK
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.381169) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.383100) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.383115) EVENT_LOG_v1 {"time_micros": 1759411036383109, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.383129) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 74343, prev total WAL file size 74343, number of live WAL files 2.
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.383517) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353037' seq:72057594037927935, type:22 .. '6D6772737461740032373539' seq:0, type:0; will stop at (end)
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(48KB)], [156(13MB)]
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036383568, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 13995281, "oldest_snapshot_seqno": -1}
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 9529 keys, 10147144 bytes, temperature: kUnknown
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036477160, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 10147144, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10088377, "index_size": 33822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23877, "raw_key_size": 253143, "raw_average_key_size": 26, "raw_value_size": 9924184, "raw_average_value_size": 1041, "num_data_blocks": 1272, "num_entries": 9529, "num_filter_entries": 9529, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759411036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.477465) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10147144 bytes
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.479246) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.4 rd, 108.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.3 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(489.2) write-amplify(205.6) OK, records in: 10039, records dropped: 510 output_compression: NoCompression
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.479285) EVENT_LOG_v1 {"time_micros": 1759411036479270, "job": 100, "event": "compaction_finished", "compaction_time_micros": 93689, "compaction_time_cpu_micros": 26500, "output_level": 6, "num_output_files": 1, "total_output_size": 10147144, "num_input_records": 10039, "num_output_records": 9529, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036479496, "job": 100, "event": "table_file_deletion", "file_number": 158}
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036482277, "job": 100, "event": "table_file_deletion", "file_number": 156}
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.383393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.482344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.482348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.482350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.482351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:16 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:17:16.482353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:16 np0005466031 nova_compute[235803]: 2025-10-02 13:17:16.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:16.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:17.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:17 np0005466031 nova_compute[235803]: 2025-10-02 13:17:17.656 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000057s ======
Oct  2 09:17:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:18.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Oct  2 09:17:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:19.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:20.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:21 np0005466031 nova_compute[235803]: 2025-10-02 13:17:21.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:21.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:21 np0005466031 nova_compute[235803]: 2025-10-02 13:17:21.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:21 np0005466031 nova_compute[235803]: 2025-10-02 13:17:21.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:22.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:23.465 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:17:23 np0005466031 nova_compute[235803]: 2025-10-02 13:17:23.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:23.466 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:17:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:23.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:24.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:25.468 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:17:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:25.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:25 np0005466031 nova_compute[235803]: 2025-10-02 13:17:25.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:25 np0005466031 nova_compute[235803]: 2025-10-02 13:17:25.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:25.886 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:25.886 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:17:25.886 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:26 np0005466031 nova_compute[235803]: 2025-10-02 13:17:26.074 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411031.073505, af020c32-e373-4276-a7f7-9cef906e2887 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:17:26 np0005466031 nova_compute[235803]: 2025-10-02 13:17:26.075 2 INFO nova.compute.manager [-] [instance: af020c32-e373-4276-a7f7-9cef906e2887] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:17:26 np0005466031 nova_compute[235803]: 2025-10-02 13:17:26.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:26 np0005466031 nova_compute[235803]: 2025-10-02 13:17:26.109 2 DEBUG nova.compute.manager [None req-03776e08-c1a8-4a5c-a048-309c5daacbb3 - - - - - -] [instance: af020c32-e373-4276-a7f7-9cef906e2887] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:17:26 np0005466031 nova_compute[235803]: 2025-10-02 13:17:26.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:26.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:27.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:27 np0005466031 nova_compute[235803]: 2025-10-02 13:17:27.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:28 np0005466031 nova_compute[235803]: 2025-10-02 13:17:28.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:28 np0005466031 nova_compute[235803]: 2025-10-02 13:17:28.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:17:28 np0005466031 nova_compute[235803]: 2025-10-02 13:17:28.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:17:28 np0005466031 nova_compute[235803]: 2025-10-02 13:17:28.652 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:17:28 np0005466031 nova_compute[235803]: 2025-10-02 13:17:28.653 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:28 np0005466031 podman[329974]: 2025-10-02 13:17:28.655748308 +0000 UTC m=+0.086474241 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:17:28 np0005466031 podman[329975]: 2025-10-02 13:17:28.661915065 +0000 UTC m=+0.091931768 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:17:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:28.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:29.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:29 np0005466031 nova_compute[235803]: 2025-10-02 13:17:29.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:29 np0005466031 nova_compute[235803]: 2025-10-02 13:17:29.668 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:29 np0005466031 nova_compute[235803]: 2025-10-02 13:17:29.669 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:29 np0005466031 nova_compute[235803]: 2025-10-02 13:17:29.669 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:29 np0005466031 nova_compute[235803]: 2025-10-02 13:17:29.669 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:17:29 np0005466031 nova_compute[235803]: 2025-10-02 13:17:29.669 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:17:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:17:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/610322793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:17:30 np0005466031 nova_compute[235803]: 2025-10-02 13:17:30.122 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:17:30 np0005466031 nova_compute[235803]: 2025-10-02 13:17:30.269 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:17:30 np0005466031 nova_compute[235803]: 2025-10-02 13:17:30.270 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4116MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 09:17:30 np0005466031 nova_compute[235803]: 2025-10-02 13:17:30.270 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:17:30 np0005466031 nova_compute[235803]: 2025-10-02 13:17:30.270 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:17:30 np0005466031 nova_compute[235803]: 2025-10-02 13:17:30.337 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 09:17:30 np0005466031 nova_compute[235803]: 2025-10-02 13:17:30.337 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 09:17:30 np0005466031 nova_compute[235803]: 2025-10-02 13:17:30.357 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:17:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:17:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2563700799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:17:30 np0005466031 nova_compute[235803]: 2025-10-02 13:17:30.792 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:17:30 np0005466031 nova_compute[235803]: 2025-10-02 13:17:30.797 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:17:30 np0005466031 nova_compute[235803]: 2025-10-02 13:17:30.811 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:17:30 np0005466031 nova_compute[235803]: 2025-10-02 13:17:30.836 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 09:17:30 np0005466031 nova_compute[235803]: 2025-10-02 13:17:30.837 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:17:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:30.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:31 np0005466031 nova_compute[235803]: 2025-10-02 13:17:31.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:17:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:17:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:31.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:17:31 np0005466031 nova_compute[235803]: 2025-10-02 13:17:31.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:17:31 np0005466031 nova_compute[235803]: 2025-10-02 13:17:31.837 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:17:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e396 e396: 3 total, 3 up, 3 in
Oct  2 09:17:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:32.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:33.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e397 e397: 3 total, 3 up, 3 in
Oct  2 09:17:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:35.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:35.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:36 np0005466031 nova_compute[235803]: 2025-10-02 13:17:36.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:17:36 np0005466031 podman[330093]: 2025-10-02 13:17:36.615657118 +0000 UTC m=+0.061700118 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:17:36 np0005466031 podman[330092]: 2025-10-02 13:17:36.643612673 +0000 UTC m=+0.090679913 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:17:36 np0005466031 nova_compute[235803]: 2025-10-02 13:17:36.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:17:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:37.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:37.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:37 np0005466031 nova_compute[235803]: 2025-10-02 13:17:37.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:17:38 np0005466031 nova_compute[235803]: 2025-10-02 13:17:38.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:17:38 np0005466031 nova_compute[235803]: 2025-10-02 13:17:38.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 09:17:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:39.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:39.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:41.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:41 np0005466031 nova_compute[235803]: 2025-10-02 13:17:41.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:17:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e398 e398: 3 total, 3 up, 3 in
Oct  2 09:17:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:41.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:41 np0005466031 nova_compute[235803]: 2025-10-02 13:17:41.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:17:42 np0005466031 nova_compute[235803]: 2025-10-02 13:17:42.095 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:17:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:43.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:43.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:45.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:45.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:46 np0005466031 nova_compute[235803]: 2025-10-02 13:17:46.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:17:46 np0005466031 nova_compute[235803]: 2025-10-02 13:17:46.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:17:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:47.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:47.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:49.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:49.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:51.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:51 np0005466031 nova_compute[235803]: 2025-10-02 13:17:51.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:17:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:51.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:51 np0005466031 nova_compute[235803]: 2025-10-02 13:17:51.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:17:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:53.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:53.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:17:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:55.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:17:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:55.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:17:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:17:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:56 np0005466031 nova_compute[235803]: 2025-10-02 13:17:56.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:17:56 np0005466031 nova_compute[235803]: 2025-10-02 13:17:56.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:17:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:57.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:57.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:59.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:17:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:59.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:59 np0005466031 podman[330469]: 2025-10-02 13:17:59.633127182 +0000 UTC m=+0.058444594 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:17:59 np0005466031 podman[330470]: 2025-10-02 13:17:59.670430016 +0000 UTC m=+0.093144793 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  2 09:18:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e399 e399: 3 total, 3 up, 3 in
Oct  2 09:18:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:18:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:01.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:18:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:01 np0005466031 nova_compute[235803]: 2025-10-02 13:18:01.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:18:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:01.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:01 np0005466031 nova_compute[235803]: 2025-10-02 13:18:01.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:18:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:18:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:18:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:03.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:03.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:05.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:18:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3080546147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:18:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:18:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3080546147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:18:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:05.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:06 np0005466031 nova_compute[235803]: 2025-10-02 13:18:06.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:06 np0005466031 nova_compute[235803]: 2025-10-02 13:18:06.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:07.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:07.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:07 np0005466031 podman[330569]: 2025-10-02 13:18:07.618151263 +0000 UTC m=+0.054109039 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:18:07 np0005466031 podman[330570]: 2025-10-02 13:18:07.642370941 +0000 UTC m=+0.074887228 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:18:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:09.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:09.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:11.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:11 np0005466031 nova_compute[235803]: 2025-10-02 13:18:11.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:11.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:11 np0005466031 nova_compute[235803]: 2025-10-02 13:18:11.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:13.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:13.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:15.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:15.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:16 np0005466031 ovn_controller[132413]: 2025-10-02T13:18:16Z|00852|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  2 09:18:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:16 np0005466031 nova_compute[235803]: 2025-10-02 13:18:16.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:16 np0005466031 nova_compute[235803]: 2025-10-02 13:18:16.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:17.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:17.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:18 np0005466031 nova_compute[235803]: 2025-10-02 13:18:18.656 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:18:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:19.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:18:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:19.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:21.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:21 np0005466031 nova_compute[235803]: 2025-10-02 13:18:21.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:21.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:21 np0005466031 nova_compute[235803]: 2025-10-02 13:18:21.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:18:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/67111490' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:18:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:23.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:23.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:23 np0005466031 nova_compute[235803]: 2025-10-02 13:18:23.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:18:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:25.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:18:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e400 e400: 3 total, 3 up, 3 in
Oct  2 09:18:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:25.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:18:25.886 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:18:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:18:25.886 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:18:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:18:25.886 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:18:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:26 np0005466031 nova_compute[235803]: 2025-10-02 13:18:26.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:26 np0005466031 nova_compute[235803]: 2025-10-02 13:18:26.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:18:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:27.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:18:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:18:27.318 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:18:27 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:18:27.319 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:18:27 np0005466031 nova_compute[235803]: 2025-10-02 13:18:27.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:27.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:27 np0005466031 nova_compute[235803]: 2025-10-02 13:18:27.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:28 np0005466031 nova_compute[235803]: 2025-10-02 13:18:28.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:29.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:29.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:29 np0005466031 nova_compute[235803]: 2025-10-02 13:18:29.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:29 np0005466031 nova_compute[235803]: 2025-10-02 13:18:29.689 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:18:29 np0005466031 nova_compute[235803]: 2025-10-02 13:18:29.689 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:18:29 np0005466031 nova_compute[235803]: 2025-10-02 13:18:29.689 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:18:29 np0005466031 nova_compute[235803]: 2025-10-02 13:18:29.689 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:18:29 np0005466031 nova_compute[235803]: 2025-10-02 13:18:29.690 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:18:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:18:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1475968903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:18:30 np0005466031 nova_compute[235803]: 2025-10-02 13:18:30.132 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:18:30 np0005466031 nova_compute[235803]: 2025-10-02 13:18:30.282 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:18:30 np0005466031 nova_compute[235803]: 2025-10-02 13:18:30.284 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4138MB free_disk=20.966087341308594GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:18:30 np0005466031 nova_compute[235803]: 2025-10-02 13:18:30.284 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:18:30 np0005466031 nova_compute[235803]: 2025-10-02 13:18:30.284 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:18:30 np0005466031 podman[330689]: 2025-10-02 13:18:30.61694328 +0000 UTC m=+0.044351528 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:18:30 np0005466031 podman[330690]: 2025-10-02 13:18:30.656428127 +0000 UTC m=+0.079256303 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 09:18:31 np0005466031 nova_compute[235803]: 2025-10-02 13:18:31.019 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:18:31 np0005466031 nova_compute[235803]: 2025-10-02 13:18:31.020 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:18:31 np0005466031 nova_compute[235803]: 2025-10-02 13:18:31.040 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:18:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:31.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:31 np0005466031 nova_compute[235803]: 2025-10-02 13:18:31.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:18:31 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2399959325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:18:31 np0005466031 nova_compute[235803]: 2025-10-02 13:18:31.501 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:18:31 np0005466031 nova_compute[235803]: 2025-10-02 13:18:31.511 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:18:31 np0005466031 nova_compute[235803]: 2025-10-02 13:18:31.541 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:18:31 np0005466031 nova_compute[235803]: 2025-10-02 13:18:31.543 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:18:31 np0005466031 nova_compute[235803]: 2025-10-02 13:18:31.543 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:18:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:31.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:31 np0005466031 nova_compute[235803]: 2025-10-02 13:18:31.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:32 np0005466031 nova_compute[235803]: 2025-10-02 13:18:32.545 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:32 np0005466031 nova_compute[235803]: 2025-10-02 13:18:32.545 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:18:32 np0005466031 nova_compute[235803]: 2025-10-02 13:18:32.546 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:18:32 np0005466031 nova_compute[235803]: 2025-10-02 13:18:32.565 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:18:32 np0005466031 nova_compute[235803]: 2025-10-02 13:18:32.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:33.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:33.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:35.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:35.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:36 np0005466031 nova_compute[235803]: 2025-10-02 13:18:36.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:36 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:18:36.321 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:18:36 np0005466031 nova_compute[235803]: 2025-10-02 13:18:36.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:37.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:37.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:38 np0005466031 podman[330813]: 2025-10-02 13:18:38.62747953 +0000 UTC m=+0.049811375 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct  2 09:18:38 np0005466031 nova_compute[235803]: 2025-10-02 13:18:38.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:38 np0005466031 podman[330814]: 2025-10-02 13:18:38.657494704 +0000 UTC m=+0.078717978 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Oct  2 09:18:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:39.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:39 np0005466031 nova_compute[235803]: 2025-10-02 13:18:39.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:39 np0005466031 nova_compute[235803]: 2025-10-02 13:18:39.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:18:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:39.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:41.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:41 np0005466031 nova_compute[235803]: 2025-10-02 13:18:41.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:41.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:41 np0005466031 nova_compute[235803]: 2025-10-02 13:18:41.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:43.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:43.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e401 e401: 3 total, 3 up, 3 in
Oct  2 09:18:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:45.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:45.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:46 np0005466031 nova_compute[235803]: 2025-10-02 13:18:46.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:46 np0005466031 nova_compute[235803]: 2025-10-02 13:18:46.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:47.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:47.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:49.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:18:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:49.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:18:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:18:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3546321456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:18:50 np0005466031 nova_compute[235803]: 2025-10-02 13:18:50.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:51.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:51 np0005466031 nova_compute[235803]: 2025-10-02 13:18:51.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e402 e402: 3 total, 3 up, 3 in
Oct  2 09:18:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:51.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:51 np0005466031 nova_compute[235803]: 2025-10-02 13:18:51.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:53.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:53.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:55.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:18:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:55.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:18:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:56 np0005466031 nova_compute[235803]: 2025-10-02 13:18:56.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:56 np0005466031 nova_compute[235803]: 2025-10-02 13:18:56.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:57.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:18:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:57.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:18:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:18:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:59.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:01.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:01 np0005466031 nova_compute[235803]: 2025-10-02 13:19:01.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:01 np0005466031 podman[330911]: 2025-10-02 13:19:01.635873163 +0000 UTC m=+0.061516402 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:19:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:01.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:01 np0005466031 podman[330912]: 2025-10-02 13:19:01.671905861 +0000 UTC m=+0.096160700 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 09:19:01 np0005466031 nova_compute[235803]: 2025-10-02 13:19:01.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:03.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:03 np0005466031 podman[331228]: 2025-10-02 13:19:03.429417688 +0000 UTC m=+0.047675984 container create aa94e831fd6ca8d53c6f3e1189d15f73da798dfe063af19e1cc69328465a4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 09:19:03 np0005466031 systemd[1]: Started libpod-conmon-aa94e831fd6ca8d53c6f3e1189d15f73da798dfe063af19e1cc69328465a4bdc.scope.
Oct  2 09:19:03 np0005466031 podman[331228]: 2025-10-02 13:19:03.406439696 +0000 UTC m=+0.024698012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 09:19:03 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:19:03 np0005466031 podman[331228]: 2025-10-02 13:19:03.535253996 +0000 UTC m=+0.153512322 container init aa94e831fd6ca8d53c6f3e1189d15f73da798dfe063af19e1cc69328465a4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 09:19:03 np0005466031 podman[331228]: 2025-10-02 13:19:03.542978238 +0000 UTC m=+0.161236534 container start aa94e831fd6ca8d53c6f3e1189d15f73da798dfe063af19e1cc69328465a4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  2 09:19:03 np0005466031 podman[331228]: 2025-10-02 13:19:03.546998404 +0000 UTC m=+0.165256700 container attach aa94e831fd6ca8d53c6f3e1189d15f73da798dfe063af19e1cc69328465a4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:19:03 np0005466031 wonderful_blackwell[331245]: 167 167
Oct  2 09:19:03 np0005466031 systemd[1]: libpod-aa94e831fd6ca8d53c6f3e1189d15f73da798dfe063af19e1cc69328465a4bdc.scope: Deactivated successfully.
Oct  2 09:19:03 np0005466031 podman[331228]: 2025-10-02 13:19:03.549394393 +0000 UTC m=+0.167652689 container died aa94e831fd6ca8d53c6f3e1189d15f73da798dfe063af19e1cc69328465a4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:19:03 np0005466031 systemd[1]: var-lib-containers-storage-overlay-73d35803202e02aa5fab8a1ca007b1fa02a50d2f6af3224154cddc9888fc4784-merged.mount: Deactivated successfully.
Oct  2 09:19:03 np0005466031 podman[331228]: 2025-10-02 13:19:03.589301172 +0000 UTC m=+0.207559468 container remove aa94e831fd6ca8d53c6f3e1189d15f73da798dfe063af19e1cc69328465a4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 09:19:03 np0005466031 systemd[1]: libpod-conmon-aa94e831fd6ca8d53c6f3e1189d15f73da798dfe063af19e1cc69328465a4bdc.scope: Deactivated successfully.
Oct  2 09:19:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:19:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:03.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:19:03 np0005466031 podman[331269]: 2025-10-02 13:19:03.767707959 +0000 UTC m=+0.054582483 container create 6ccb0f034bbb4e3e1197803a13f7ffa3eb5e913aa99cfbcec2135814328520db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 09:19:03 np0005466031 systemd[1]: Started libpod-conmon-6ccb0f034bbb4e3e1197803a13f7ffa3eb5e913aa99cfbcec2135814328520db.scope.
Oct  2 09:19:03 np0005466031 podman[331269]: 2025-10-02 13:19:03.746273192 +0000 UTC m=+0.033147736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 09:19:03 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:19:03 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c17c68be2f006317c2ac6f0fc169494726aba79c40a7b5bf540bf5b802bc71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 09:19:03 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c17c68be2f006317c2ac6f0fc169494726aba79c40a7b5bf540bf5b802bc71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 09:19:03 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c17c68be2f006317c2ac6f0fc169494726aba79c40a7b5bf540bf5b802bc71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 09:19:03 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c17c68be2f006317c2ac6f0fc169494726aba79c40a7b5bf540bf5b802bc71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 09:19:03 np0005466031 podman[331269]: 2025-10-02 13:19:03.864385353 +0000 UTC m=+0.151259897 container init 6ccb0f034bbb4e3e1197803a13f7ffa3eb5e913aa99cfbcec2135814328520db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mendeleev, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 09:19:03 np0005466031 podman[331269]: 2025-10-02 13:19:03.870844609 +0000 UTC m=+0.157719133 container start 6ccb0f034bbb4e3e1197803a13f7ffa3eb5e913aa99cfbcec2135814328520db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mendeleev, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 09:19:03 np0005466031 podman[331269]: 2025-10-02 13:19:03.877301215 +0000 UTC m=+0.164175749 container attach 6ccb0f034bbb4e3e1197803a13f7ffa3eb5e913aa99cfbcec2135814328520db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mendeleev, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]: [
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:    {
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:        "available": false,
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:        "ceph_device": false,
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:        "lsm_data": {},
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:        "lvs": [],
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:        "path": "/dev/sr0",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:        "rejected_reasons": [
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "Insufficient space (<5GB)",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "Has a FileSystem"
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:        ],
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:        "sys_api": {
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "actuators": null,
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "device_nodes": "sr0",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "devname": "sr0",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "human_readable_size": "482.00 KB",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "id_bus": "ata",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "model": "QEMU DVD-ROM",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "nr_requests": "2",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "parent": "/dev/sr0",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "partitions": {},
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "path": "/dev/sr0",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "removable": "1",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "rev": "2.5+",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "ro": "0",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "rotational": "0",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "sas_address": "",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "sas_device_handle": "",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "scheduler_mode": "mq-deadline",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "sectors": 0,
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "sectorsize": "2048",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "size": 493568.0,
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "support_discard": "2048",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "type": "disk",
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:            "vendor": "QEMU"
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:        }
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]:    }
Oct  2 09:19:05 np0005466031 ecstatic_mendeleev[331286]: ]
Oct  2 09:19:05 np0005466031 systemd[1]: libpod-6ccb0f034bbb4e3e1197803a13f7ffa3eb5e913aa99cfbcec2135814328520db.scope: Deactivated successfully.
Oct  2 09:19:05 np0005466031 systemd[1]: libpod-6ccb0f034bbb4e3e1197803a13f7ffa3eb5e913aa99cfbcec2135814328520db.scope: Consumed 1.198s CPU time.
Oct  2 09:19:05 np0005466031 podman[331269]: 2025-10-02 13:19:05.069161333 +0000 UTC m=+1.356035857 container died 6ccb0f034bbb4e3e1197803a13f7ffa3eb5e913aa99cfbcec2135814328520db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 09:19:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:05.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:05 np0005466031 systemd[1]: var-lib-containers-storage-overlay-d6c17c68be2f006317c2ac6f0fc169494726aba79c40a7b5bf540bf5b802bc71-merged.mount: Deactivated successfully.
Oct  2 09:19:05 np0005466031 podman[331269]: 2025-10-02 13:19:05.125496025 +0000 UTC m=+1.412370549 container remove 6ccb0f034bbb4e3e1197803a13f7ffa3eb5e913aa99cfbcec2135814328520db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 09:19:05 np0005466031 systemd[1]: libpod-conmon-6ccb0f034bbb4e3e1197803a13f7ffa3eb5e913aa99cfbcec2135814328520db.scope: Deactivated successfully.
Oct  2 09:19:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:05.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:06 np0005466031 nova_compute[235803]: 2025-10-02 13:19:06.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:19:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:19:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:19:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:19:06 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:19:06 np0005466031 nova_compute[235803]: 2025-10-02 13:19:06.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:07.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:07.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:09.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:09 np0005466031 podman[332427]: 2025-10-02 13:19:09.650568022 +0000 UTC m=+0.075679990 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:19:09 np0005466031 podman[332426]: 2025-10-02 13:19:09.652816836 +0000 UTC m=+0.078343146 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:19:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:09.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:11.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:11 np0005466031 nova_compute[235803]: 2025-10-02 13:19:11.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:11.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:11 np0005466031 nova_compute[235803]: 2025-10-02 13:19:11.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:13.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:13.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:19:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:19:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:15.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:19:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:15.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:19:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:16 np0005466031 nova_compute[235803]: 2025-10-02 13:19:16.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:16 np0005466031 nova_compute[235803]: 2025-10-02 13:19:16.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:17.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:17.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:19:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:19.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:19:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:19:19.344 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:19:19 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:19:19.345 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:19:19 np0005466031 nova_compute[235803]: 2025-10-02 13:19:19.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:19:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:19.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:19:20 np0005466031 nova_compute[235803]: 2025-10-02 13:19:20.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:21.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:21 np0005466031 nova_compute[235803]: 2025-10-02 13:19:21.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:21.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:21 np0005466031 nova_compute[235803]: 2025-10-02 13:19:21.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:19:22.347 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:19:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:23.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:23.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:24 np0005466031 nova_compute[235803]: 2025-10-02 13:19:24.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:25.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:25.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:19:25.887 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:19:25.887 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:19:25.887 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:26 np0005466031 nova_compute[235803]: 2025-10-02 13:19:26.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:26 np0005466031 nova_compute[235803]: 2025-10-02 13:19:26.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:27.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:27.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:28 np0005466031 nova_compute[235803]: 2025-10-02 13:19:28.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:29.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:29 np0005466031 nova_compute[235803]: 2025-10-02 13:19:29.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:29.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:19:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:31.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:19:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:31 np0005466031 nova_compute[235803]: 2025-10-02 13:19:31.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:31 np0005466031 nova_compute[235803]: 2025-10-02 13:19:31.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:31 np0005466031 nova_compute[235803]: 2025-10-02 13:19:31.659 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:31 np0005466031 nova_compute[235803]: 2025-10-02 13:19:31.659 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:31 np0005466031 nova_compute[235803]: 2025-10-02 13:19:31.660 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:31 np0005466031 nova_compute[235803]: 2025-10-02 13:19:31.660 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:19:31 np0005466031 nova_compute[235803]: 2025-10-02 13:19:31.660 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:19:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:31.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:19:31 np0005466031 nova_compute[235803]: 2025-10-02 13:19:31.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e403 e403: 3 total, 3 up, 3 in
Oct  2 09:19:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:19:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3859766783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:19:32 np0005466031 nova_compute[235803]: 2025-10-02 13:19:32.122 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:32 np0005466031 nova_compute[235803]: 2025-10-02 13:19:32.283 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:19:32 np0005466031 nova_compute[235803]: 2025-10-02 13:19:32.284 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4132MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:19:32 np0005466031 nova_compute[235803]: 2025-10-02 13:19:32.284 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:32 np0005466031 nova_compute[235803]: 2025-10-02 13:19:32.285 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:32 np0005466031 nova_compute[235803]: 2025-10-02 13:19:32.444 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:19:32 np0005466031 nova_compute[235803]: 2025-10-02 13:19:32.445 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:19:32 np0005466031 nova_compute[235803]: 2025-10-02 13:19:32.606 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:32 np0005466031 podman[332602]: 2025-10-02 13:19:32.632467202 +0000 UTC m=+0.059722921 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:19:32 np0005466031 podman[332603]: 2025-10-02 13:19:32.675412848 +0000 UTC m=+0.091351401 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller)
Oct  2 09:19:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:19:33 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2050096160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:19:33 np0005466031 nova_compute[235803]: 2025-10-02 13:19:33.075 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:33 np0005466031 nova_compute[235803]: 2025-10-02 13:19:33.082 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:19:33 np0005466031 nova_compute[235803]: 2025-10-02 13:19:33.098 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:19:33 np0005466031 nova_compute[235803]: 2025-10-02 13:19:33.100 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:19:33 np0005466031 nova_compute[235803]: 2025-10-02 13:19:33.100 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:19:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:33.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:19:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:19:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:33.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:19:34 np0005466031 nova_compute[235803]: 2025-10-02 13:19:34.100 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:34 np0005466031 nova_compute[235803]: 2025-10-02 13:19:34.101 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:19:34 np0005466031 nova_compute[235803]: 2025-10-02 13:19:34.101 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:19:34 np0005466031 nova_compute[235803]: 2025-10-02 13:19:34.124 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:19:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e404 e404: 3 total, 3 up, 3 in
Oct  2 09:19:34 np0005466031 nova_compute[235803]: 2025-10-02 13:19:34.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:35.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:35.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:36 np0005466031 nova_compute[235803]: 2025-10-02 13:19:36.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:36 np0005466031 nova_compute[235803]: 2025-10-02 13:19:36.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:19:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:37.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:19:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:37.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:38 np0005466031 nova_compute[235803]: 2025-10-02 13:19:38.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:19:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:39.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:19:39 np0005466031 nova_compute[235803]: 2025-10-02 13:19:39.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:39 np0005466031 nova_compute[235803]: 2025-10-02 13:19:39.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:19:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:39.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:40 np0005466031 podman[332722]: 2025-10-02 13:19:40.628853901 +0000 UTC m=+0.060041760 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 09:19:40 np0005466031 podman[332723]: 2025-10-02 13:19:40.653662056 +0000 UTC m=+0.084529185 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:19:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000057s ======
Oct  2 09:19:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:41.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Oct  2 09:19:41 np0005466031 nova_compute[235803]: 2025-10-02 13:19:41.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 e405: 3 total, 3 up, 3 in
Oct  2 09:19:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:41.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:41 np0005466031 nova_compute[235803]: 2025-10-02 13:19:41.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:43.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:43.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:45.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:45.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:46 np0005466031 nova_compute[235803]: 2025-10-02 13:19:46.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:46 np0005466031 nova_compute[235803]: 2025-10-02 13:19:46.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:47.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:19:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:47.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:19:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:49.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:49.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:19:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:51.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:19:51 np0005466031 nova_compute[235803]: 2025-10-02 13:19:51.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:51.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:51 np0005466031 nova_compute[235803]: 2025-10-02 13:19:51.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:53.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:53.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:55.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:55.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:56 np0005466031 nova_compute[235803]: 2025-10-02 13:19:56.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:56 np0005466031 nova_compute[235803]: 2025-10-02 13:19:56.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:57.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:57.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:59.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:19:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:59.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:01.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:01 np0005466031 nova_compute[235803]: 2025-10-02 13:20:01.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:01 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 09:20:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:01.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:01 np0005466031 nova_compute[235803]: 2025-10-02 13:20:01.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:03.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:03 np0005466031 podman[332820]: 2025-10-02 13:20:03.611382091 +0000 UTC m=+0.046923102 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:20:03 np0005466031 podman[332821]: 2025-10-02 13:20:03.652610619 +0000 UTC m=+0.081335183 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:20:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:03.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:05.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:05.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:06 np0005466031 nova_compute[235803]: 2025-10-02 13:20:06.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:06 np0005466031 nova_compute[235803]: 2025-10-02 13:20:06.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:20:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:07.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:20:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:07.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:20:08.019 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:20:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:20:08.020 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:20:08 np0005466031 nova_compute[235803]: 2025-10-02 13:20:08.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:09.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:09.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:11.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:11 np0005466031 nova_compute[235803]: 2025-10-02 13:20:11.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:11 np0005466031 podman[332871]: 2025-10-02 13:20:11.665479724 +0000 UTC m=+0.053712227 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:20:11 np0005466031 podman[332872]: 2025-10-02 13:20:11.673425643 +0000 UTC m=+0.057065644 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:20:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:11.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:11 np0005466031 nova_compute[235803]: 2025-10-02 13:20:11.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:13.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:13.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:15.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:15.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:16 np0005466031 nova_compute[235803]: 2025-10-02 13:20:16.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:16 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:16 np0005466031 nova_compute[235803]: 2025-10-02 13:20:16.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:17 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:20:17.021 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:20:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:17.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:17.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:20:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:20:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:20:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:19.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:20:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:19.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:21.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:21 np0005466031 nova_compute[235803]: 2025-10-02 13:20:21.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:21 np0005466031 nova_compute[235803]: 2025-10-02 13:20:21.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:21.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:21 np0005466031 nova_compute[235803]: 2025-10-02 13:20:21.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:23.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:20:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:23.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:20:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:25.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:25 np0005466031 nova_compute[235803]: 2025-10-02 13:20:25.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:25.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:20:25.888 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:20:25.888 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:20:25.888 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:26 np0005466031 nova_compute[235803]: 2025-10-02 13:20:26.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:26 np0005466031 nova_compute[235803]: 2025-10-02 13:20:26.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:27.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:27.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:29.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:29 np0005466031 nova_compute[235803]: 2025-10-02 13:20:29.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:29.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:30 np0005466031 nova_compute[235803]: 2025-10-02 13:20:30.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:31.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:31 np0005466031 nova_compute[235803]: 2025-10-02 13:20:31.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:31 np0005466031 nova_compute[235803]: 2025-10-02 13:20:31.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:31 np0005466031 nova_compute[235803]: 2025-10-02 13:20:31.676 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:31 np0005466031 nova_compute[235803]: 2025-10-02 13:20:31.677 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:31 np0005466031 nova_compute[235803]: 2025-10-02 13:20:31.677 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:31 np0005466031 nova_compute[235803]: 2025-10-02 13:20:31.678 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:20:31 np0005466031 nova_compute[235803]: 2025-10-02 13:20:31.678 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:20:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:31.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:31 np0005466031 nova_compute[235803]: 2025-10-02 13:20:31.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:20:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/89902622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:20:32 np0005466031 nova_compute[235803]: 2025-10-02 13:20:32.106 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:20:32 np0005466031 nova_compute[235803]: 2025-10-02 13:20:32.275 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:20:32 np0005466031 nova_compute[235803]: 2025-10-02 13:20:32.276 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4150MB free_disk=20.942710876464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:20:32 np0005466031 nova_compute[235803]: 2025-10-02 13:20:32.276 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:32 np0005466031 nova_compute[235803]: 2025-10-02 13:20:32.276 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:32 np0005466031 nova_compute[235803]: 2025-10-02 13:20:32.350 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:20:32 np0005466031 nova_compute[235803]: 2025-10-02 13:20:32.351 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:20:32 np0005466031 nova_compute[235803]: 2025-10-02 13:20:32.372 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:20:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:20:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/366028002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:20:32 np0005466031 nova_compute[235803]: 2025-10-02 13:20:32.904 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:20:32 np0005466031 nova_compute[235803]: 2025-10-02 13:20:32.912 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:20:32 np0005466031 nova_compute[235803]: 2025-10-02 13:20:32.928 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:20:32 np0005466031 nova_compute[235803]: 2025-10-02 13:20:32.929 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:20:32 np0005466031 nova_compute[235803]: 2025-10-02 13:20:32.930 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:33.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:33.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:34 np0005466031 podman[333195]: 2025-10-02 13:20:34.624343033 +0000 UTC m=+0.049805835 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 09:20:34 np0005466031 podman[333196]: 2025-10-02 13:20:34.654226144 +0000 UTC m=+0.080144489 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:20:34 np0005466031 nova_compute[235803]: 2025-10-02 13:20:34.930 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:34 np0005466031 nova_compute[235803]: 2025-10-02 13:20:34.930 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:20:34 np0005466031 nova_compute[235803]: 2025-10-02 13:20:34.931 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:20:34 np0005466031 nova_compute[235803]: 2025-10-02 13:20:34.953 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:20:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:35.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:35.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:36 np0005466031 nova_compute[235803]: 2025-10-02 13:20:36.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:36.396102) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236396194, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2385, "num_deletes": 254, "total_data_size": 5873076, "memory_usage": 5943736, "flush_reason": "Manual Compaction"}
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236602000, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 3812293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77015, "largest_seqno": 79395, "table_properties": {"data_size": 3802420, "index_size": 6302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20243, "raw_average_key_size": 20, "raw_value_size": 3782831, "raw_average_value_size": 3856, "num_data_blocks": 273, "num_entries": 981, "num_filter_entries": 981, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411037, "oldest_key_time": 1759411037, "file_creation_time": 1759411236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 205953 microseconds, and 7538 cpu microseconds.
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:20:36 np0005466031 nova_compute[235803]: 2025-10-02 13:20:36.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:36.602052) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 3812293 bytes OK
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:36.602076) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:36.770333) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:36.770416) EVENT_LOG_v1 {"time_micros": 1759411236770398, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:36.770457) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 5862523, prev total WAL file size 5864055, number of live WAL files 2.
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:36.773957) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(3722KB)], [159(9909KB)]
Oct  2 09:20:36 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236773997, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 13959437, "oldest_snapshot_seqno": -1}
Oct  2 09:20:36 np0005466031 nova_compute[235803]: 2025-10-02 13:20:36.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 9982 keys, 12021929 bytes, temperature: kUnknown
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411237179450, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 12021929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11958454, "index_size": 37442, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 263365, "raw_average_key_size": 26, "raw_value_size": 11784737, "raw_average_value_size": 1180, "num_data_blocks": 1420, "num_entries": 9982, "num_filter_entries": 9982, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759411236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:20:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:20:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:37.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:37.180131) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12021929 bytes
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:37.448783) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 34.4 rd, 29.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.7 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 10510, records dropped: 528 output_compression: NoCompression
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:37.448852) EVENT_LOG_v1 {"time_micros": 1759411237448825, "job": 102, "event": "compaction_finished", "compaction_time_micros": 405688, "compaction_time_cpu_micros": 33187, "output_level": 6, "num_output_files": 1, "total_output_size": 12021929, "num_input_records": 10510, "num_output_records": 9982, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411237450474, "job": 102, "event": "table_file_deletion", "file_number": 161}
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411237453274, "job": 102, "event": "table_file_deletion", "file_number": 159}
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:36.773849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:37.453324) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:37.453330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:37.453332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:37.453336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:20:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:20:37.453338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:20:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:20:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:37.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:20:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:20:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:39.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:20:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:39.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:40 np0005466031 nova_compute[235803]: 2025-10-02 13:20:40.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:41.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:41 np0005466031 nova_compute[235803]: 2025-10-02 13:20:41.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:41 np0005466031 nova_compute[235803]: 2025-10-02 13:20:41.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:41 np0005466031 nova_compute[235803]: 2025-10-02 13:20:41.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:20:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:41.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:41 np0005466031 nova_compute[235803]: 2025-10-02 13:20:41.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:42 np0005466031 podman[333293]: 2025-10-02 13:20:42.622085014 +0000 UTC m=+0.052301177 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 09:20:42 np0005466031 podman[333292]: 2025-10-02 13:20:42.623609328 +0000 UTC m=+0.057478546 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 09:20:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:43.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:43.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:20:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:45.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:20:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:45.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:46 np0005466031 nova_compute[235803]: 2025-10-02 13:20:46.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:46 np0005466031 nova_compute[235803]: 2025-10-02 13:20:46.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:47.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:47.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:20:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:49.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:20:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:49.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:51.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:51 np0005466031 nova_compute[235803]: 2025-10-02 13:20:51.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:51.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:51 np0005466031 nova_compute[235803]: 2025-10-02 13:20:51.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:53.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:53.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:54 np0005466031 nova_compute[235803]: 2025-10-02 13:20:54.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:55.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:55.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:56 np0005466031 nova_compute[235803]: 2025-10-02 13:20:56.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:56 np0005466031 nova_compute[235803]: 2025-10-02 13:20:56.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:57.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:57.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:59.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:20:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:59.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:01.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:01 np0005466031 nova_compute[235803]: 2025-10-02 13:21:01.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:01.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:01 np0005466031 nova_compute[235803]: 2025-10-02 13:21:01.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:03.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:21:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:03.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:21:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:05.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:21:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1005916527' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:21:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:21:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1005916527' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:21:05 np0005466031 podman[333393]: 2025-10-02 13:21:05.623346681 +0000 UTC m=+0.050864486 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:21:05 np0005466031 podman[333394]: 2025-10-02 13:21:05.658724079 +0000 UTC m=+0.083376431 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:21:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:05.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:06 np0005466031 nova_compute[235803]: 2025-10-02 13:21:06.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:06 np0005466031 nova_compute[235803]: 2025-10-02 13:21:06.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:07.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:07.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:09.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:21:09.375 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:21:09 np0005466031 nova_compute[235803]: 2025-10-02 13:21:09.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:21:09.377 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:21:09 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:21:09.378 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:21:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:09.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:11.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:11 np0005466031 nova_compute[235803]: 2025-10-02 13:21:11.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:11.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:11 np0005466031 nova_compute[235803]: 2025-10-02 13:21:11.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:13.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:13 np0005466031 podman[333440]: 2025-10-02 13:21:13.618652538 +0000 UTC m=+0.046624353 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:21:13 np0005466031 podman[333441]: 2025-10-02 13:21:13.632516077 +0000 UTC m=+0.056324323 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:21:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:13.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:15.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:15.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:16 np0005466031 nova_compute[235803]: 2025-10-02 13:21:16.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:16 np0005466031 nova_compute[235803]: 2025-10-02 13:21:16.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:17.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:21:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:17.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:21:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:19.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:19.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e406 e406: 3 total, 3 up, 3 in
Oct  2 09:21:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:21.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:21 np0005466031 nova_compute[235803]: 2025-10-02 13:21:21.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:21 np0005466031 nova_compute[235803]: 2025-10-02 13:21:21.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:21.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:21 np0005466031 nova_compute[235803]: 2025-10-02 13:21:21.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:21:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:23.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:21:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:23.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:21:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:25.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:21:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:25.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:21:25.889 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:21:25.890 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:21:25.890 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:26 np0005466031 nova_compute[235803]: 2025-10-02 13:21:26.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:26 np0005466031 nova_compute[235803]: 2025-10-02 13:21:26.630 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:26 np0005466031 nova_compute[235803]: 2025-10-02 13:21:26.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 e407: 3 total, 3 up, 3 in
Oct  2 09:21:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:21:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:27.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:21:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:27.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:29.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:21:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:29.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:21:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:31.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:31 np0005466031 nova_compute[235803]: 2025-10-02 13:21:31.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:21:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:21:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 09:21:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 09:21:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 09:21:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:21:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:21:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:21:31 np0005466031 nova_compute[235803]: 2025-10-02 13:21:31.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:31 np0005466031 nova_compute[235803]: 2025-10-02 13:21:31.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:31 np0005466031 nova_compute[235803]: 2025-10-02 13:21:31.659 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:31 np0005466031 nova_compute[235803]: 2025-10-02 13:21:31.660 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:31 np0005466031 nova_compute[235803]: 2025-10-02 13:21:31.660 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:31 np0005466031 nova_compute[235803]: 2025-10-02 13:21:31.660 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:21:31 np0005466031 nova_compute[235803]: 2025-10-02 13:21:31.660 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:21:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:31.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:31 np0005466031 nova_compute[235803]: 2025-10-02 13:21:31.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:21:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1089819506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:21:32 np0005466031 nova_compute[235803]: 2025-10-02 13:21:32.187 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:21:32 np0005466031 nova_compute[235803]: 2025-10-02 13:21:32.391 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:21:32 np0005466031 nova_compute[235803]: 2025-10-02 13:21:32.392 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4143MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:21:32 np0005466031 nova_compute[235803]: 2025-10-02 13:21:32.393 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:32 np0005466031 nova_compute[235803]: 2025-10-02 13:21:32.393 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:32 np0005466031 nova_compute[235803]: 2025-10-02 13:21:32.617 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:21:32 np0005466031 nova_compute[235803]: 2025-10-02 13:21:32.618 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:21:32 np0005466031 nova_compute[235803]: 2025-10-02 13:21:32.634 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:21:32 np0005466031 nova_compute[235803]: 2025-10-02 13:21:32.657 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:21:32 np0005466031 nova_compute[235803]: 2025-10-02 13:21:32.657 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:21:32 np0005466031 nova_compute[235803]: 2025-10-02 13:21:32.672 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:21:32 np0005466031 nova_compute[235803]: 2025-10-02 13:21:32.698 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:21:32 np0005466031 nova_compute[235803]: 2025-10-02 13:21:32.713 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:21:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:33.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:21:33 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2728240605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:21:33 np0005466031 nova_compute[235803]: 2025-10-02 13:21:33.317 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:21:33 np0005466031 nova_compute[235803]: 2025-10-02 13:21:33.323 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:21:33 np0005466031 nova_compute[235803]: 2025-10-02 13:21:33.339 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:21:33 np0005466031 nova_compute[235803]: 2025-10-02 13:21:33.341 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:21:33 np0005466031 nova_compute[235803]: 2025-10-02 13:21:33.341 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:33.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:34 np0005466031 nova_compute[235803]: 2025-10-02 13:21:34.343 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:21:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:35.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:21:35 np0005466031 nova_compute[235803]: 2025-10-02 13:21:35.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:35 np0005466031 nova_compute[235803]: 2025-10-02 13:21:35.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:21:35 np0005466031 nova_compute[235803]: 2025-10-02 13:21:35.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:21:35 np0005466031 nova_compute[235803]: 2025-10-02 13:21:35.678 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:21:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:35.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:36 np0005466031 nova_compute[235803]: 2025-10-02 13:21:36.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:36 np0005466031 podman[333834]: 2025-10-02 13:21:36.615754388 +0000 UTC m=+0.046723456 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 09:21:36 np0005466031 podman[333835]: 2025-10-02 13:21:36.648255644 +0000 UTC m=+0.075921037 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:21:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:36 np0005466031 nova_compute[235803]: 2025-10-02 13:21:36.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:37.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:37 np0005466031 nova_compute[235803]: 2025-10-02 13:21:37.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:37.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:39.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:21:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:21:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:39.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:40 np0005466031 nova_compute[235803]: 2025-10-02 13:21:40.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:41 np0005466031 nova_compute[235803]: 2025-10-02 13:21:41.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:41.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:41 np0005466031 nova_compute[235803]: 2025-10-02 13:21:41.650 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:41 np0005466031 nova_compute[235803]: 2025-10-02 13:21:41.651 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:41 np0005466031 nova_compute[235803]: 2025-10-02 13:21:41.651 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:21:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:41.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:41 np0005466031 nova_compute[235803]: 2025-10-02 13:21:41.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:43.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:43.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:44 np0005466031 podman[333980]: 2025-10-02 13:21:44.626338399 +0000 UTC m=+0.057536028 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:21:44 np0005466031 podman[333979]: 2025-10-02 13:21:44.650818723 +0000 UTC m=+0.081433555 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:21:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:45.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:45.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:46 np0005466031 nova_compute[235803]: 2025-10-02 13:21:46.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:46 np0005466031 nova_compute[235803]: 2025-10-02 13:21:46.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:47.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:21:47.670 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:21:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:21:47.671 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:21:47 np0005466031 nova_compute[235803]: 2025-10-02 13:21:47.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:47 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:21:47.672 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:21:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:47.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:49.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:49.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:51 np0005466031 nova_compute[235803]: 2025-10-02 13:21:51.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:21:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:51.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:21:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:21:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:51.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:21:51 np0005466031 nova_compute[235803]: 2025-10-02 13:21:51.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:53.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:53.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:55.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:55.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:56 np0005466031 nova_compute[235803]: 2025-10-02 13:21:56.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:56 np0005466031 nova_compute[235803]: 2025-10-02 13:21:56.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:57.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:57.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:59.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:21:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:59.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:01 np0005466031 nova_compute[235803]: 2025-10-02 13:22:01.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:01.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:01.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:01 np0005466031 nova_compute[235803]: 2025-10-02 13:22:01.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:03.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:03.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:22:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:05.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:22:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:05.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:06 np0005466031 nova_compute[235803]: 2025-10-02 13:22:06.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:06 np0005466031 nova_compute[235803]: 2025-10-02 13:22:06.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:07.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:07 np0005466031 podman[334080]: 2025-10-02 13:22:07.611395161 +0000 UTC m=+0.044803911 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 09:22:07 np0005466031 podman[334081]: 2025-10-02 13:22:07.649519749 +0000 UTC m=+0.079541661 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct  2 09:22:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:07.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:09.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:09.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:11 np0005466031 nova_compute[235803]: 2025-10-02 13:22:11.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:11.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:11.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:11 np0005466031 nova_compute[235803]: 2025-10-02 13:22:11.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:12 np0005466031 nova_compute[235803]: 2025-10-02 13:22:12.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:12 np0005466031 nova_compute[235803]: 2025-10-02 13:22:12.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:22:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:13.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:13.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:15.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:15 np0005466031 podman[334129]: 2025-10-02 13:22:15.621399024 +0000 UTC m=+0.049459865 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 09:22:15 np0005466031 podman[334128]: 2025-10-02 13:22:15.62230307 +0000 UTC m=+0.055202651 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:22:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:15.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:16 np0005466031 nova_compute[235803]: 2025-10-02 13:22:16.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:16 np0005466031 nova_compute[235803]: 2025-10-02 13:22:16.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:17.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:17.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:18 np0005466031 nova_compute[235803]: 2025-10-02 13:22:18.648 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:18 np0005466031 nova_compute[235803]: 2025-10-02 13:22:18.648 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:22:18 np0005466031 nova_compute[235803]: 2025-10-02 13:22:18.665 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:22:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:19.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:19.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:21.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:21 np0005466031 nova_compute[235803]: 2025-10-02 13:22:21.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:21 np0005466031 nova_compute[235803]: 2025-10-02 13:22:21.654 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:22:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:21.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:22:21 np0005466031 nova_compute[235803]: 2025-10-02 13:22:21.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:23.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:23.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:25.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:22:25.890 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:22:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:22:25.891 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:22:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:22:25.891 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:22:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:25.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:26 np0005466031 nova_compute[235803]: 2025-10-02 13:22:26.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:26 np0005466031 nova_compute[235803]: 2025-10-02 13:22:26.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:27.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:22:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:27.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:22:28 np0005466031 nova_compute[235803]: 2025-10-02 13:22:28.630 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:22:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:29.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:22:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:22:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:29.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:22:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:31.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:31 np0005466031 nova_compute[235803]: 2025-10-02 13:22:31.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:31 np0005466031 nova_compute[235803]: 2025-10-02 13:22:31.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:31 np0005466031 nova_compute[235803]: 2025-10-02 13:22:31.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:31 np0005466031 nova_compute[235803]: 2025-10-02 13:22:31.662 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:22:31 np0005466031 nova_compute[235803]: 2025-10-02 13:22:31.662 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:22:31 np0005466031 nova_compute[235803]: 2025-10-02 13:22:31.662 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:22:31 np0005466031 nova_compute[235803]: 2025-10-02 13:22:31.663 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:22:31 np0005466031 nova_compute[235803]: 2025-10-02 13:22:31.663 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:22:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:31.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:31 np0005466031 nova_compute[235803]: 2025-10-02 13:22:31.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:22:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2038774104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:22:32 np0005466031 nova_compute[235803]: 2025-10-02 13:22:32.100 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
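Nova's resource tracker derives its storage figures by shelling out to `ceph df --format=json` (the exact command in the line above, audited by the mon as `client.openstack`) and reading the cluster stats from the JSON. A sketch over a hand-written payload; the field names follow the usual `ceph df` JSON shape, but every number here is invented for illustration:

```python
import json

# Invented sample in the shape "ceph df --format=json" returns; real
# output also carries a "pools" list with per-pool usage.
payload = json.dumps({
    "stats": {
        "total_bytes": 68719476736,       # 64 GiB raw capacity (made up)
        "total_used_bytes": 4294967296,   # 4 GiB used (made up)
        "total_avail_bytes": 64424509440, # 60 GiB available (made up)
    }
})

stats = json.loads(payload)["stats"]
free_gib = stats["total_avail_bytes"] / 2**30
# Roughly what the tracker reports as the node's free_disk figure.
print(f"free_disk≈{free_gib:.2f} GiB")
```

The 0.437s the subprocess took is visible in the surrounding log, which is why this audit runs as a periodic task rather than on the request path.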
Oct  2 09:22:32 np0005466031 nova_compute[235803]: 2025-10-02 13:22:32.289 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:22:32 np0005466031 nova_compute[235803]: 2025-10-02 13:22:32.290 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4173MB free_disk=20.967288970947266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:22:32 np0005466031 nova_compute[235803]: 2025-10-02 13:22:32.291 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:22:32 np0005466031 nova_compute[235803]: 2025-10-02 13:22:32.291 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:22:32 np0005466031 nova_compute[235803]: 2025-10-02 13:22:32.356 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:22:32 np0005466031 nova_compute[235803]: 2025-10-02 13:22:32.356 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:22:32 np0005466031 nova_compute[235803]: 2025-10-02 13:22:32.373 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:22:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:22:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/364948547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:22:32 np0005466031 nova_compute[235803]: 2025-10-02 13:22:32.819 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:22:32 np0005466031 nova_compute[235803]: 2025-10-02 13:22:32.826 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:22:32 np0005466031 nova_compute[235803]: 2025-10-02 13:22:32.843 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:22:32 np0005466031 nova_compute[235803]: 2025-10-02 13:22:32.845 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:22:32 np0005466031 nova_compute[235803]: 2025-10-02 13:22:32.845 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:22:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:33.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:33.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:34 np0005466031 nova_compute[235803]: 2025-10-02 13:22:34.846 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:35.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:35.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:36 np0005466031 nova_compute[235803]: 2025-10-02 13:22:36.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:36 np0005466031 nova_compute[235803]: 2025-10-02 13:22:36.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:36 np0005466031 nova_compute[235803]: 2025-10-02 13:22:36.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:22:36 np0005466031 nova_compute[235803]: 2025-10-02 13:22:36.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:22:36 np0005466031 nova_compute[235803]: 2025-10-02 13:22:36.696 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:22:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:36 np0005466031 nova_compute[235803]: 2025-10-02 13:22:36.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:22:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:37.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:22:37 np0005466031 nova_compute[235803]: 2025-10-02 13:22:37.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:37.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:38 np0005466031 podman[334275]: 2025-10-02 13:22:38.619392769 +0000 UTC m=+0.053188662 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:22:38 np0005466031 podman[334276]: 2025-10-02 13:22:38.650358961 +0000 UTC m=+0.080788647 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 09:22:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:39.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:39.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:22:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:22:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:22:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:41.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:41 np0005466031 nova_compute[235803]: 2025-10-02 13:22:41.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:41.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:41 np0005466031 nova_compute[235803]: 2025-10-02 13:22:41.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:43.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:43 np0005466031 nova_compute[235803]: 2025-10-02 13:22:43.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:43 np0005466031 nova_compute[235803]: 2025-10-02 13:22:43.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:43 np0005466031 nova_compute[235803]: 2025-10-02 13:22:43.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:22:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:22:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:43.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:22:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:45.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:45.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:46 np0005466031 nova_compute[235803]: 2025-10-02 13:22:46.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:46 np0005466031 podman[334505]: 2025-10-02 13:22:46.631456622 +0000 UTC m=+0.055716115 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:22:46 np0005466031 podman[334504]: 2025-10-02 13:22:46.639304648 +0000 UTC m=+0.064085816 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:22:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:46 np0005466031 nova_compute[235803]: 2025-10-02 13:22:46.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:47.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:47 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:22:47 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:22:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:22:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:47.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:22:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:49.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:49.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:22:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:51.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:22:51 np0005466031 nova_compute[235803]: 2025-10-02 13:22:51.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:22:51.904 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:22:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:22:51.905 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:22:51 np0005466031 nova_compute[235803]: 2025-10-02 13:22:51.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:22:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:51.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:22:51 np0005466031 nova_compute[235803]: 2025-10-02 13:22:51.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:22:52.907 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:22:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:22:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:53.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:22:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:53.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:22:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:55.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:22:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:55.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:56 np0005466031 nova_compute[235803]: 2025-10-02 13:22:56.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:56 np0005466031 nova_compute[235803]: 2025-10-02 13:22:56.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 09:22:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:57.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 09:22:57 np0005466031 nova_compute[235803]: 2025-10-02 13:22:57.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:57.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:22:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:59.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:22:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:22:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:22:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:59.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:23:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:23:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:01.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:23:01 np0005466031 nova_compute[235803]: 2025-10-02 13:23:01.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:23:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:01.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:23:01 np0005466031 nova_compute[235803]: 2025-10-02 13:23:01.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:03.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:03.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:05.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:05.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:06 np0005466031 nova_compute[235803]: 2025-10-02 13:23:06.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:06 np0005466031 nova_compute[235803]: 2025-10-02 13:23:06.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:07.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:08.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:09.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:09 np0005466031 podman[334656]: 2025-10-02 13:23:09.620467569 +0000 UTC m=+0.052011199 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:23:09 np0005466031 podman[334657]: 2025-10-02 13:23:09.652428949 +0000 UTC m=+0.084546415 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:23:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:10.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:11 np0005466031 nova_compute[235803]: 2025-10-02 13:23:11.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:11.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:12.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:12 np0005466031 nova_compute[235803]: 2025-10-02 13:23:12.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:13.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:14.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:15.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:16.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:16 np0005466031 nova_compute[235803]: 2025-10-02 13:23:16.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:17 np0005466031 nova_compute[235803]: 2025-10-02 13:23:17.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:17.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:17 np0005466031 podman[334704]: 2025-10-02 13:23:17.621072699 +0000 UTC m=+0.051973227 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:23:17 np0005466031 podman[334705]: 2025-10-02 13:23:17.629721488 +0000 UTC m=+0.058436163 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:23:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:18.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:19.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:20.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:21 np0005466031 nova_compute[235803]: 2025-10-02 13:23:21.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:21.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:21.732453) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401732495, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1846, "num_deletes": 258, "total_data_size": 4366026, "memory_usage": 4433136, "flush_reason": "Manual Compaction"}
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401830864, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 2870936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79401, "largest_seqno": 81241, "table_properties": {"data_size": 2863262, "index_size": 4552, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15894, "raw_average_key_size": 19, "raw_value_size": 2847854, "raw_average_value_size": 3582, "num_data_blocks": 200, "num_entries": 795, "num_filter_entries": 795, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411236, "oldest_key_time": 1759411236, "file_creation_time": 1759411401, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 98450 microseconds, and 6022 cpu microseconds.
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:21.830905) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 2870936 bytes OK
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:21.830923) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:21.843316) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:21.843343) EVENT_LOG_v1 {"time_micros": 1759411401843335, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:21.843366) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 4357706, prev total WAL file size 4357706, number of live WAL files 2.
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:21.844485) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303138' seq:72057594037927935, type:22 .. '6C6F676D0033323731' seq:0, type:0; will stop at (end)
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(2803KB)], [162(11MB)]
Oct  2 09:23:21 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401844516, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 14892865, "oldest_snapshot_seqno": -1}
Oct  2 09:23:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:22.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:22 np0005466031 nova_compute[235803]: 2025-10-02 13:23:22.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 10242 keys, 14750101 bytes, temperature: kUnknown
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411402183703, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 14750101, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14681938, "index_size": 41426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 269736, "raw_average_key_size": 26, "raw_value_size": 14500880, "raw_average_value_size": 1415, "num_data_blocks": 1589, "num_entries": 10242, "num_filter_entries": 10242, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759411401, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:22.183956) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 14750101 bytes
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:22.222898) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 43.9 rd, 43.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.5 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(10.3) write-amplify(5.1) OK, records in: 10777, records dropped: 535 output_compression: NoCompression
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:22.222941) EVENT_LOG_v1 {"time_micros": 1759411402222924, "job": 104, "event": "compaction_finished", "compaction_time_micros": 339285, "compaction_time_cpu_micros": 33571, "output_level": 6, "num_output_files": 1, "total_output_size": 14750101, "num_input_records": 10777, "num_output_records": 10242, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411402223635, "job": 104, "event": "table_file_deletion", "file_number": 164}
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411402225785, "job": 104, "event": "table_file_deletion", "file_number": 162}
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:21.844444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:22.225889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:22.225895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:22.225897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:22.225898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:22 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:23:22.225900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:22 np0005466031 nova_compute[235803]: 2025-10-02 13:23:22.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:23.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:24.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:25.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:23:25.892 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:23:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:23:25.892 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:23:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:23:25.892 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:23:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:26.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:26 np0005466031 nova_compute[235803]: 2025-10-02 13:23:26.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:27 np0005466031 nova_compute[235803]: 2025-10-02 13:23:27.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:27.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:28.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:29.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:30.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:30 np0005466031 nova_compute[235803]: 2025-10-02 13:23:30.632 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:31 np0005466031 nova_compute[235803]: 2025-10-02 13:23:31.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:31.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:32.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:32 np0005466031 nova_compute[235803]: 2025-10-02 13:23:32.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:32 np0005466031 nova_compute[235803]: 2025-10-02 13:23:32.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:32 np0005466031 nova_compute[235803]: 2025-10-02 13:23:32.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:32 np0005466031 nova_compute[235803]: 2025-10-02 13:23:32.668 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:23:32 np0005466031 nova_compute[235803]: 2025-10-02 13:23:32.669 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:23:32 np0005466031 nova_compute[235803]: 2025-10-02 13:23:32.669 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:23:32 np0005466031 nova_compute[235803]: 2025-10-02 13:23:32.669 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:23:32 np0005466031 nova_compute[235803]: 2025-10-02 13:23:32.669 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:23:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:23:33 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1469962813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:23:33 np0005466031 nova_compute[235803]: 2025-10-02 13:23:33.102 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:23:33 np0005466031 nova_compute[235803]: 2025-10-02 13:23:33.251 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:23:33 np0005466031 nova_compute[235803]: 2025-10-02 13:23:33.252 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4167MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:23:33 np0005466031 nova_compute[235803]: 2025-10-02 13:23:33.252 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:23:33 np0005466031 nova_compute[235803]: 2025-10-02 13:23:33.252 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:23:33 np0005466031 nova_compute[235803]: 2025-10-02 13:23:33.311 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:23:33 np0005466031 nova_compute[235803]: 2025-10-02 13:23:33.311 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:23:33 np0005466031 nova_compute[235803]: 2025-10-02 13:23:33.325 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:23:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:23:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:33.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:23:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:23:33 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2695112454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:23:33 np0005466031 nova_compute[235803]: 2025-10-02 13:23:33.795 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:23:33 np0005466031 nova_compute[235803]: 2025-10-02 13:23:33.802 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:23:33 np0005466031 nova_compute[235803]: 2025-10-02 13:23:33.821 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:23:33 np0005466031 nova_compute[235803]: 2025-10-02 13:23:33.823 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:23:33 np0005466031 nova_compute[235803]: 2025-10-02 13:23:33.824 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:23:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:34.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:34 np0005466031 nova_compute[235803]: 2025-10-02 13:23:34.825 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:35.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:36.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:36 np0005466031 nova_compute[235803]: 2025-10-02 13:23:36.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:37 np0005466031 nova_compute[235803]: 2025-10-02 13:23:37.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:37.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:37 np0005466031 nova_compute[235803]: 2025-10-02 13:23:37.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:38.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:38 np0005466031 nova_compute[235803]: 2025-10-02 13:23:38.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:38 np0005466031 nova_compute[235803]: 2025-10-02 13:23:38.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:23:38 np0005466031 nova_compute[235803]: 2025-10-02 13:23:38.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:23:38 np0005466031 nova_compute[235803]: 2025-10-02 13:23:38.654 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:23:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:39.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:40.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:40 np0005466031 podman[334900]: 2025-10-02 13:23:40.632289732 +0000 UTC m=+0.069196663 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:23:40 np0005466031 podman[334901]: 2025-10-02 13:23:40.632843568 +0000 UTC m=+0.066897007 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:23:41 np0005466031 nova_compute[235803]: 2025-10-02 13:23:41.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:23:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:41.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:23:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:23:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:42.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:23:42 np0005466031 nova_compute[235803]: 2025-10-02 13:23:42.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:43.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:43 np0005466031 nova_compute[235803]: 2025-10-02 13:23:43.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:43 np0005466031 nova_compute[235803]: 2025-10-02 13:23:43.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:23:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:44.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:45.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:45 np0005466031 nova_compute[235803]: 2025-10-02 13:23:45.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:46.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:46 np0005466031 nova_compute[235803]: 2025-10-02 13:23:46.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:47 np0005466031 nova_compute[235803]: 2025-10-02 13:23:47.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:47.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:48.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:48 np0005466031 podman[335077]: 2025-10-02 13:23:48.624930366 +0000 UTC m=+0.054183101 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd)
Oct  2 09:23:48 np0005466031 podman[335078]: 2025-10-02 13:23:48.627755227 +0000 UTC m=+0.054518021 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct  2 09:23:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:23:49 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:23:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:49.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:50.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:50 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:23:50 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:23:51 np0005466031 nova_compute[235803]: 2025-10-02 13:23:51.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:51.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:23:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:23:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:23:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:52 np0005466031 nova_compute[235803]: 2025-10-02 13:23:52.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:23:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:52.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:23:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:53.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:54.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:55.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:56.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:56 np0005466031 nova_compute[235803]: 2025-10-02 13:23:56.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:57 np0005466031 nova_compute[235803]: 2025-10-02 13:23:57.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:57.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:58.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:23:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:59.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:00.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:24:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:24:01 np0005466031 nova_compute[235803]: 2025-10-02 13:24:01.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:24:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:01.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:24:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:02 np0005466031 nova_compute[235803]: 2025-10-02 13:24:02.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:02.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:03.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:04.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:05.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:06.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:06 np0005466031 nova_compute[235803]: 2025-10-02 13:24:06.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:07 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Oct  2 09:24:07 np0005466031 nova_compute[235803]: 2025-10-02 13:24:07.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:07.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:08.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:09.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.060678) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450060745, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 771, "num_deletes": 251, "total_data_size": 1411033, "memory_usage": 1441760, "flush_reason": "Manual Compaction"}
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450071477, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 920236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81246, "largest_seqno": 82012, "table_properties": {"data_size": 916494, "index_size": 1521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8595, "raw_average_key_size": 19, "raw_value_size": 909023, "raw_average_value_size": 2080, "num_data_blocks": 66, "num_entries": 437, "num_filter_entries": 437, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411402, "oldest_key_time": 1759411402, "file_creation_time": 1759411450, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 10843 microseconds, and 3812 cpu microseconds.
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.071520) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 920236 bytes OK
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.071558) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.077871) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.077917) EVENT_LOG_v1 {"time_micros": 1759411450077906, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.077941) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1406957, prev total WAL file size 1406957, number of live WAL files 2.
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.078763) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(898KB)], [165(14MB)]
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450078834, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 15670337, "oldest_snapshot_seqno": -1}
Oct  2 09:24:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:10.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 10162 keys, 13668745 bytes, temperature: kUnknown
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450196300, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 13668745, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13602263, "index_size": 40001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 268761, "raw_average_key_size": 26, "raw_value_size": 13423570, "raw_average_value_size": 1320, "num_data_blocks": 1523, "num_entries": 10162, "num_filter_entries": 10162, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759411450, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.196606) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13668745 bytes
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.204270) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.3 rd, 116.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.1 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(31.9) write-amplify(14.9) OK, records in: 10679, records dropped: 517 output_compression: NoCompression
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.204308) EVENT_LOG_v1 {"time_micros": 1759411450204295, "job": 106, "event": "compaction_finished", "compaction_time_micros": 117530, "compaction_time_cpu_micros": 33153, "output_level": 6, "num_output_files": 1, "total_output_size": 13668745, "num_input_records": 10679, "num_output_records": 10162, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450204659, "job": 106, "event": "table_file_deletion", "file_number": 167}
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450207230, "job": 106, "event": "table_file_deletion", "file_number": 165}
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.078591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.207344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.207349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.207350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.207352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:24:10 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:24:10.207353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:24:11 np0005466031 nova_compute[235803]: 2025-10-02 13:24:11.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:11.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:11 np0005466031 podman[335227]: 2025-10-02 13:24:11.645644331 +0000 UTC m=+0.068115202 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:24:11 np0005466031 podman[335228]: 2025-10-02 13:24:11.673406181 +0000 UTC m=+0.091398683 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:24:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:12 np0005466031 nova_compute[235803]: 2025-10-02 13:24:12.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:12.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:13.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:14.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:15.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e408 e408: 3 total, 3 up, 3 in
Oct  2 09:24:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:16.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:16 np0005466031 nova_compute[235803]: 2025-10-02 13:24:16.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:24:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:17 np0005466031 nova_compute[235803]: 2025-10-02 13:24:17.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:24:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:17.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:18.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e409 e409: 3 total, 3 up, 3 in
Oct  2 09:24:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:24:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:19.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:24:19 np0005466031 podman[335296]: 2025-10-02 13:24:19.623746105 +0000 UTC m=+0.052018588 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 09:24:19 np0005466031 podman[335283]: 2025-10-02 13:24:19.673721594 +0000 UTC m=+0.095926413 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  2 09:24:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:20.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e410 e410: 3 total, 3 up, 3 in
Oct  2 09:24:21 np0005466031 nova_compute[235803]: 2025-10-02 13:24:21.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:24:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:21.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:24:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:22.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:24:22 np0005466031 nova_compute[235803]: 2025-10-02 13:24:22.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:24:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:24:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:23.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:24:23 np0005466031 nova_compute[235803]: 2025-10-02 13:24:23.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:24:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:24.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:25.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:25.893 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:24:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:25.894 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:24:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:25.894 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:24:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:24:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:26.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.134 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.134 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.181 2 DEBUG nova.compute.manager [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.264 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.265 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.272 2 DEBUG nova.virt.hardware [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.273 2 INFO nova.compute.claims [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Claim successful on node compute-2.ctlplane.example.com
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.359 2 DEBUG oslo_concurrency.processutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:24:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e411 e411: 3 total, 3 up, 3 in
Oct  2 09:24:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:24:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/348882763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.822 2 DEBUG oslo_concurrency.processutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:24:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.828 2 DEBUG nova.compute.provider_tree [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.844 2 DEBUG nova.scheduler.client.report [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.867 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.868 2 DEBUG nova.compute.manager [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.944 2 DEBUG nova.compute.manager [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.944 2 DEBUG nova.network.neutron [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:24:26 np0005466031 nova_compute[235803]: 2025-10-02 13:24:26.965 2 INFO nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.002 2 DEBUG nova.compute.manager [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.103 2 DEBUG nova.compute.manager [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.105 2 DEBUG nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.105 2 INFO nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Creating image(s)
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.276 2 DEBUG nova.storage.rbd_utils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] rbd image 265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.305 2 DEBUG nova.storage.rbd_utils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] rbd image 265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.333 2 DEBUG nova.storage.rbd_utils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] rbd image 265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.336 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "f44c866c97e1b648360c56fc3199ee0f87396c0f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.337 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "f44c866c97e1b648360c56fc3199ee0f87396c0f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:24:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:27.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.600 2 DEBUG nova.virt.libvirt.imagebackend [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/4dbf986e-53ef-4e53-875d-c9d73e683338/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/4dbf986e-53ef-4e53-875d-c9d73e683338/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.658 2 DEBUG nova.virt.libvirt.imagebackend [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/4dbf986e-53ef-4e53-875d-c9d73e683338/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.659 2 DEBUG nova.storage.rbd_utils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] cloning images/4dbf986e-53ef-4e53-875d-c9d73e683338@snap to None/265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.800 2 DEBUG nova.policy [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7eba544d42c8426295f2a88f0e85d446', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ecfaf38d20784d06a43ced1560cede11', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:24:27 np0005466031 nova_compute[235803]: 2025-10-02 13:24:27.856 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "f44c866c97e1b648360c56fc3199ee0f87396c0f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:24:28 np0005466031 nova_compute[235803]: 2025-10-02 13:24:28.033 2 DEBUG nova.objects.instance [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lazy-loading 'migration_context' on Instance uuid 265d1e00-c92e-483b-8e0b-153ca45f2fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:24:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:28.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:28 np0005466031 nova_compute[235803]: 2025-10-02 13:24:28.192 2 DEBUG nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 09:24:28 np0005466031 nova_compute[235803]: 2025-10-02 13:24:28.192 2 DEBUG nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Ensure instance console log exists: /var/lib/nova/instances/265d1e00-c92e-483b-8e0b-153ca45f2fa5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 09:24:28 np0005466031 nova_compute[235803]: 2025-10-02 13:24:28.193 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:24:28 np0005466031 nova_compute[235803]: 2025-10-02 13:24:28.193 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:24:28 np0005466031 nova_compute[235803]: 2025-10-02 13:24:28.193 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:24:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:29.068 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 09:24:29 np0005466031 nova_compute[235803]: 2025-10-02 13:24:29.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:24:29 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:29.069 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 09:24:29 np0005466031 nova_compute[235803]: 2025-10-02 13:24:29.418 2 DEBUG nova.network.neutron [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Successfully created port: 50d2c8f3-84cf-4e4a-8919-ec4a87bef603 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 09:24:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:29.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:30 np0005466031 nova_compute[235803]: 2025-10-02 13:24:30.086 2 DEBUG nova.network.neutron [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Successfully updated port: 50d2c8f3-84cf-4e4a-8919-ec4a87bef603 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 09:24:30 np0005466031 nova_compute[235803]: 2025-10-02 13:24:30.102 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:24:30 np0005466031 nova_compute[235803]: 2025-10-02 13:24:30.102 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquired lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:24:30 np0005466031 nova_compute[235803]: 2025-10-02 13:24:30.102 2 DEBUG nova.network.neutron [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 09:24:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:30.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:30 np0005466031 nova_compute[235803]: 2025-10-02 13:24:30.175 2 DEBUG nova.compute.manager [req-964fced5-32cc-4520-aa17-af89a4ca369b req-94528f07-f926-4f01-87d9-b0fc6e7f279f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Received event network-changed-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:24:30 np0005466031 nova_compute[235803]: 2025-10-02 13:24:30.175 2 DEBUG nova.compute.manager [req-964fced5-32cc-4520-aa17-af89a4ca369b req-94528f07-f926-4f01-87d9-b0fc6e7f279f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Refreshing instance network info cache due to event network-changed-50d2c8f3-84cf-4e4a-8919-ec4a87bef603. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:24:30 np0005466031 nova_compute[235803]: 2025-10-02 13:24:30.175 2 DEBUG oslo_concurrency.lockutils [req-964fced5-32cc-4520-aa17-af89a4ca369b req-94528f07-f926-4f01-87d9-b0fc6e7f279f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:24:30 np0005466031 nova_compute[235803]: 2025-10-02 13:24:30.630 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:24:30 np0005466031 nova_compute[235803]: 2025-10-02 13:24:30.782 2 DEBUG nova.network.neutron [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:24:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:31.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.950 2 DEBUG nova.network.neutron [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Updating instance_info_cache with network_info: [{"id": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "address": "fa:16:3e:8f:16:87", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50d2c8f3-84", "ovs_interfaceid": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.971 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Releasing lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.971 2 DEBUG nova.compute.manager [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Instance network_info: |[{"id": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "address": "fa:16:3e:8f:16:87", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50d2c8f3-84", "ovs_interfaceid": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.971 2 DEBUG oslo_concurrency.lockutils [req-964fced5-32cc-4520-aa17-af89a4ca369b req-94528f07-f926-4f01-87d9-b0fc6e7f279f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.972 2 DEBUG nova.network.neutron [req-964fced5-32cc-4520-aa17-af89a4ca369b req-94528f07-f926-4f01-87d9-b0fc6e7f279f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Refreshing network info cache for port 50d2c8f3-84cf-4e4a-8919-ec4a87bef603 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.974 2 DEBUG nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Start _get_guest_xml network_info=[{"id": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "address": "fa:16:3e:8f:16:87", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50d2c8f3-84", "ovs_interfaceid": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T13:24:13Z,direct_url=<?>,disk_format='raw',id=4dbf986e-53ef-4e53-875d-c9d73e683338,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-180760656',owner='ecfaf38d20784d06a43ced1560cede11',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T13:24:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '4dbf986e-53ef-4e53-875d-c9d73e683338'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.978 2 WARNING nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.987 2 DEBUG nova.virt.libvirt.host [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.988 2 DEBUG nova.virt.libvirt.host [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.991 2 DEBUG nova.virt.libvirt.host [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.991 2 DEBUG nova.virt.libvirt.host [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.992 2 DEBUG nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.993 2 DEBUG nova.virt.hardware [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T13:24:13Z,direct_url=<?>,disk_format='raw',id=4dbf986e-53ef-4e53-875d-c9d73e683338,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-180760656',owner='ecfaf38d20784d06a43ced1560cede11',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T13:24:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.993 2 DEBUG nova.virt.hardware [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.993 2 DEBUG nova.virt.hardware [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.993 2 DEBUG nova.virt.hardware [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.994 2 DEBUG nova.virt.hardware [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.994 2 DEBUG nova.virt.hardware [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.994 2 DEBUG nova.virt.hardware [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.994 2 DEBUG nova.virt.hardware [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.995 2 DEBUG nova.virt.hardware [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.995 2 DEBUG nova.virt.hardware [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.995 2 DEBUG nova.virt.hardware [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 09:24:31 np0005466031 nova_compute[235803]: 2025-10-02 13:24:31.998 2 DEBUG oslo_concurrency.processutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:24:32 np0005466031 nova_compute[235803]: 2025-10-02 13:24:32.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:24:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:32.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:24:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1390501743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:24:32 np0005466031 nova_compute[235803]: 2025-10-02 13:24:32.443 2 DEBUG oslo_concurrency.processutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:24:32 np0005466031 nova_compute[235803]: 2025-10-02 13:24:32.473 2 DEBUG nova.storage.rbd_utils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] rbd image 265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:24:32 np0005466031 nova_compute[235803]: 2025-10-02 13:24:32.476 2 DEBUG oslo_concurrency.processutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:24:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:24:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/277055431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:24:32 np0005466031 nova_compute[235803]: 2025-10-02 13:24:32.991 2 DEBUG oslo_concurrency.processutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:24:32 np0005466031 nova_compute[235803]: 2025-10-02 13:24:32.992 2 DEBUG nova.virt.libvirt.vif [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-289182303',display_name='tempest-TestSnapshotPattern-server-289182303',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-289182303',id=217,image_ref='4dbf986e-53ef-4e53-875d-c9d73e683338',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUvYaec5qv58Rt+7etIfxdZk5jlQmcuvdY+kvpbop0QcPd4KDRUXca759VWr6i4CfOCrn9td/XvE0cFAdnY7gxedUw6zaigljtueTe5IN5w0sb9JVzLt/cY3hOst2Sg1A==',key_name='tempest-TestSnapshotPattern-1796131382',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ecfaf38d20784d06a43ced1560cede11',ramdisk_id='',reservation_id='r-g3ri0pc2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='5bc23980-d45b-4acb-9b37-858b821d2252',image_min_disk='1',image_min_ram='0',image_owner_id='ecfaf38d20784d06a43ced1560cede11',image_owner_project_name='tempest-TestSnapshotPattern-1671292510',image_owner_user_name='tempest-TestSnapshotPattern-1671292510-project-member',image_user_id='7eba544d42c8426295f2a88f0e85d446',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1671292510',owner_user_name='tempest-TestSnapshotPattern-1671292510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:24:27Z,user_data=None,user_id='7eba544d42c8426295f2a88f0e85d446',uuid=265d1e0
0-c92e-483b-8e0b-153ca45f2fa5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "address": "fa:16:3e:8f:16:87", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50d2c8f3-84", "ovs_interfaceid": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  2 09:24:32 np0005466031 nova_compute[235803]: 2025-10-02 13:24:32.993 2 DEBUG nova.network.os_vif_util [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Converting VIF {"id": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "address": "fa:16:3e:8f:16:87", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50d2c8f3-84", "ovs_interfaceid": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 09:24:32 np0005466031 nova_compute[235803]: 2025-10-02 13:24:32.993 2 DEBUG nova.network.os_vif_util [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:16:87,bridge_name='br-int',has_traffic_filtering=True,id=50d2c8f3-84cf-4e4a-8919-ec4a87bef603,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50d2c8f3-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 09:24:32 np0005466031 nova_compute[235803]: 2025-10-02 13:24:32.994 2 DEBUG nova.objects.instance [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lazy-loading 'pci_devices' on Instance uuid 265d1e00-c92e-483b-8e0b-153ca45f2fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.010 2 DEBUG nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  <uuid>265d1e00-c92e-483b-8e0b-153ca45f2fa5</uuid>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  <name>instance-000000d9</name>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestSnapshotPattern-server-289182303</nova:name>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:24:31</nova:creationTime>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <nova:user uuid="7eba544d42c8426295f2a88f0e85d446">tempest-TestSnapshotPattern-1671292510-project-member</nova:user>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <nova:project uuid="ecfaf38d20784d06a43ced1560cede11">tempest-TestSnapshotPattern-1671292510</nova:project>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <nova:root type="image" uuid="4dbf986e-53ef-4e53-875d-c9d73e683338"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <nova:port uuid="50d2c8f3-84cf-4e4a-8919-ec4a87bef603">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <entry name="serial">265d1e00-c92e-483b-8e0b-153ca45f2fa5</entry>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <entry name="uuid">265d1e00-c92e-483b-8e0b-153ca45f2fa5</entry>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk.config">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:8f:16:87"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <target dev="tap50d2c8f3-84"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/265d1e00-c92e-483b-8e0b-153ca45f2fa5/console.log" append="off"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <input type="keyboard" bus="usb"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:24:33 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:24:33 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:24:33 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:24:33 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.011 2 DEBUG nova.compute.manager [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Preparing to wait for external event network-vif-plugged-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.011 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.012 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.012 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.013 2 DEBUG nova.virt.libvirt.vif [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-289182303',display_name='tempest-TestSnapshotPattern-server-289182303',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-289182303',id=217,image_ref='4dbf986e-53ef-4e53-875d-c9d73e683338',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUvYaec5qv58Rt+7etIfxdZk5jlQmcuvdY+kvpbop0QcPd4KDRUXca759VWr6i4CfOCrn9td/XvE0cFAdnY7gxedUw6zaigljtueTe5IN5w0sb9JVzLt/cY3hOst2Sg1A==',key_name='tempest-TestSnapshotPattern-1796131382',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ecfaf38d20784d06a43ced1560cede11',ramdisk_id='',reservation_id='r-g3ri0pc2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='5bc23980-d45b-4acb-9b37-858b821d2252',image_min_disk='1',image_min_ram='0',image_owner_id='ecfaf38d20784d06a43ced1560cede11',image_owner_project_name='tempest-TestSnapshotPattern-1671292510',image_owner_user_name='tempest-TestSnapshotPattern-1671292510-project-member',image_user_id='7eba544d42c8426295f2a88f0e85d446',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1671292510',owner_user_name='tempest-TestSnapshotPattern-1671292510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:24:27Z,user_data=None,user_id='7eba544d42c8426295f2a88f0e85d446',uu
id=265d1e00-c92e-483b-8e0b-153ca45f2fa5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "address": "fa:16:3e:8f:16:87", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50d2c8f3-84", "ovs_interfaceid": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.013 2 DEBUG nova.network.os_vif_util [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Converting VIF {"id": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "address": "fa:16:3e:8f:16:87", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50d2c8f3-84", "ovs_interfaceid": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.013 2 DEBUG nova.network.os_vif_util [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:16:87,bridge_name='br-int',has_traffic_filtering=True,id=50d2c8f3-84cf-4e4a-8919-ec4a87bef603,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50d2c8f3-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.014 2 DEBUG os_vif [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:16:87,bridge_name='br-int',has_traffic_filtering=True,id=50d2c8f3-84cf-4e4a-8919-ec4a87bef603,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50d2c8f3-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.015 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.015 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.019 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50d2c8f3-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.020 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50d2c8f3-84, col_values=(('external_ids', {'iface-id': '50d2c8f3-84cf-4e4a-8919-ec4a87bef603', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:16:87', 'vm-uuid': '265d1e00-c92e-483b-8e0b-153ca45f2fa5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:33 np0005466031 NetworkManager[44907]: <info>  [1759411473.0229] manager: (tap50d2c8f3-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.030 2 INFO os_vif [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:16:87,bridge_name='br-int',has_traffic_filtering=True,id=50d2c8f3-84cf-4e4a-8919-ec4a87bef603,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50d2c8f3-84')#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.048 2 DEBUG nova.network.neutron [req-964fced5-32cc-4520-aa17-af89a4ca369b req-94528f07-f926-4f01-87d9-b0fc6e7f279f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Updated VIF entry in instance network info cache for port 50d2c8f3-84cf-4e4a-8919-ec4a87bef603. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.048 2 DEBUG nova.network.neutron [req-964fced5-32cc-4520-aa17-af89a4ca369b req-94528f07-f926-4f01-87d9-b0fc6e7f279f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Updating instance_info_cache with network_info: [{"id": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "address": "fa:16:3e:8f:16:87", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50d2c8f3-84", "ovs_interfaceid": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.062 2 DEBUG oslo_concurrency.lockutils [req-964fced5-32cc-4520-aa17-af89a4ca369b req-94528f07-f926-4f01-87d9-b0fc6e7f279f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.080 2 DEBUG nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.081 2 DEBUG nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.081 2 DEBUG nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] No VIF found with MAC fa:16:3e:8f:16:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.082 2 INFO nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Using config drive#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.113 2 DEBUG nova.storage.rbd_utils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] rbd image 265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:24:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:33.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.671 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.671 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.671 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.671 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.672 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.826 2 INFO nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Creating config drive at /var/lib/nova/instances/265d1e00-c92e-483b-8e0b-153ca45f2fa5/disk.config#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.832 2 DEBUG oslo_concurrency.processutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/265d1e00-c92e-483b-8e0b-153ca45f2fa5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqebv1jiw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:24:33 np0005466031 nova_compute[235803]: 2025-10-02 13:24:33.970 2 DEBUG oslo_concurrency.processutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/265d1e00-c92e-483b-8e0b-153ca45f2fa5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqebv1jiw" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.001 2 DEBUG nova.storage.rbd_utils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] rbd image 265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.007 2 DEBUG oslo_concurrency.processutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/265d1e00-c92e-483b-8e0b-153ca45f2fa5/disk.config 265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.071 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:24:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:24:34 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/634568430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.122 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:24:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:34.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.175 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000d9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.176 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000d9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.241 2 DEBUG oslo_concurrency.processutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/265d1e00-c92e-483b-8e0b-153ca45f2fa5/disk.config 265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.242 2 INFO nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Deleting local config drive /var/lib/nova/instances/265d1e00-c92e-483b-8e0b-153ca45f2fa5/disk.config because it was imported into RBD.#033[00m
Oct  2 09:24:34 np0005466031 kernel: tap50d2c8f3-84: entered promiscuous mode
Oct  2 09:24:34 np0005466031 NetworkManager[44907]: <info>  [1759411474.3001] manager: (tap50d2c8f3-84): new Tun device (/org/freedesktop/NetworkManager/Devices/384)
Oct  2 09:24:34 np0005466031 ovn_controller[132413]: 2025-10-02T13:24:34Z|00853|binding|INFO|Claiming lport 50d2c8f3-84cf-4e4a-8919-ec4a87bef603 for this chassis.
Oct  2 09:24:34 np0005466031 ovn_controller[132413]: 2025-10-02T13:24:34Z|00854|binding|INFO|50d2c8f3-84cf-4e4a-8919-ec4a87bef603: Claiming fa:16:3e:8f:16:87 10.100.0.13
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:34 np0005466031 NetworkManager[44907]: <info>  [1759411474.3172] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:34 np0005466031 NetworkManager[44907]: <info>  [1759411474.3183] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.321 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:16:87 10.100.0.13'], port_security=['fa:16:3e:8f:16:87 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '265d1e00-c92e-483b-8e0b-153ca45f2fa5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecfaf38d20784d06a43ced1560cede11', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd43d4ef4-4d2c-4cee-ba2f-defd20425090', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7f2f940-ba8f-4f9e-ad7e-a0f8c2d0d84f, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=50d2c8f3-84cf-4e4a-8919-ec4a87bef603) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.322 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 50d2c8f3-84cf-4e4a-8919-ec4a87bef603 in datapath 769e2243-bbee-45a3-8fea-c44f0ea9a1e8 bound to our chassis#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.323 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 769e2243-bbee-45a3-8fea-c44f0ea9a1e8#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.335 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[250e1335-3b9e-4c5b-9fb2-e9e3388bfd21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.336 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap769e2243-b1 in ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:24:34 np0005466031 systemd-machined[192227]: New machine qemu-98-instance-000000d9.
Oct  2 09:24:34 np0005466031 systemd-udevd[335731]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.338 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap769e2243-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.338 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9a644261-d3a8-401d-89e4-1c284a3631bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.339 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0b085a-7da7-4560-9f2f-409fdf05dc79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 NetworkManager[44907]: <info>  [1759411474.3527] device (tap50d2c8f3-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:24:34 np0005466031 NetworkManager[44907]: <info>  [1759411474.3534] device (tap50d2c8f3-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.352 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[73e3e946-2cd2-4f26-8cd2-4d8012bc50cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 systemd[1]: Started Virtual Machine qemu-98-instance-000000d9.
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.380 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.381 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc7911d-1967-4cc4-8ee7-70a42cd84899]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.382 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4133MB free_disk=20.942703247070312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.382 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.382 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.409 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[ff980375-7ca7-4e13-81b6-01c26f4813c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 systemd-udevd[335734]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.419 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[34794f27-c032-4de9-8faf-a9ec406ac3c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 NetworkManager[44907]: <info>  [1759411474.4242] manager: (tap769e2243-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/387)
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.456 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fcc676-7514-4f11-b575-c02a57f247d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.460 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[0f712896-21fa-4483-bec1-249554098c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 ovn_controller[132413]: 2025-10-02T13:24:34Z|00855|binding|INFO|Setting lport 50d2c8f3-84cf-4e4a-8919-ec4a87bef603 ovn-installed in OVS
Oct  2 09:24:34 np0005466031 ovn_controller[132413]: 2025-10-02T13:24:34Z|00856|binding|INFO|Setting lport 50d2c8f3-84cf-4e4a-8919-ec4a87bef603 up in Southbound
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:34 np0005466031 NetworkManager[44907]: <info>  [1759411474.4836] device (tap769e2243-b0): carrier: link connected
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.489 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[72e3e3a1-974b-4c74-9e7c-1203269094be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.510 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2f854f59-ffc0-44c6-a848-c77a13d196d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap769e2243-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:fb:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 933007, 'reachable_time': 16602, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335763, 'error': None, 'target': 'ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.528 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2077d3-9e1e-438c-a081-ef9193d4de73]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3f:fbe0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 933007, 'tstamp': 933007}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335764, 'error': None, 'target': 'ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.536 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 265d1e00-c92e-483b-8e0b-153ca45f2fa5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.537 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.537 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.549 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f34eac03-faf6-469c-b077-62e05a4a3b36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap769e2243-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:fb:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 933007, 'reachable_time': 16602, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335765, 'error': None, 'target': 'ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.581 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[454d73c8-02b0-4a49-87f0-054f32f71406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.614 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.641 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5935d4-1365-4949-99f0-bb5f001401f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.644 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap769e2243-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.644 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.645 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap769e2243-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:34 np0005466031 kernel: tap769e2243-b0: entered promiscuous mode
Oct  2 09:24:34 np0005466031 NetworkManager[44907]: <info>  [1759411474.6478] manager: (tap769e2243-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.651 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap769e2243-b0, col_values=(('external_ids', {'iface-id': '866389a0-d931-4b6e-8a65-5bb4c8a2b7c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:24:34 np0005466031 ovn_controller[132413]: 2025-10-02T13:24:34Z|00857|binding|INFO|Releasing lport 866389a0-d931-4b6e-8a65-5bb4c8a2b7c2 from this chassis (sb_readonly=0)
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.655 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/769e2243-bbee-45a3-8fea-c44f0ea9a1e8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/769e2243-bbee-45a3-8fea-c44f0ea9a1e8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.656 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[434ddfbb-4185-421c-8c81-eb33bc46386d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.656 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-769e2243-bbee-45a3-8fea-c44f0ea9a1e8
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/769e2243-bbee-45a3-8fea-c44f0ea9a1e8.pid.haproxy
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 769e2243-bbee-45a3-8fea-c44f0ea9a1e8
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:24:34 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:24:34.657 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'env', 'PROCESS_TAG=haproxy-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/769e2243-bbee-45a3-8fea-c44f0ea9a1e8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.947 2 DEBUG nova.compute.manager [req-f2ea1dbc-0133-43b7-9e09-0b9cca0e61b5 req-e15d32df-639c-4f2a-8246-b2683babefb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Received event network-vif-plugged-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.948 2 DEBUG oslo_concurrency.lockutils [req-f2ea1dbc-0133-43b7-9e09-0b9cca0e61b5 req-e15d32df-639c-4f2a-8246-b2683babefb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.949 2 DEBUG oslo_concurrency.lockutils [req-f2ea1dbc-0133-43b7-9e09-0b9cca0e61b5 req-e15d32df-639c-4f2a-8246-b2683babefb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.949 2 DEBUG oslo_concurrency.lockutils [req-f2ea1dbc-0133-43b7-9e09-0b9cca0e61b5 req-e15d32df-639c-4f2a-8246-b2683babefb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:24:34 np0005466031 nova_compute[235803]: 2025-10-02 13:24:34.949 2 DEBUG nova.compute.manager [req-f2ea1dbc-0133-43b7-9e09-0b9cca0e61b5 req-e15d32df-639c-4f2a-8246-b2683babefb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Processing event network-vif-plugged-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:24:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:24:35 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3827424755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.086 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.093 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:24:35 np0005466031 podman[335859]: 2025-10-02 13:24:35.09783307 +0000 UTC m=+0.110930025 container create 7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:24:35 np0005466031 podman[335859]: 2025-10-02 13:24:35.018483895 +0000 UTC m=+0.031580870 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.115 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:24:35 np0005466031 systemd[1]: Started libpod-conmon-7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930.scope.
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.143 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.144 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:24:35 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:24:35 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe751809ef80f3cfc63615782884bfbfb8ea10dd242382edcb27b2cca67e11cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:24:35 np0005466031 podman[335859]: 2025-10-02 13:24:35.181266552 +0000 UTC m=+0.194363527 container init 7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 09:24:35 np0005466031 podman[335859]: 2025-10-02 13:24:35.186481572 +0000 UTC m=+0.199578527 container start 7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:24:35 np0005466031 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[335876]: [NOTICE]   (335880) : New worker (335882) forked
Oct  2 09:24:35 np0005466031 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[335876]: [NOTICE]   (335880) : Loading success.
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.276 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411475.2759664, 265d1e00-c92e-483b-8e0b-153ca45f2fa5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.277 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] VM Started (Lifecycle Event)#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.279 2 DEBUG nova.compute.manager [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.282 2 DEBUG nova.virt.libvirt.driver [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.285 2 INFO nova.virt.libvirt.driver [-] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Instance spawned successfully.#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.286 2 INFO nova.compute.manager [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Took 8.18 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.286 2 DEBUG nova.compute.manager [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.313 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.316 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:24:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:35.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.514 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.515 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411475.2761555, 265d1e00-c92e-483b-8e0b-153ca45f2fa5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.515 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.522 2 INFO nova.compute.manager [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Took 9.28 seconds to build instance.#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.552 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.553 2 DEBUG oslo_concurrency.lockutils [None req-de8ce9c2-7116-4207-a767-08e0017cf08b 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.556 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411475.2820954, 265d1e00-c92e-483b-8e0b-153ca45f2fa5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.556 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.578 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:24:35 np0005466031 nova_compute[235803]: 2025-10-02 13:24:35.581 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:24:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:36.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:36 np0005466031 nova_compute[235803]: 2025-10-02 13:24:36.144 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:36 np0005466031 nova_compute[235803]: 2025-10-02 13:24:36.145 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:37 np0005466031 nova_compute[235803]: 2025-10-02 13:24:37.084 2 DEBUG nova.compute.manager [req-d482c66c-9a92-464a-a1a0-a8950fbf60a4 req-5dfd9338-23a5-4743-a373-14e1809e0512 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Received event network-vif-plugged-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:24:37 np0005466031 nova_compute[235803]: 2025-10-02 13:24:37.085 2 DEBUG oslo_concurrency.lockutils [req-d482c66c-9a92-464a-a1a0-a8950fbf60a4 req-5dfd9338-23a5-4743-a373-14e1809e0512 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:24:37 np0005466031 nova_compute[235803]: 2025-10-02 13:24:37.085 2 DEBUG oslo_concurrency.lockutils [req-d482c66c-9a92-464a-a1a0-a8950fbf60a4 req-5dfd9338-23a5-4743-a373-14e1809e0512 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:24:37 np0005466031 nova_compute[235803]: 2025-10-02 13:24:37.085 2 DEBUG oslo_concurrency.lockutils [req-d482c66c-9a92-464a-a1a0-a8950fbf60a4 req-5dfd9338-23a5-4743-a373-14e1809e0512 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:24:37 np0005466031 nova_compute[235803]: 2025-10-02 13:24:37.086 2 DEBUG nova.compute.manager [req-d482c66c-9a92-464a-a1a0-a8950fbf60a4 req-5dfd9338-23a5-4743-a373-14e1809e0512 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] No waiting events found dispatching network-vif-plugged-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:24:37 np0005466031 nova_compute[235803]: 2025-10-02 13:24:37.086 2 WARNING nova.compute.manager [req-d482c66c-9a92-464a-a1a0-a8950fbf60a4 req-5dfd9338-23a5-4743-a373-14e1809e0512 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Received unexpected event network-vif-plugged-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:24:37 np0005466031 nova_compute[235803]: 2025-10-02 13:24:37.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:37.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:38 np0005466031 nova_compute[235803]: 2025-10-02 13:24:38.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:38.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:39 np0005466031 nova_compute[235803]: 2025-10-02 13:24:39.350 2 DEBUG nova.compute.manager [req-fb3bdc1d-996e-448b-b3e7-ae8e0de470b6 req-6490a171-8152-4469-9663-5bcecb4fbf86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Received event network-changed-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:24:39 np0005466031 nova_compute[235803]: 2025-10-02 13:24:39.351 2 DEBUG nova.compute.manager [req-fb3bdc1d-996e-448b-b3e7-ae8e0de470b6 req-6490a171-8152-4469-9663-5bcecb4fbf86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Refreshing instance network info cache due to event network-changed-50d2c8f3-84cf-4e4a-8919-ec4a87bef603. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:24:39 np0005466031 nova_compute[235803]: 2025-10-02 13:24:39.351 2 DEBUG oslo_concurrency.lockutils [req-fb3bdc1d-996e-448b-b3e7-ae8e0de470b6 req-6490a171-8152-4469-9663-5bcecb4fbf86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:24:39 np0005466031 nova_compute[235803]: 2025-10-02 13:24:39.351 2 DEBUG oslo_concurrency.lockutils [req-fb3bdc1d-996e-448b-b3e7-ae8e0de470b6 req-6490a171-8152-4469-9663-5bcecb4fbf86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:24:39 np0005466031 nova_compute[235803]: 2025-10-02 13:24:39.352 2 DEBUG nova.network.neutron [req-fb3bdc1d-996e-448b-b3e7-ae8e0de470b6 req-6490a171-8152-4469-9663-5bcecb4fbf86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Refreshing network info cache for port 50d2c8f3-84cf-4e4a-8919-ec4a87bef603 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:24:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:39.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:39 np0005466031 nova_compute[235803]: 2025-10-02 13:24:39.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:24:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:40.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:24:40 np0005466031 nova_compute[235803]: 2025-10-02 13:24:40.252 2 DEBUG nova.network.neutron [req-fb3bdc1d-996e-448b-b3e7-ae8e0de470b6 req-6490a171-8152-4469-9663-5bcecb4fbf86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Updated VIF entry in instance network info cache for port 50d2c8f3-84cf-4e4a-8919-ec4a87bef603. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:24:40 np0005466031 nova_compute[235803]: 2025-10-02 13:24:40.253 2 DEBUG nova.network.neutron [req-fb3bdc1d-996e-448b-b3e7-ae8e0de470b6 req-6490a171-8152-4469-9663-5bcecb4fbf86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Updating instance_info_cache with network_info: [{"id": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "address": "fa:16:3e:8f:16:87", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50d2c8f3-84", "ovs_interfaceid": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:24:40 np0005466031 nova_compute[235803]: 2025-10-02 13:24:40.270 2 DEBUG oslo_concurrency.lockutils [req-fb3bdc1d-996e-448b-b3e7-ae8e0de470b6 req-6490a171-8152-4469-9663-5bcecb4fbf86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:24:40 np0005466031 nova_compute[235803]: 2025-10-02 13:24:40.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:40 np0005466031 nova_compute[235803]: 2025-10-02 13:24:40.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:24:40 np0005466031 nova_compute[235803]: 2025-10-02 13:24:40.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:24:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:41.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:41 np0005466031 nova_compute[235803]: 2025-10-02 13:24:41.797 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:24:41 np0005466031 nova_compute[235803]: 2025-10-02 13:24:41.798 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:24:41 np0005466031 nova_compute[235803]: 2025-10-02 13:24:41.798 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:24:41 np0005466031 nova_compute[235803]: 2025-10-02 13:24:41.798 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 265d1e00-c92e-483b-8e0b-153ca45f2fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:24:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:42.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:42 np0005466031 nova_compute[235803]: 2025-10-02 13:24:42.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:42 np0005466031 podman[335945]: 2025-10-02 13:24:42.62739956 +0000 UTC m=+0.050550337 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 09:24:42 np0005466031 podman[335946]: 2025-10-02 13:24:42.661393618 +0000 UTC m=+0.079997074 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:24:43 np0005466031 nova_compute[235803]: 2025-10-02 13:24:43.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:43.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:44.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:44 np0005466031 nova_compute[235803]: 2025-10-02 13:24:44.816 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Updating instance_info_cache with network_info: [{"id": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "address": "fa:16:3e:8f:16:87", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50d2c8f3-84", "ovs_interfaceid": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:24:44 np0005466031 nova_compute[235803]: 2025-10-02 13:24:44.828 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:24:44 np0005466031 nova_compute[235803]: 2025-10-02 13:24:44.828 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:24:44 np0005466031 nova_compute[235803]: 2025-10-02 13:24:44.829 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:44 np0005466031 nova_compute[235803]: 2025-10-02 13:24:44.829 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:24:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:45.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:46.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:47 np0005466031 nova_compute[235803]: 2025-10-02 13:24:47.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:47.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:47 np0005466031 nova_compute[235803]: 2025-10-02 13:24:47.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:48 np0005466031 nova_compute[235803]: 2025-10-02 13:24:48.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:48.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:48 np0005466031 ovn_controller[132413]: 2025-10-02T13:24:48Z|00104|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.13
Oct  2 09:24:48 np0005466031 ovn_controller[132413]: 2025-10-02T13:24:48Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:8f:16:87 10.100.0.13
Oct  2 09:24:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:49.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:24:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:50.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:24:50 np0005466031 podman[335995]: 2025-10-02 13:24:50.629751153 +0000 UTC m=+0.050988529 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct  2 09:24:50 np0005466031 podman[335996]: 2025-10-02 13:24:50.64352769 +0000 UTC m=+0.061021908 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:24:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:51.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:52 np0005466031 nova_compute[235803]: 2025-10-02 13:24:52.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:52.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:53 np0005466031 nova_compute[235803]: 2025-10-02 13:24:53.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:53.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:53 np0005466031 ovn_controller[132413]: 2025-10-02T13:24:53Z|00106|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.13
Oct  2 09:24:53 np0005466031 ovn_controller[132413]: 2025-10-02T13:24:53Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:8f:16:87 10.100.0.13
Oct  2 09:24:53 np0005466031 ovn_controller[132413]: 2025-10-02T13:24:53Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:16:87 10.100.0.13
Oct  2 09:24:53 np0005466031 ovn_controller[132413]: 2025-10-02T13:24:53Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:16:87 10.100.0.13
Oct  2 09:24:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:24:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:54.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:24:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:55.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:56.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:57 np0005466031 nova_compute[235803]: 2025-10-02 13:24:57.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:57.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:58 np0005466031 nova_compute[235803]: 2025-10-02 13:24:58.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:58.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:24:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:59.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:00.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:01.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:02 np0005466031 nova_compute[235803]: 2025-10-02 13:25:02.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:02.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:25:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:25:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:25:02 np0005466031 nova_compute[235803]: 2025-10-02 13:25:02.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:03 np0005466031 nova_compute[235803]: 2025-10-02 13:25:03.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:03.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:04.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:05.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:06.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:07 np0005466031 nova_compute[235803]: 2025-10-02 13:25:07.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:07.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:08 np0005466031 nova_compute[235803]: 2025-10-02 13:25:08.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:08.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:09 np0005466031 ovn_controller[132413]: 2025-10-02T13:25:09Z|00858|memory_trim|INFO|Detected inactivity (last active 30028 ms ago): trimming memory
Oct  2 09:25:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:25:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:09.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:25:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:10.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:10 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:25:10 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:25:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:25:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:11.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:25:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:11 np0005466031 nova_compute[235803]: 2025-10-02 13:25:11.877 2 DEBUG nova.compute.manager [None req-6dfa56f2-354d-4a6a-aec3-35bf30bd8583 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:25:11 np0005466031 nova_compute[235803]: 2025-10-02 13:25:11.930 2 INFO nova.compute.manager [None req-6dfa56f2-354d-4a6a-aec3-35bf30bd8583 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] instance snapshotting#033[00m
Oct  2 09:25:12 np0005466031 nova_compute[235803]: 2025-10-02 13:25:12.146 2 INFO nova.virt.libvirt.driver [None req-6dfa56f2-354d-4a6a-aec3-35bf30bd8583 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Beginning live snapshot process#033[00m
Oct  2 09:25:12 np0005466031 nova_compute[235803]: 2025-10-02 13:25:12.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:12.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:12 np0005466031 nova_compute[235803]: 2025-10-02 13:25:12.286 2 DEBUG nova.storage.rbd_utils [None req-6dfa56f2-354d-4a6a-aec3-35bf30bd8583 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] creating snapshot(ef205d9079ce43f5a5920e3d3d48204e) on rbd image(265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 09:25:13 np0005466031 nova_compute[235803]: 2025-10-02 13:25:13.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e412 e412: 3 total, 3 up, 3 in
Oct  2 09:25:13 np0005466031 nova_compute[235803]: 2025-10-02 13:25:13.322 2 DEBUG nova.storage.rbd_utils [None req-6dfa56f2-354d-4a6a-aec3-35bf30bd8583 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] cloning vms/265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk@ef205d9079ce43f5a5920e3d3d48204e to images/7337ab36-547f-4549-871f-2f16bbc93bb9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 09:25:13 np0005466031 nova_compute[235803]: 2025-10-02 13:25:13.454 2 DEBUG nova.storage.rbd_utils [None req-6dfa56f2-354d-4a6a-aec3-35bf30bd8583 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] flattening images/7337ab36-547f-4549-871f-2f16bbc93bb9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 09:25:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:13.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:13 np0005466031 podman[336381]: 2025-10-02 13:25:13.656920607 +0000 UTC m=+0.081986002 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:25:13 np0005466031 podman[336382]: 2025-10-02 13:25:13.670521609 +0000 UTC m=+0.089799137 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:25:13 np0005466031 nova_compute[235803]: 2025-10-02 13:25:13.992 2 DEBUG nova.storage.rbd_utils [None req-6dfa56f2-354d-4a6a-aec3-35bf30bd8583 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] removing snapshot(ef205d9079ce43f5a5920e3d3d48204e) on rbd image(265d1e00-c92e-483b-8e0b-153ca45f2fa5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 09:25:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:14.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e413 e413: 3 total, 3 up, 3 in
Oct  2 09:25:14 np0005466031 nova_compute[235803]: 2025-10-02 13:25:14.319 2 DEBUG nova.storage.rbd_utils [None req-6dfa56f2-354d-4a6a-aec3-35bf30bd8583 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] creating snapshot(snap) on rbd image(7337ab36-547f-4549-871f-2f16bbc93bb9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 09:25:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e414 e414: 3 total, 3 up, 3 in
Oct  2 09:25:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:15.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:16.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:16 np0005466031 nova_compute[235803]: 2025-10-02 13:25:16.784 2 INFO nova.virt.libvirt.driver [None req-6dfa56f2-354d-4a6a-aec3-35bf30bd8583 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Snapshot image upload complete#033[00m
Oct  2 09:25:16 np0005466031 nova_compute[235803]: 2025-10-02 13:25:16.785 2 INFO nova.compute.manager [None req-6dfa56f2-354d-4a6a-aec3-35bf30bd8583 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Took 4.85 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 09:25:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:17 np0005466031 nova_compute[235803]: 2025-10-02 13:25:17.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:17.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:18 np0005466031 nova_compute[235803]: 2025-10-02 13:25:18.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:18.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e415 e415: 3 total, 3 up, 3 in
Oct  2 09:25:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:19.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:20.079 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:25:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:20.081 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:25:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:20.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.274 2 DEBUG nova.compute.manager [req-b730c444-953d-4a9f-88da-0b889d50ae75 req-9809b544-ca70-49f7-b444-ff25c12723e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Received event network-changed-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.274 2 DEBUG nova.compute.manager [req-b730c444-953d-4a9f-88da-0b889d50ae75 req-9809b544-ca70-49f7-b444-ff25c12723e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Refreshing instance network info cache due to event network-changed-50d2c8f3-84cf-4e4a-8919-ec4a87bef603. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.274 2 DEBUG oslo_concurrency.lockutils [req-b730c444-953d-4a9f-88da-0b889d50ae75 req-9809b544-ca70-49f7-b444-ff25c12723e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.274 2 DEBUG oslo_concurrency.lockutils [req-b730c444-953d-4a9f-88da-0b889d50ae75 req-9809b544-ca70-49f7-b444-ff25c12723e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.275 2 DEBUG nova.network.neutron [req-b730c444-953d-4a9f-88da-0b889d50ae75 req-9809b544-ca70-49f7-b444-ff25c12723e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Refreshing network info cache for port 50d2c8f3-84cf-4e4a-8919-ec4a87bef603 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.359 2 DEBUG oslo_concurrency.lockutils [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.360 2 DEBUG oslo_concurrency.lockutils [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.360 2 DEBUG oslo_concurrency.lockutils [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.360 2 DEBUG oslo_concurrency.lockutils [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.361 2 DEBUG oslo_concurrency.lockutils [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.362 2 INFO nova.compute.manager [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Terminating instance#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.362 2 DEBUG nova.compute.manager [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:25:20 np0005466031 kernel: tap50d2c8f3-84 (unregistering): left promiscuous mode
Oct  2 09:25:20 np0005466031 NetworkManager[44907]: <info>  [1759411520.4714] device (tap50d2c8f3-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:25:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:25:20Z|00859|binding|INFO|Releasing lport 50d2c8f3-84cf-4e4a-8919-ec4a87bef603 from this chassis (sb_readonly=0)
Oct  2 09:25:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:25:20Z|00860|binding|INFO|Setting lport 50d2c8f3-84cf-4e4a-8919-ec4a87bef603 down in Southbound
Oct  2 09:25:20 np0005466031 ovn_controller[132413]: 2025-10-02T13:25:20Z|00861|binding|INFO|Removing iface tap50d2c8f3-84 ovn-installed in OVS
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:20.492 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:16:87 10.100.0.13'], port_security=['fa:16:3e:8f:16:87 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '265d1e00-c92e-483b-8e0b-153ca45f2fa5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecfaf38d20784d06a43ced1560cede11', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd43d4ef4-4d2c-4cee-ba2f-defd20425090', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7f2f940-ba8f-4f9e-ad7e-a0f8c2d0d84f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=50d2c8f3-84cf-4e4a-8919-ec4a87bef603) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:25:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:20.496 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 50d2c8f3-84cf-4e4a-8919-ec4a87bef603 in datapath 769e2243-bbee-45a3-8fea-c44f0ea9a1e8 unbound from our chassis#033[00m
Oct  2 09:25:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:20.497 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 769e2243-bbee-45a3-8fea-c44f0ea9a1e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:25:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:20.499 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f47509c4-d007-4328-a8ea-df3734ca5626]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:25:20 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:20.500 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8 namespace which is not needed anymore#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:20 np0005466031 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000d9.scope: Deactivated successfully.
Oct  2 09:25:20 np0005466031 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000d9.scope: Consumed 16.128s CPU time.
Oct  2 09:25:20 np0005466031 systemd-machined[192227]: Machine qemu-98-instance-000000d9 terminated.
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.605 2 INFO nova.virt.libvirt.driver [-] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Instance destroyed successfully.#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.606 2 DEBUG nova.objects.instance [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lazy-loading 'resources' on Instance uuid 265d1e00-c92e-483b-8e0b-153ca45f2fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.619 2 DEBUG nova.virt.libvirt.vif [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-289182303',display_name='tempest-TestSnapshotPattern-server-289182303',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-289182303',id=217,image_ref='4dbf986e-53ef-4e53-875d-c9d73e683338',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUvYaec5qv58Rt+7etIfxdZk5jlQmcuvdY+kvpbop0QcPd4KDRUXca759VWr6i4CfOCrn9td/XvE0cFAdnY7gxedUw6zaigljtueTe5IN5w0sb9JVzLt/cY3hOst2Sg1A==',key_name='tempest-TestSnapshotPattern-1796131382',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:24:35Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ecfaf38d20784d06a43ced1560cede11',ramdisk_id='',reservation_id='r-g3ri0pc2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='5bc23980-d45b-4acb-9b37-858b821d2252',image_min_disk='1',image_min_ram='0',image_owner_id='ecfaf38d20784d06a43ced1560cede11',image_owner_project_name='tempest-TestSnapshotPattern-1671292510',image_owner_user_name='tempest-TestSnapshotPattern-1671292510-project-member',image_user_id='7eba544d42c8426295f2a88f0e85d446',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-1671292510',owner_user_name='tempest-TestSnapshotPattern-1671292510-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:25:16Z,user_data=None,user_id='7eba544d42c8426295f2a88f0e85d446',uuid=265d1e00-c92e-483b-8e0b-153ca45f2fa5,vcpu_mode
l=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "address": "fa:16:3e:8f:16:87", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50d2c8f3-84", "ovs_interfaceid": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.619 2 DEBUG nova.network.os_vif_util [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Converting VIF {"id": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "address": "fa:16:3e:8f:16:87", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50d2c8f3-84", "ovs_interfaceid": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.620 2 DEBUG nova.network.os_vif_util [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:16:87,bridge_name='br-int',has_traffic_filtering=True,id=50d2c8f3-84cf-4e4a-8919-ec4a87bef603,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50d2c8f3-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.620 2 DEBUG os_vif [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:16:87,bridge_name='br-int',has_traffic_filtering=True,id=50d2c8f3-84cf-4e4a-8919-ec4a87bef603,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50d2c8f3-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50d2c8f3-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.628 2 INFO os_vif [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:16:87,bridge_name='br-int',has_traffic_filtering=True,id=50d2c8f3-84cf-4e4a-8919-ec4a87bef603,network=Network(769e2243-bbee-45a3-8fea-c44f0ea9a1e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50d2c8f3-84')#033[00m
Oct  2 09:25:20 np0005466031 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[335876]: [NOTICE]   (335880) : haproxy version is 2.8.14-c23fe91
Oct  2 09:25:20 np0005466031 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[335876]: [NOTICE]   (335880) : path to executable is /usr/sbin/haproxy
Oct  2 09:25:20 np0005466031 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[335876]: [WARNING]  (335880) : Exiting Master process...
Oct  2 09:25:20 np0005466031 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[335876]: [WARNING]  (335880) : Exiting Master process...
Oct  2 09:25:20 np0005466031 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[335876]: [ALERT]    (335880) : Current worker (335882) exited with code 143 (Terminated)
Oct  2 09:25:20 np0005466031 neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8[335876]: [WARNING]  (335880) : All workers exited. Exiting... (0)
Oct  2 09:25:20 np0005466031 systemd[1]: libpod-7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930.scope: Deactivated successfully.
Oct  2 09:25:20 np0005466031 podman[336543]: 2025-10-02 13:25:20.683905654 +0000 UTC m=+0.076979787 container died 7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:25:20 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930-userdata-shm.mount: Deactivated successfully.
Oct  2 09:25:20 np0005466031 systemd[1]: var-lib-containers-storage-overlay-fe751809ef80f3cfc63615782884bfbfb8ea10dd242382edcb27b2cca67e11cb-merged.mount: Deactivated successfully.
Oct  2 09:25:20 np0005466031 podman[336543]: 2025-10-02 13:25:20.840748781 +0000 UTC m=+0.233822884 container cleanup 7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:25:20 np0005466031 systemd[1]: libpod-conmon-7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930.scope: Deactivated successfully.
Oct  2 09:25:20 np0005466031 podman[336591]: 2025-10-02 13:25:20.880346531 +0000 UTC m=+0.164441996 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:25:20 np0005466031 podman[336582]: 2025-10-02 13:25:20.88759751 +0000 UTC m=+0.174707692 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.932 2 DEBUG nova.compute.manager [req-78f92cab-fe6f-465a-a2f6-b9d083588788 req-11cff1a4-999a-42ee-a6d9-8084640e17b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Received event network-vif-unplugged-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.933 2 DEBUG oslo_concurrency.lockutils [req-78f92cab-fe6f-465a-a2f6-b9d083588788 req-11cff1a4-999a-42ee-a6d9-8084640e17b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.933 2 DEBUG oslo_concurrency.lockutils [req-78f92cab-fe6f-465a-a2f6-b9d083588788 req-11cff1a4-999a-42ee-a6d9-8084640e17b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.933 2 DEBUG oslo_concurrency.lockutils [req-78f92cab-fe6f-465a-a2f6-b9d083588788 req-11cff1a4-999a-42ee-a6d9-8084640e17b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.933 2 DEBUG nova.compute.manager [req-78f92cab-fe6f-465a-a2f6-b9d083588788 req-11cff1a4-999a-42ee-a6d9-8084640e17b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] No waiting events found dispatching network-vif-unplugged-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:25:20 np0005466031 nova_compute[235803]: 2025-10-02 13:25:20.934 2 DEBUG nova.compute.manager [req-78f92cab-fe6f-465a-a2f6-b9d083588788 req-11cff1a4-999a-42ee-a6d9-8084640e17b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Received event network-vif-unplugged-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:25:21 np0005466031 podman[336614]: 2025-10-02 13:25:21.437849154 +0000 UTC m=+0.574720550 container remove 7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 09:25:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:21.444 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[82c651ac-deca-4268-a013-e8840226dc7a]: (4, ('Thu Oct  2 01:25:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8 (7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930)\n7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930\nThu Oct  2 01:25:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8 (7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930)\n7872d601a2361e33e81a11b9cd41e90cdbd015dff435ce9981cb63dfc7183930\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:25:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:21.446 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c65d3c45-540c-4ee4-9273-88749fa9a583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:25:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:21.446 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap769e2243-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:25:21 np0005466031 nova_compute[235803]: 2025-10-02 13:25:21.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:21 np0005466031 kernel: tap769e2243-b0: left promiscuous mode
Oct  2 09:25:21 np0005466031 nova_compute[235803]: 2025-10-02 13:25:21.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:21.453 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b501c9bb-e10d-41fb-8a66-d567d2e25211]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:25:21 np0005466031 nova_compute[235803]: 2025-10-02 13:25:21.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:25:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:21.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:25:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:21.488 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4fddba-b8cb-42c3-b96a-280233cfec7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:25:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:21.489 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a4033b8e-0388-441c-bd2c-72826a263847]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:25:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:21.506 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2c592e96-d5c9-4d29-9ae6-27da88b3f47e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 932999, 'reachable_time': 33668, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336647, 'error': None, 'target': 'ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:25:21 np0005466031 systemd[1]: run-netns-ovnmeta\x2d769e2243\x2dbbee\x2d45a3\x2d8fea\x2dc44f0ea9a1e8.mount: Deactivated successfully.
Oct  2 09:25:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:21.510 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-769e2243-bbee-45a3-8fea-c44f0ea9a1e8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:25:21 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:21.511 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[da1ea637-0bb9-42c4-8984-963baeb304e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:25:21 np0005466031 nova_compute[235803]: 2025-10-02 13:25:21.543 2 DEBUG nova.network.neutron [req-b730c444-953d-4a9f-88da-0b889d50ae75 req-9809b544-ca70-49f7-b444-ff25c12723e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Updated VIF entry in instance network info cache for port 50d2c8f3-84cf-4e4a-8919-ec4a87bef603. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:25:21 np0005466031 nova_compute[235803]: 2025-10-02 13:25:21.544 2 DEBUG nova.network.neutron [req-b730c444-953d-4a9f-88da-0b889d50ae75 req-9809b544-ca70-49f7-b444-ff25c12723e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Updating instance_info_cache with network_info: [{"id": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "address": "fa:16:3e:8f:16:87", "network": {"id": "769e2243-bbee-45a3-8fea-c44f0ea9a1e8", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1174794611-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ecfaf38d20784d06a43ced1560cede11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50d2c8f3-84", "ovs_interfaceid": "50d2c8f3-84cf-4e4a-8919-ec4a87bef603", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:25:21 np0005466031 nova_compute[235803]: 2025-10-02 13:25:21.574 2 DEBUG oslo_concurrency.lockutils [req-b730c444-953d-4a9f-88da-0b889d50ae75 req-9809b544-ca70-49f7-b444-ff25c12723e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-265d1e00-c92e-483b-8e0b-153ca45f2fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:25:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e416 e416: 3 total, 3 up, 3 in
Oct  2 09:25:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:22.083 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:25:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:25:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:22.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:25:22 np0005466031 nova_compute[235803]: 2025-10-02 13:25:22.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:22 np0005466031 nova_compute[235803]: 2025-10-02 13:25:22.989 2 INFO nova.virt.libvirt.driver [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Deleting instance files /var/lib/nova/instances/265d1e00-c92e-483b-8e0b-153ca45f2fa5_del#033[00m
Oct  2 09:25:22 np0005466031 nova_compute[235803]: 2025-10-02 13:25:22.989 2 INFO nova.virt.libvirt.driver [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Deletion of /var/lib/nova/instances/265d1e00-c92e-483b-8e0b-153ca45f2fa5_del complete#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.000 2 DEBUG nova.compute.manager [req-72105506-39c7-4efc-ba51-09b8a151e557 req-69f2b82d-55be-4662-a84c-048c1aeec25e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Received event network-vif-plugged-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.000 2 DEBUG oslo_concurrency.lockutils [req-72105506-39c7-4efc-ba51-09b8a151e557 req-69f2b82d-55be-4662-a84c-048c1aeec25e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.000 2 DEBUG oslo_concurrency.lockutils [req-72105506-39c7-4efc-ba51-09b8a151e557 req-69f2b82d-55be-4662-a84c-048c1aeec25e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.001 2 DEBUG oslo_concurrency.lockutils [req-72105506-39c7-4efc-ba51-09b8a151e557 req-69f2b82d-55be-4662-a84c-048c1aeec25e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.001 2 DEBUG nova.compute.manager [req-72105506-39c7-4efc-ba51-09b8a151e557 req-69f2b82d-55be-4662-a84c-048c1aeec25e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] No waiting events found dispatching network-vif-plugged-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.001 2 WARNING nova.compute.manager [req-72105506-39c7-4efc-ba51-09b8a151e557 req-69f2b82d-55be-4662-a84c-048c1aeec25e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Received unexpected event network-vif-plugged-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.038 2 INFO nova.compute.manager [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Took 2.68 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.039 2 DEBUG oslo.service.loopingcall [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.039 2 DEBUG nova.compute.manager [-] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.039 2 DEBUG nova.network.neutron [-] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:25:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:23.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.867 2 DEBUG nova.network.neutron [-] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.885 2 INFO nova.compute.manager [-] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Took 0.85 seconds to deallocate network for instance.#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.915 2 DEBUG nova.compute.manager [req-d53ce39e-3220-4de2-bb5f-a5f581b206ac req-f3d8d3f7-d4da-4087-bf45-5cbafdfa170d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Received event network-vif-deleted-50d2c8f3-84cf-4e4a-8919-ec4a87bef603 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.924 2 DEBUG oslo_concurrency.lockutils [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.924 2 DEBUG oslo_concurrency.lockutils [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:23 np0005466031 nova_compute[235803]: 2025-10-02 13:25:23.994 2 DEBUG oslo_concurrency.processutils [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:25:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:25:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:24.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:25:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:25:24 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4070755414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:25:24 np0005466031 nova_compute[235803]: 2025-10-02 13:25:24.473 2 DEBUG oslo_concurrency.processutils [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:25:24 np0005466031 nova_compute[235803]: 2025-10-02 13:25:24.480 2 DEBUG nova.compute.provider_tree [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:25:24 np0005466031 nova_compute[235803]: 2025-10-02 13:25:24.498 2 DEBUG nova.scheduler.client.report [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:25:24 np0005466031 nova_compute[235803]: 2025-10-02 13:25:24.536 2 DEBUG oslo_concurrency.lockutils [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:24 np0005466031 nova_compute[235803]: 2025-10-02 13:25:24.579 2 INFO nova.scheduler.client.report [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Deleted allocations for instance 265d1e00-c92e-483b-8e0b-153ca45f2fa5#033[00m
Oct  2 09:25:24 np0005466031 nova_compute[235803]: 2025-10-02 13:25:24.643 2 DEBUG oslo_concurrency.lockutils [None req-41b1cea5-fbae-4fbb-80bd-bddaca48a35f 7eba544d42c8426295f2a88f0e85d446 ecfaf38d20784d06a43ced1560cede11 - - default default] Lock "265d1e00-c92e-483b-8e0b-153ca45f2fa5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:25.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:25 np0005466031 nova_compute[235803]: 2025-10-02 13:25:25.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:25.894 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:25.895 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:25:25.895 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:26.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e417 e417: 3 total, 3 up, 3 in
Oct  2 09:25:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e418 e418: 3 total, 3 up, 3 in
Oct  2 09:25:27 np0005466031 nova_compute[235803]: 2025-10-02 13:25:27.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:27.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:28.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:29.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:30.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:30 np0005466031 nova_compute[235803]: 2025-10-02 13:25:30.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:31.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:25:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:32.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:25:32 np0005466031 nova_compute[235803]: 2025-10-02 13:25:32.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:32 np0005466031 nova_compute[235803]: 2025-10-02 13:25:32.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:33.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:33 np0005466031 nova_compute[235803]: 2025-10-02 13:25:33.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:33 np0005466031 nova_compute[235803]: 2025-10-02 13:25:33.675 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:33 np0005466031 nova_compute[235803]: 2025-10-02 13:25:33.676 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:33 np0005466031 nova_compute[235803]: 2025-10-02 13:25:33.676 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:33 np0005466031 nova_compute[235803]: 2025-10-02 13:25:33.676 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:25:33 np0005466031 nova_compute[235803]: 2025-10-02 13:25:33.676 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:25:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:25:34 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1206261349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:25:34 np0005466031 nova_compute[235803]: 2025-10-02 13:25:34.134 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:25:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:25:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:34.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:25:34 np0005466031 nova_compute[235803]: 2025-10-02 13:25:34.300 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:25:34 np0005466031 nova_compute[235803]: 2025-10-02 13:25:34.301 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4083MB free_disk=20.97104263305664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:25:34 np0005466031 nova_compute[235803]: 2025-10-02 13:25:34.301 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:34 np0005466031 nova_compute[235803]: 2025-10-02 13:25:34.302 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:34 np0005466031 nova_compute[235803]: 2025-10-02 13:25:34.373 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:25:34 np0005466031 nova_compute[235803]: 2025-10-02 13:25:34.373 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:25:34 np0005466031 nova_compute[235803]: 2025-10-02 13:25:34.407 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:25:34 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:25:34 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3046713103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:25:34 np0005466031 nova_compute[235803]: 2025-10-02 13:25:34.839 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:25:34 np0005466031 nova_compute[235803]: 2025-10-02 13:25:34.845 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:25:34 np0005466031 nova_compute[235803]: 2025-10-02 13:25:34.868 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:25:34 np0005466031 nova_compute[235803]: 2025-10-02 13:25:34.898 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:25:34 np0005466031 nova_compute[235803]: 2025-10-02 13:25:34.899 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:35.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:35 np0005466031 nova_compute[235803]: 2025-10-02 13:25:35.605 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411520.604652, 265d1e00-c92e-483b-8e0b-153ca45f2fa5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:25:35 np0005466031 nova_compute[235803]: 2025-10-02 13:25:35.606 2 INFO nova.compute.manager [-] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:25:35 np0005466031 nova_compute[235803]: 2025-10-02 13:25:35.626 2 DEBUG nova.compute.manager [None req-607aa451-2d6c-4495-ab33-813368a85781 - - - - - -] [instance: 265d1e00-c92e-483b-8e0b-153ca45f2fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:25:35 np0005466031 nova_compute[235803]: 2025-10-02 13:25:35.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:36.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:36 np0005466031 nova_compute[235803]: 2025-10-02 13:25:36.899 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 e419: 3 total, 3 up, 3 in
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:36.931343) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411536931372, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1237, "num_deletes": 254, "total_data_size": 2505732, "memory_usage": 2548392, "flush_reason": "Manual Compaction"}
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411536941944, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 1085387, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82018, "largest_seqno": 83249, "table_properties": {"data_size": 1080869, "index_size": 2041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11767, "raw_average_key_size": 21, "raw_value_size": 1071177, "raw_average_value_size": 1933, "num_data_blocks": 90, "num_entries": 554, "num_filter_entries": 554, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411450, "oldest_key_time": 1759411450, "file_creation_time": 1759411536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 10648 microseconds, and 3618 cpu microseconds.
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:36.941989) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 1085387 bytes OK
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:36.942010) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:36.944111) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:36.944127) EVENT_LOG_v1 {"time_micros": 1759411536944122, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:36.944145) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 2499807, prev total WAL file size 2499807, number of live WAL files 2.
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:36.944980) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373538' seq:72057594037927935, type:22 .. '6D6772737461740033303039' seq:0, type:0; will stop at (end)
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(1059KB)], [168(13MB)]
Oct  2 09:25:36 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411536945014, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 14754132, "oldest_snapshot_seqno": -1}
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 10230 keys, 11465495 bytes, temperature: kUnknown
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411537025220, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 11465495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11401845, "index_size": 36977, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 270450, "raw_average_key_size": 26, "raw_value_size": 11225325, "raw_average_value_size": 1097, "num_data_blocks": 1398, "num_entries": 10230, "num_filter_entries": 10230, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759411536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:37.025492) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 11465495 bytes
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:37.027063) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.8 rd, 142.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(24.2) write-amplify(10.6) OK, records in: 10716, records dropped: 486 output_compression: NoCompression
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:37.027080) EVENT_LOG_v1 {"time_micros": 1759411537027072, "job": 108, "event": "compaction_finished", "compaction_time_micros": 80284, "compaction_time_cpu_micros": 30813, "output_level": 6, "num_output_files": 1, "total_output_size": 11465495, "num_input_records": 10716, "num_output_records": 10230, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411537027340, "job": 108, "event": "table_file_deletion", "file_number": 170}
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411537030120, "job": 108, "event": "table_file_deletion", "file_number": 168}
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:36.944903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:37.030176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:37.030182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:37.030184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:37.030186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:25:37 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:25:37.030188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:25:37 np0005466031 nova_compute[235803]: 2025-10-02 13:25:37.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:37 np0005466031 nova_compute[235803]: 2025-10-02 13:25:37.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:37 np0005466031 nova_compute[235803]: 2025-10-02 13:25:37.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:37.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:37 np0005466031 nova_compute[235803]: 2025-10-02 13:25:37.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:38.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:39.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:25:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:40.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:25:40 np0005466031 nova_compute[235803]: 2025-10-02 13:25:40.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:41.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:41 np0005466031 nova_compute[235803]: 2025-10-02 13:25:41.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:42 np0005466031 nova_compute[235803]: 2025-10-02 13:25:42.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:42.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:42 np0005466031 nova_compute[235803]: 2025-10-02 13:25:42.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:42 np0005466031 nova_compute[235803]: 2025-10-02 13:25:42.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:25:42 np0005466031 nova_compute[235803]: 2025-10-02 13:25:42.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:25:42 np0005466031 nova_compute[235803]: 2025-10-02 13:25:42.650 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:25:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:43.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:44.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:44 np0005466031 podman[336781]: 2025-10-02 13:25:44.632492078 +0000 UTC m=+0.056243040 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 09:25:44 np0005466031 podman[336782]: 2025-10-02 13:25:44.658515608 +0000 UTC m=+0.079827130 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:25:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:45.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:45 np0005466031 nova_compute[235803]: 2025-10-02 13:25:45.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:46.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:46 np0005466031 nova_compute[235803]: 2025-10-02 13:25:46.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:46 np0005466031 nova_compute[235803]: 2025-10-02 13:25:46.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:25:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:47 np0005466031 nova_compute[235803]: 2025-10-02 13:25:47.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:47.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:48.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:48 np0005466031 nova_compute[235803]: 2025-10-02 13:25:48.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:49.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:50.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:50 np0005466031 nova_compute[235803]: 2025-10-02 13:25:50.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:51.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:51 np0005466031 podman[336825]: 2025-10-02 13:25:51.632472747 +0000 UTC m=+0.064014894 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:25:51 np0005466031 podman[336826]: 2025-10-02 13:25:51.637833522 +0000 UTC m=+0.061722619 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:25:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:52 np0005466031 nova_compute[235803]: 2025-10-02 13:25:52.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:52.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:53.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:54.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:55.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:55 np0005466031 nova_compute[235803]: 2025-10-02 13:25:55.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:56.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:57 np0005466031 nova_compute[235803]: 2025-10-02 13:25:57.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:57.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:58.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:25:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:59.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:00.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:00 np0005466031 nova_compute[235803]: 2025-10-02 13:26:00.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:00.368 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:26:00 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:00.369 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:26:00 np0005466031 nova_compute[235803]: 2025-10-02 13:26:00.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:01.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:02 np0005466031 nova_compute[235803]: 2025-10-02 13:26:02.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:02.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:03.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:04.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:26:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1524008942' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:26:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:26:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1524008942' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:26:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:05.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:05 np0005466031 nova_compute[235803]: 2025-10-02 13:26:05.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:06.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:07 np0005466031 nova_compute[235803]: 2025-10-02 13:26:07.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:07.371 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:26:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:26:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:07.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:26:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:26:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:08.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:26:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:26:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 16K writes, 83K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1568 writes, 7979 keys, 1568 commit groups, 1.0 writes per commit group, ingest: 16.14 MB, 0.03 MB/s#012Interval WAL: 1568 writes, 1568 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     60.4      1.72              0.29        54    0.032       0      0       0.0       0.0#012  L6      1/0   10.93 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.3    109.6     93.8      5.86              1.52        53    0.111    401K    28K       0.0       0.0#012 Sum      1/0   10.93 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.3     84.8     86.2      7.58              1.81       107    0.071    401K    28K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2     58.8     58.5      1.68              0.25        14    0.120     73K   3634       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    109.6     93.8      5.86              1.52        53    0.111    401K    28K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     60.5      1.71              0.29        53    0.032       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.101, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.64 GB write, 0.11 MB/s write, 0.63 GB read, 0.11 MB/s read, 7.6 seconds#012Interval compaction: 0.10 GB write, 0.16 MB/s write, 0.10 GB read, 0.17 MB/s read, 1.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5570d4fad1f0#2 capacity: 304.00 MB usage: 68.56 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000404 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(3932,65.67 MB,21.6029%) FilterBlock(107,1.08 MB,0.3544%) IndexBlock(107,1.81 MB,0.595133%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 09:26:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:26:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:09.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:26:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:10.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:10 np0005466031 nova_compute[235803]: 2025-10-02 13:26:10.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:11.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:12 np0005466031 nova_compute[235803]: 2025-10-02 13:26:12.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:12 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:26:12 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:26:12 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:26:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:26:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:12.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:26:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:13.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:14.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:15.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:15 np0005466031 podman[337059]: 2025-10-02 13:26:15.621323943 +0000 UTC m=+0.051898375 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 09:26:15 np0005466031 nova_compute[235803]: 2025-10-02 13:26:15.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:15 np0005466031 podman[337060]: 2025-10-02 13:26:15.648251469 +0000 UTC m=+0.074618940 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 09:26:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:26:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:16.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:26:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:17 np0005466031 nova_compute[235803]: 2025-10-02 13:26:17.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:17.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:18.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:18 np0005466031 nova_compute[235803]: 2025-10-02 13:26:18.486 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:18 np0005466031 nova_compute[235803]: 2025-10-02 13:26:18.488 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:18 np0005466031 nova_compute[235803]: 2025-10-02 13:26:18.504 2 DEBUG nova.compute.manager [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:26:18 np0005466031 nova_compute[235803]: 2025-10-02 13:26:18.584 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:18 np0005466031 nova_compute[235803]: 2025-10-02 13:26:18.585 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:18 np0005466031 nova_compute[235803]: 2025-10-02 13:26:18.594 2 DEBUG nova.virt.hardware [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:26:18 np0005466031 nova_compute[235803]: 2025-10-02 13:26:18.594 2 INFO nova.compute.claims [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:26:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:26:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:26:18 np0005466031 nova_compute[235803]: 2025-10-02 13:26:18.699 2 DEBUG oslo_concurrency.processutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:26:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3344741174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.155 2 DEBUG oslo_concurrency.processutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.161 2 DEBUG nova.compute.provider_tree [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.178 2 DEBUG nova.scheduler.client.report [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.211 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.211 2 DEBUG nova.compute.manager [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.253 2 DEBUG nova.compute.manager [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.253 2 DEBUG nova.network.neutron [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.272 2 INFO nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.289 2 DEBUG nova.compute.manager [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.333 2 INFO nova.virt.block_device [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Booting with volume c3c03e41-d6c3-4680-bbf8-a098e1bc3da4 at /dev/vda#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.456 2 DEBUG nova.policy [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cb47684d0b34c729e9611e7b3943bed', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '18799a1c93354809911705bb424e673f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.481 2 DEBUG os_brick.utils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.482 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.492 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.493 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[35b27dc0-a465-448a-86c4-c5bc95cd5abc]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.494 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.501 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.501 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[fba7574b-a92f-4a9f-9699-749a1d0a9a88]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.502 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.509 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.510 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[78b69965-def3-4349-a368-423df739d939]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.511 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8f3731-42a6-4f31-bf20-721fd23a5ec6]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.512 2 DEBUG oslo_concurrency.processutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:19.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.542 2 DEBUG oslo_concurrency.processutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.545 2 DEBUG os_brick.initiator.connectors.lightos [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.545 2 DEBUG os_brick.initiator.connectors.lightos [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.546 2 DEBUG os_brick.initiator.connectors.lightos [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.546 2 DEBUG os_brick.utils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.546 2 DEBUG nova.virt.block_device [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Updating existing volume attachment record: fb876e18-20a0-43ce-8309-1f26bef31159 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:26:19 np0005466031 nova_compute[235803]: 2025-10-02 13:26:19.985 2 DEBUG nova.network.neutron [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Successfully created port: 1fe16599-d70e-45b4-b8f4-1d8eca231a5a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:26:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:26:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:20.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:26:20 np0005466031 nova_compute[235803]: 2025-10-02 13:26:20.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:20 np0005466031 nova_compute[235803]: 2025-10-02 13:26:20.697 2 DEBUG nova.compute.manager [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:26:20 np0005466031 nova_compute[235803]: 2025-10-02 13:26:20.698 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:26:20 np0005466031 nova_compute[235803]: 2025-10-02 13:26:20.698 2 INFO nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Creating image(s)#033[00m
Oct  2 09:26:20 np0005466031 nova_compute[235803]: 2025-10-02 13:26:20.699 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 09:26:20 np0005466031 nova_compute[235803]: 2025-10-02 13:26:20.699 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Ensure instance console log exists: /var/lib/nova/instances/3ff5ed77-0115-45a2-b09e-31c8fff6ac87/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:26:20 np0005466031 nova_compute[235803]: 2025-10-02 13:26:20.699 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:20 np0005466031 nova_compute[235803]: 2025-10-02 13:26:20.700 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:20 np0005466031 nova_compute[235803]: 2025-10-02 13:26:20.700 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:21 np0005466031 nova_compute[235803]: 2025-10-02 13:26:21.039 2 DEBUG nova.network.neutron [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Successfully updated port: 1fe16599-d70e-45b4-b8f4-1d8eca231a5a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:26:21 np0005466031 nova_compute[235803]: 2025-10-02 13:26:21.055 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "refresh_cache-3ff5ed77-0115-45a2-b09e-31c8fff6ac87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:26:21 np0005466031 nova_compute[235803]: 2025-10-02 13:26:21.055 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquired lock "refresh_cache-3ff5ed77-0115-45a2-b09e-31c8fff6ac87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:26:21 np0005466031 nova_compute[235803]: 2025-10-02 13:26:21.056 2 DEBUG nova.network.neutron [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:26:21 np0005466031 nova_compute[235803]: 2025-10-02 13:26:21.154 2 DEBUG nova.compute.manager [req-70561c92-f74d-4715-9f3f-eef4d00fc07e req-fd5108ec-92b7-470b-81d4-6835b01ef6b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Received event network-changed-1fe16599-d70e-45b4-b8f4-1d8eca231a5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:26:21 np0005466031 nova_compute[235803]: 2025-10-02 13:26:21.154 2 DEBUG nova.compute.manager [req-70561c92-f74d-4715-9f3f-eef4d00fc07e req-fd5108ec-92b7-470b-81d4-6835b01ef6b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Refreshing instance network info cache due to event network-changed-1fe16599-d70e-45b4-b8f4-1d8eca231a5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:26:21 np0005466031 nova_compute[235803]: 2025-10-02 13:26:21.154 2 DEBUG oslo_concurrency.lockutils [req-70561c92-f74d-4715-9f3f-eef4d00fc07e req-fd5108ec-92b7-470b-81d4-6835b01ef6b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3ff5ed77-0115-45a2-b09e-31c8fff6ac87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:26:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:21.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:21 np0005466031 nova_compute[235803]: 2025-10-02 13:26:21.851 2 DEBUG nova.network.neutron [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:26:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:22.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:22 np0005466031 podman[337235]: 2025-10-02 13:26:22.62535814 +0000 UTC m=+0.054368877 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 09:26:22 np0005466031 podman[337236]: 2025-10-02 13:26:22.647519698 +0000 UTC m=+0.072596842 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.890 2 DEBUG nova.network.neutron [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Updating instance_info_cache with network_info: [{"id": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "address": "fa:16:3e:a6:a8:ff", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe16599-d7", "ovs_interfaceid": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.909 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Releasing lock "refresh_cache-3ff5ed77-0115-45a2-b09e-31c8fff6ac87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.909 2 DEBUG nova.compute.manager [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Instance network_info: |[{"id": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "address": "fa:16:3e:a6:a8:ff", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe16599-d7", "ovs_interfaceid": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.910 2 DEBUG oslo_concurrency.lockutils [req-70561c92-f74d-4715-9f3f-eef4d00fc07e req-fd5108ec-92b7-470b-81d4-6835b01ef6b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3ff5ed77-0115-45a2-b09e-31c8fff6ac87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.911 2 DEBUG nova.network.neutron [req-70561c92-f74d-4715-9f3f-eef4d00fc07e req-fd5108ec-92b7-470b-81d4-6835b01ef6b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Refreshing network info cache for port 1fe16599-d70e-45b4-b8f4-1d8eca231a5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.914 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Start _get_guest_xml network_info=[{"id": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "address": "fa:16:3e:a6:a8:ff", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe16599-d7", "ovs_interfaceid": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c3c03e41-d6c3-4680-bbf8-a098e1bc3da4', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c3c03e41-d6c3-4680-bbf8-a098e1bc3da4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '3ff5ed77-0115-45a2-b09e-31c8fff6ac87', 'attached_at': '', 'detached_at': '', 'volume_id': 'c3c03e41-d6c3-4680-bbf8-a098e1bc3da4', 'serial': 'c3c03e41-d6c3-4680-bbf8-a098e1bc3da4'}, 'attachment_id': 'fb876e18-20a0-43ce-8309-1f26bef31159', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.921 2 WARNING nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.929 2 DEBUG nova.virt.libvirt.host [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.930 2 DEBUG nova.virt.libvirt.host [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.933 2 DEBUG nova.virt.libvirt.host [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.934 2 DEBUG nova.virt.libvirt.host [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.935 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.935 2 DEBUG nova.virt.hardware [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.936 2 DEBUG nova.virt.hardware [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.936 2 DEBUG nova.virt.hardware [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.936 2 DEBUG nova.virt.hardware [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.936 2 DEBUG nova.virt.hardware [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.936 2 DEBUG nova.virt.hardware [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.937 2 DEBUG nova.virt.hardware [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.937 2 DEBUG nova.virt.hardware [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.937 2 DEBUG nova.virt.hardware [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.937 2 DEBUG nova.virt.hardware [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.937 2 DEBUG nova.virt.hardware [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.966 2 DEBUG nova.storage.rbd_utils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 3ff5ed77-0115-45a2-b09e-31c8fff6ac87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:26:22 np0005466031 nova_compute[235803]: 2025-10-02 13:26:22.970 2 DEBUG oslo_concurrency.processutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:26:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/977123340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.397 2 DEBUG oslo_concurrency.processutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.501 2 DEBUG os_brick.encryptors [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Using volume encryption metadata '{'encryption_key_id': '5df1a5a9-8b62-44cc-bef4-5aec82b14056', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c3c03e41-d6c3-4680-bbf8-a098e1bc3da4', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c3c03e41-d6c3-4680-bbf8-a098e1bc3da4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '3ff5ed77-0115-45a2-b09e-31c8fff6ac87', 'attached_at': '', 'detached_at': '', 'volume_id': 'c3c03e41-d6c3-4680-bbf8-a098e1bc3da4', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.504 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.526 2 DEBUG barbicanclient.v1.secrets [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.526 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:23.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.552 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.553 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.572 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.572 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.593 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.594 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.613 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.614 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.632 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.632 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.662 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.663 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.686 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.687 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.714 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.715 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.761 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.762 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.799 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.800 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.819 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.820 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.841 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.842 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.870 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.871 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.890 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.890 2 INFO barbicanclient.base [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Calculated Secrets uuid ref: secrets/5df1a5a9-8b62-44cc-bef4-5aec82b14056#033[00m
Oct  2 09:26:23 np0005466031 ovn_controller[132413]: 2025-10-02T13:26:23Z|00862|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.909 2 DEBUG barbicanclient.client [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.910 2 DEBUG nova.virt.libvirt.host [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct  2 09:26:23 np0005466031 nova_compute[235803]:  <usage type="volume">
Oct  2 09:26:23 np0005466031 nova_compute[235803]:    <volume>c3c03e41-d6c3-4680-bbf8-a098e1bc3da4</volume>
Oct  2 09:26:23 np0005466031 nova_compute[235803]:  </usage>
Oct  2 09:26:23 np0005466031 nova_compute[235803]: </secret>
Oct  2 09:26:23 np0005466031 nova_compute[235803]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.980 2 DEBUG nova.virt.libvirt.vif [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1932492560',display_name='tempest-TestVolumeBootPattern-server-1932492560',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1932492560',id=218,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-7vmlnqza',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:26:19Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=3ff5ed77-0115-45a2-b09e-31c8fff6ac87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "address": "fa:16:3e:a6:a8:ff", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe16599-d7", "ovs_interfaceid": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.981 2 DEBUG nova.network.os_vif_util [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "address": "fa:16:3e:a6:a8:ff", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe16599-d7", "ovs_interfaceid": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.982 2 DEBUG nova.network.os_vif_util [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a8:ff,bridge_name='br-int',has_traffic_filtering=True,id=1fe16599-d70e-45b4-b8f4-1d8eca231a5a,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe16599-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:26:23 np0005466031 nova_compute[235803]: 2025-10-02 13:26:23.984 2 DEBUG nova.objects.instance [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ff5ed77-0115-45a2-b09e-31c8fff6ac87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.009 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  <uuid>3ff5ed77-0115-45a2-b09e-31c8fff6ac87</uuid>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  <name>instance-000000da</name>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestVolumeBootPattern-server-1932492560</nova:name>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:26:22</nova:creationTime>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <nova:user uuid="2cb47684d0b34c729e9611e7b3943bed">tempest-TestVolumeBootPattern-1344814684-project-member</nova:user>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <nova:project uuid="18799a1c93354809911705bb424e673f">tempest-TestVolumeBootPattern-1344814684</nova:project>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <nova:port uuid="1fe16599-d70e-45b4-b8f4-1d8eca231a5a">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <entry name="serial">3ff5ed77-0115-45a2-b09e-31c8fff6ac87</entry>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <entry name="uuid">3ff5ed77-0115-45a2-b09e-31c8fff6ac87</entry>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/3ff5ed77-0115-45a2-b09e-31c8fff6ac87_disk.config">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-c3c03e41-d6c3-4680-bbf8-a098e1bc3da4">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <serial>c3c03e41-d6c3-4680-bbf8-a098e1bc3da4</serial>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <encryption format="luks">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:        <secret type="passphrase" uuid="7c59972e-5bba-4098-b749-6dc8d1c14832"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      </encryption>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:a6:a8:ff"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <target dev="tap1fe16599-d7"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/3ff5ed77-0115-45a2-b09e-31c8fff6ac87/console.log" append="off"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:26:24 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:26:24 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:26:24 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:26:24 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.012 2 DEBUG nova.compute.manager [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Preparing to wait for external event network-vif-plugged-1fe16599-d70e-45b4-b8f4-1d8eca231a5a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.013 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.014 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.014 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.015 2 DEBUG nova.virt.libvirt.vif [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1932492560',display_name='tempest-TestVolumeBootPattern-server-1932492560',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1932492560',id=218,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-7vmlnqza',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:26:19Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=3ff5ed77-0115-45a2-b09e-31c8fff6ac87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "address": "fa:16:3e:a6:a8:ff", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe16599-d7", "ovs_interfaceid": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.015 2 DEBUG nova.network.os_vif_util [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "address": "fa:16:3e:a6:a8:ff", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe16599-d7", "ovs_interfaceid": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.016 2 DEBUG nova.network.os_vif_util [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a8:ff,bridge_name='br-int',has_traffic_filtering=True,id=1fe16599-d70e-45b4-b8f4-1d8eca231a5a,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe16599-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.016 2 DEBUG os_vif [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a8:ff,bridge_name='br-int',has_traffic_filtering=True,id=1fe16599-d70e-45b4-b8f4-1d8eca231a5a,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe16599-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.021 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1fe16599-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.021 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1fe16599-d7, col_values=(('external_ids', {'iface-id': '1fe16599-d70e-45b4-b8f4-1d8eca231a5a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:a8:ff', 'vm-uuid': '3ff5ed77-0115-45a2-b09e-31c8fff6ac87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:24 np0005466031 NetworkManager[44907]: <info>  [1759411584.0250] manager: (tap1fe16599-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.035 2 INFO os_vif [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a8:ff,bridge_name='br-int',has_traffic_filtering=True,id=1fe16599-d70e-45b4-b8f4-1d8eca231a5a,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe16599-d7')#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.083 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.084 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.084 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No VIF found with MAC fa:16:3e:a6:a8:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.084 2 INFO nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Using config drive#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.110 2 DEBUG nova.storage.rbd_utils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 3ff5ed77-0115-45a2-b09e-31c8fff6ac87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:26:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:24.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.636 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.636 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.637 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.637 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.637 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.638 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.672 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.672 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.673 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] 3ff5ed77-0115-45a2-b09e-31c8fff6ac87 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.673 2 WARNING nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.673 2 WARNING nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.673 2 WARNING nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.674 2 INFO nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Removable base files: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829 /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.674 2 INFO nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.674 2 INFO nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/504ddca37fa682fe676ed6ed2451bc9473b1f829#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.674 2 INFO nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.675 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.675 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.675 2 DEBUG nova.virt.libvirt.imagecache [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.873 2 DEBUG nova.network.neutron [req-70561c92-f74d-4715-9f3f-eef4d00fc07e req-fd5108ec-92b7-470b-81d4-6835b01ef6b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Updated VIF entry in instance network info cache for port 1fe16599-d70e-45b4-b8f4-1d8eca231a5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.874 2 DEBUG nova.network.neutron [req-70561c92-f74d-4715-9f3f-eef4d00fc07e req-fd5108ec-92b7-470b-81d4-6835b01ef6b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Updating instance_info_cache with network_info: [{"id": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "address": "fa:16:3e:a6:a8:ff", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe16599-d7", "ovs_interfaceid": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:26:24 np0005466031 nova_compute[235803]: 2025-10-02 13:26:24.889 2 DEBUG oslo_concurrency.lockutils [req-70561c92-f74d-4715-9f3f-eef4d00fc07e req-fd5108ec-92b7-470b-81d4-6835b01ef6b5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3ff5ed77-0115-45a2-b09e-31c8fff6ac87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.021 2 INFO nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Creating config drive at /var/lib/nova/instances/3ff5ed77-0115-45a2-b09e-31c8fff6ac87/disk.config#033[00m
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.026 2 DEBUG oslo_concurrency.processutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ff5ed77-0115-45a2-b09e-31c8fff6ac87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdwjzaid8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.166 2 DEBUG oslo_concurrency.processutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ff5ed77-0115-45a2-b09e-31c8fff6ac87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdwjzaid8" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.197 2 DEBUG nova.storage.rbd_utils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 3ff5ed77-0115-45a2-b09e-31c8fff6ac87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.201 2 DEBUG oslo_concurrency.processutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ff5ed77-0115-45a2-b09e-31c8fff6ac87/disk.config 3ff5ed77-0115-45a2-b09e-31c8fff6ac87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.466 2 DEBUG oslo_concurrency.processutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ff5ed77-0115-45a2-b09e-31c8fff6ac87/disk.config 3ff5ed77-0115-45a2-b09e-31c8fff6ac87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.467 2 INFO nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Deleting local config drive /var/lib/nova/instances/3ff5ed77-0115-45a2-b09e-31c8fff6ac87/disk.config because it was imported into RBD.#033[00m
Oct  2 09:26:25 np0005466031 kernel: tap1fe16599-d7: entered promiscuous mode
Oct  2 09:26:25 np0005466031 NetworkManager[44907]: <info>  [1759411585.5254] manager: (tap1fe16599-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/390)
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:25 np0005466031 ovn_controller[132413]: 2025-10-02T13:26:25Z|00863|binding|INFO|Claiming lport 1fe16599-d70e-45b4-b8f4-1d8eca231a5a for this chassis.
Oct  2 09:26:25 np0005466031 ovn_controller[132413]: 2025-10-02T13:26:25Z|00864|binding|INFO|1fe16599-d70e-45b4-b8f4-1d8eca231a5a: Claiming fa:16:3e:a6:a8:ff 10.100.0.3
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:25.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.549 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:a8:ff 10.100.0.3'], port_security=['fa:16:3e:a6:a8:ff 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3ff5ed77-0115-45a2-b09e-31c8fff6ac87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9e3c9d0c-4774-4653-b077-2f10893accdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=1fe16599-d70e-45b4-b8f4-1d8eca231a5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.551 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 1fe16599-d70e-45b4-b8f4-1d8eca231a5a in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 bound to our chassis#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.552 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5#033[00m
Oct  2 09:26:25 np0005466031 systemd-udevd[337390]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:26:25 np0005466031 systemd-machined[192227]: New machine qemu-99-instance-000000da.
Oct  2 09:26:25 np0005466031 NetworkManager[44907]: <info>  [1759411585.5675] device (tap1fe16599-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:26:25 np0005466031 NetworkManager[44907]: <info>  [1759411585.5688] device (tap1fe16599-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.569 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[114433b1-ac46-4fc4-8718-f6315acc3869]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.571 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap858f2b6f-81 in ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.573 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap858f2b6f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.573 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[17cd4202-b451-46b5-80aa-bf746596a21b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.574 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bb85003c-7404-471a-b000-56b2b9ee2481]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 systemd[1]: Started Virtual Machine qemu-99-instance-000000da.
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.585 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[fa634b92-06be-4300-9501-bffb938a3b45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:25 np0005466031 ovn_controller[132413]: 2025-10-02T13:26:25Z|00865|binding|INFO|Setting lport 1fe16599-d70e-45b4-b8f4-1d8eca231a5a ovn-installed in OVS
Oct  2 09:26:25 np0005466031 ovn_controller[132413]: 2025-10-02T13:26:25Z|00866|binding|INFO|Setting lport 1fe16599-d70e-45b4-b8f4-1d8eca231a5a up in Southbound
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.601 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[078f7994-792c-44c8-b85c-c3fdfcaf9c8f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.629 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[4759e4ad-cda2-4b55-b758-99bcf3a643bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 NetworkManager[44907]: <info>  [1759411585.6357] manager: (tap858f2b6f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/391)
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.635 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a423ffc6-a49a-4c05-a727-67e46e95bd0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.670 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[8721d665-ad19-4247-9998-aec0293ed27f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.673 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[6bae43cb-856e-4a33-8c84-d993d5a63e6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 NetworkManager[44907]: <info>  [1759411585.7006] device (tap858f2b6f-80): carrier: link connected
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.706 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[da3f2281-e8a4-4a90-b4e7-572667f8b526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.729 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0ba69c-6daa-484f-b24a-3e9f6221020f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 944129, 'reachable_time': 33675, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337423, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.746 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d872366e-9b4d-4a6b-9411-46d4c064043e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:29ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 944129, 'tstamp': 944129}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337424, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.764 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[43e4a7dd-fc74-4c87-ba35-e68586da2bdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 944129, 'reachable_time': 33675, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 337425, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.795 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c83eb56a-ecef-4305-b0ec-446103cfb334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.855 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e83de1-9621-49e6-99f5-c5854fd0547a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.857 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.857 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.857 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap858f2b6f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:25 np0005466031 NetworkManager[44907]: <info>  [1759411585.8597] manager: (tap858f2b6f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Oct  2 09:26:25 np0005466031 kernel: tap858f2b6f-80: entered promiscuous mode
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.861 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap858f2b6f-80, col_values=(('external_ids', {'iface-id': 'cd468d5a-0c73-498a-8776-3dc2ab63d9cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:25 np0005466031 ovn_controller[132413]: 2025-10-02T13:26:25Z|00867|binding|INFO|Releasing lport cd468d5a-0c73-498a-8776-3dc2ab63d9cf from this chassis (sb_readonly=0)
Oct  2 09:26:25 np0005466031 nova_compute[235803]: 2025-10-02 13:26:25.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.878 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.879 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[68a3690e-6060-4a66-8812-d053941c376d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.880 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.881 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'env', 'PROCESS_TAG=haproxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.895 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.896 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:25.896 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:26 np0005466031 nova_compute[235803]: 2025-10-02 13:26:26.053 2 DEBUG nova.compute.manager [req-178d546a-eb5a-4765-b9aa-75d532419b1b req-7436858a-d4ce-4f29-94a5-50cf8f4b53e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Received event network-vif-plugged-1fe16599-d70e-45b4-b8f4-1d8eca231a5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:26:26 np0005466031 nova_compute[235803]: 2025-10-02 13:26:26.054 2 DEBUG oslo_concurrency.lockutils [req-178d546a-eb5a-4765-b9aa-75d532419b1b req-7436858a-d4ce-4f29-94a5-50cf8f4b53e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:26 np0005466031 nova_compute[235803]: 2025-10-02 13:26:26.055 2 DEBUG oslo_concurrency.lockutils [req-178d546a-eb5a-4765-b9aa-75d532419b1b req-7436858a-d4ce-4f29-94a5-50cf8f4b53e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:26 np0005466031 nova_compute[235803]: 2025-10-02 13:26:26.055 2 DEBUG oslo_concurrency.lockutils [req-178d546a-eb5a-4765-b9aa-75d532419b1b req-7436858a-d4ce-4f29-94a5-50cf8f4b53e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:26 np0005466031 nova_compute[235803]: 2025-10-02 13:26:26.055 2 DEBUG nova.compute.manager [req-178d546a-eb5a-4765-b9aa-75d532419b1b req-7436858a-d4ce-4f29-94a5-50cf8f4b53e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Processing event network-vif-plugged-1fe16599-d70e-45b4-b8f4-1d8eca231a5a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:26:26 np0005466031 podman[337494]: 2025-10-02 13:26:26.252240744 +0000 UTC m=+0.058065333 container create fc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:26:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:26.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:26 np0005466031 systemd[1]: Started libpod-conmon-fc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984.scope.
Oct  2 09:26:26 np0005466031 podman[337494]: 2025-10-02 13:26:26.225657728 +0000 UTC m=+0.031482307 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:26:26 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:26:26 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c2b9e7e0701c3f2937c6268d1f4577adc38808e22f7d3980a2d545dbb9a1a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:26:26 np0005466031 podman[337494]: 2025-10-02 13:26:26.332815444 +0000 UTC m=+0.138640003 container init fc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:26:26 np0005466031 podman[337494]: 2025-10-02 13:26:26.338435136 +0000 UTC m=+0.144259685 container start fc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:26:26 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[337509]: [NOTICE]   (337513) : New worker (337515) forked
Oct  2 09:26:26 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[337509]: [NOTICE]   (337513) : Loading success.
Oct  2 09:26:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:27 np0005466031 nova_compute[235803]: 2025-10-02 13:26:27.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:27.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.167 2 DEBUG nova.compute.manager [req-357b72fa-cbfb-4ff0-8886-85e2f92cf1f6 req-16242805-363d-460e-93e8-9690d4eb0768 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Received event network-vif-plugged-1fe16599-d70e-45b4-b8f4-1d8eca231a5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.168 2 DEBUG oslo_concurrency.lockutils [req-357b72fa-cbfb-4ff0-8886-85e2f92cf1f6 req-16242805-363d-460e-93e8-9690d4eb0768 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.168 2 DEBUG oslo_concurrency.lockutils [req-357b72fa-cbfb-4ff0-8886-85e2f92cf1f6 req-16242805-363d-460e-93e8-9690d4eb0768 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.168 2 DEBUG oslo_concurrency.lockutils [req-357b72fa-cbfb-4ff0-8886-85e2f92cf1f6 req-16242805-363d-460e-93e8-9690d4eb0768 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.169 2 DEBUG nova.compute.manager [req-357b72fa-cbfb-4ff0-8886-85e2f92cf1f6 req-16242805-363d-460e-93e8-9690d4eb0768 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] No waiting events found dispatching network-vif-plugged-1fe16599-d70e-45b4-b8f4-1d8eca231a5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.169 2 WARNING nova.compute.manager [req-357b72fa-cbfb-4ff0-8886-85e2f92cf1f6 req-16242805-363d-460e-93e8-9690d4eb0768 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Received unexpected event network-vif-plugged-1fe16599-d70e-45b4-b8f4-1d8eca231a5a for instance with vm_state building and task_state spawning.#033[00m
Oct  2 09:26:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:28.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.871 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411588.8700657, 3ff5ed77-0115-45a2-b09e-31c8fff6ac87 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.872 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] VM Started (Lifecycle Event)#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.874 2 DEBUG nova.compute.manager [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.877 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.881 2 INFO nova.virt.libvirt.driver [-] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Instance spawned successfully.#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.881 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.898 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.901 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.908 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.909 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.909 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.910 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.910 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.911 2 DEBUG nova.virt.libvirt.driver [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.938 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.939 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411588.8708665, 3ff5ed77-0115-45a2-b09e-31c8fff6ac87 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.939 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.973 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.976 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411588.8767493, 3ff5ed77-0115-45a2-b09e-31c8fff6ac87 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:26:28 np0005466031 nova_compute[235803]: 2025-10-02 13:26:28.976 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:26:29 np0005466031 nova_compute[235803]: 2025-10-02 13:26:29.019 2 INFO nova.compute.manager [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Took 8.32 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:26:29 np0005466031 nova_compute[235803]: 2025-10-02 13:26:29.019 2 DEBUG nova.compute.manager [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:26:29 np0005466031 nova_compute[235803]: 2025-10-02 13:26:29.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:29 np0005466031 nova_compute[235803]: 2025-10-02 13:26:29.028 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:26:29 np0005466031 nova_compute[235803]: 2025-10-02 13:26:29.031 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:26:29 np0005466031 nova_compute[235803]: 2025-10-02 13:26:29.056 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:26:29 np0005466031 nova_compute[235803]: 2025-10-02 13:26:29.090 2 INFO nova.compute.manager [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Took 10.54 seconds to build instance.#033[00m
Oct  2 09:26:29 np0005466031 nova_compute[235803]: 2025-10-02 13:26:29.108 2 DEBUG oslo_concurrency.lockutils [None req-2f307ff0-90b7-4dab-b7a1-e05ba4139795 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:29.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:30.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.430 2 DEBUG oslo_concurrency.lockutils [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.430 2 DEBUG oslo_concurrency.lockutils [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.430 2 DEBUG oslo_concurrency.lockutils [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.431 2 DEBUG oslo_concurrency.lockutils [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.431 2 DEBUG oslo_concurrency.lockutils [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.432 2 INFO nova.compute.manager [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Terminating instance#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.433 2 DEBUG nova.compute.manager [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:26:30 np0005466031 kernel: tap1fe16599-d7 (unregistering): left promiscuous mode
Oct  2 09:26:30 np0005466031 NetworkManager[44907]: <info>  [1759411590.5310] device (tap1fe16599-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:26:30 np0005466031 ovn_controller[132413]: 2025-10-02T13:26:30Z|00868|binding|INFO|Releasing lport 1fe16599-d70e-45b4-b8f4-1d8eca231a5a from this chassis (sb_readonly=0)
Oct  2 09:26:30 np0005466031 ovn_controller[132413]: 2025-10-02T13:26:30Z|00869|binding|INFO|Setting lport 1fe16599-d70e-45b4-b8f4-1d8eca231a5a down in Southbound
Oct  2 09:26:30 np0005466031 ovn_controller[132413]: 2025-10-02T13:26:30Z|00870|binding|INFO|Removing iface tap1fe16599-d7 ovn-installed in OVS
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.549 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:a8:ff 10.100.0.3'], port_security=['fa:16:3e:a6:a8:ff 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3ff5ed77-0115-45a2-b09e-31c8fff6ac87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9e3c9d0c-4774-4653-b077-2f10893accdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=1fe16599-d70e-45b4-b8f4-1d8eca231a5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.550 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 1fe16599-d70e-45b4-b8f4-1d8eca231a5a in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 unbound from our chassis#033[00m
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.551 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.552 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1d628f20-9053-46ad-8d2f-dedef1385a6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.552 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace which is not needed anymore#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:30 np0005466031 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000da.scope: Deactivated successfully.
Oct  2 09:26:30 np0005466031 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000da.scope: Consumed 3.622s CPU time.
Oct  2 09:26:30 np0005466031 systemd-machined[192227]: Machine qemu-99-instance-000000da terminated.
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.673 2 INFO nova.virt.libvirt.driver [-] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Instance destroyed successfully.#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.673 2 DEBUG nova.objects.instance [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'resources' on Instance uuid 3ff5ed77-0115-45a2-b09e-31c8fff6ac87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.685 2 DEBUG nova.virt.libvirt.vif [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1932492560',display_name='tempest-TestVolumeBootPattern-server-1932492560',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1932492560',id=218,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:26:29Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-7vmlnqza',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:26:29Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=3ff5ed77-0115-45a2-b09e-31c8fff6ac87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "address": "fa:16:3e:a6:a8:ff", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe16599-d7", "ovs_interfaceid": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.685 2 DEBUG nova.network.os_vif_util [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "address": "fa:16:3e:a6:a8:ff", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe16599-d7", "ovs_interfaceid": "1fe16599-d70e-45b4-b8f4-1d8eca231a5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.686 2 DEBUG nova.network.os_vif_util [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a8:ff,bridge_name='br-int',has_traffic_filtering=True,id=1fe16599-d70e-45b4-b8f4-1d8eca231a5a,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe16599-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.687 2 DEBUG os_vif [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a8:ff,bridge_name='br-int',has_traffic_filtering=True,id=1fe16599-d70e-45b4-b8f4-1d8eca231a5a,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe16599-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.688 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1fe16599-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:26:30 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[337509]: [NOTICE]   (337513) : haproxy version is 2.8.14-c23fe91
Oct  2 09:26:30 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[337509]: [NOTICE]   (337513) : path to executable is /usr/sbin/haproxy
Oct  2 09:26:30 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[337509]: [WARNING]  (337513) : Exiting Master process...
Oct  2 09:26:30 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[337509]: [WARNING]  (337513) : Exiting Master process...
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:26:30 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[337509]: [ALERT]    (337513) : Current worker (337515) exited with code 143 (Terminated)
Oct  2 09:26:30 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[337509]: [WARNING]  (337513) : All workers exited. Exiting... (0)
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.694 2 INFO os_vif [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a8:ff,bridge_name='br-int',has_traffic_filtering=True,id=1fe16599-d70e-45b4-b8f4-1d8eca231a5a,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe16599-d7')#033[00m
Oct  2 09:26:30 np0005466031 systemd[1]: libpod-fc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984.scope: Deactivated successfully.
Oct  2 09:26:30 np0005466031 podman[337556]: 2025-10-02 13:26:30.702421195 +0000 UTC m=+0.054378927 container died fc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:26:30 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984-userdata-shm.mount: Deactivated successfully.
Oct  2 09:26:30 np0005466031 systemd[1]: var-lib-containers-storage-overlay-b2c2b9e7e0701c3f2937c6268d1f4577adc38808e22f7d3980a2d545dbb9a1a6-merged.mount: Deactivated successfully.
Oct  2 09:26:30 np0005466031 podman[337556]: 2025-10-02 13:26:30.738286588 +0000 UTC m=+0.090244330 container cleanup fc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:26:30 np0005466031 systemd[1]: libpod-conmon-fc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984.scope: Deactivated successfully.
Oct  2 09:26:30 np0005466031 podman[337614]: 2025-10-02 13:26:30.804066422 +0000 UTC m=+0.043556125 container remove fc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.810 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ccdce8d1-a11a-4fd1-bdcc-14859680cda9]: (4, ('Thu Oct  2 01:26:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (fc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984)\nfc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984\nThu Oct  2 01:26:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (fc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984)\nfc4451805962b803e2f038be0bd6420e645aa7a0dc7a1f10e67adfe266860984\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.811 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0fbc417e-3399-433b-a64b-2d158a60a5ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.812 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:26:30 np0005466031 kernel: tap858f2b6f-80: left promiscuous mode
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.831 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[28d3efe1-5df6-4e81-8a15-60cd5f2ff876]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.862 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[abe11e7b-7149-4001-a1df-968a5b4434cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.864 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad515c1-3aa2-46b3-ba31-0a8e5f19f89a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.884 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c4f378-b7a2-4149-847f-7bc96ef606e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 944121, 'reachable_time': 27820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337629, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.888 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:26:30 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:30.888 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce2df68-d3e5-405f-9ec0-30696c95fe22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:26:30 np0005466031 systemd[1]: run-netns-ovnmeta\x2d858f2b6f\x2d8fe4\x2d471b\x2d981e\x2d5d0b08d2f4c5.mount: Deactivated successfully.
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.945 2 INFO nova.virt.libvirt.driver [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Deleting instance files /var/lib/nova/instances/3ff5ed77-0115-45a2-b09e-31c8fff6ac87_del#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.946 2 INFO nova.virt.libvirt.driver [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Deletion of /var/lib/nova/instances/3ff5ed77-0115-45a2-b09e-31c8fff6ac87_del complete#033[00m
Oct  2 09:26:30 np0005466031 nova_compute[235803]: 2025-10-02 13:26:30.999 2 DEBUG nova.compute.manager [req-78f25d8a-da0e-43c3-8fb1-f95f3b37d83d req-d6d7187e-dd07-4893-b1cf-3fb4b53a7c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Received event network-vif-unplugged-1fe16599-d70e-45b4-b8f4-1d8eca231a5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.000 2 DEBUG oslo_concurrency.lockutils [req-78f25d8a-da0e-43c3-8fb1-f95f3b37d83d req-d6d7187e-dd07-4893-b1cf-3fb4b53a7c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.000 2 DEBUG oslo_concurrency.lockutils [req-78f25d8a-da0e-43c3-8fb1-f95f3b37d83d req-d6d7187e-dd07-4893-b1cf-3fb4b53a7c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.000 2 DEBUG oslo_concurrency.lockutils [req-78f25d8a-da0e-43c3-8fb1-f95f3b37d83d req-d6d7187e-dd07-4893-b1cf-3fb4b53a7c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.000 2 DEBUG nova.compute.manager [req-78f25d8a-da0e-43c3-8fb1-f95f3b37d83d req-d6d7187e-dd07-4893-b1cf-3fb4b53a7c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] No waiting events found dispatching network-vif-unplugged-1fe16599-d70e-45b4-b8f4-1d8eca231a5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.001 2 DEBUG nova.compute.manager [req-78f25d8a-da0e-43c3-8fb1-f95f3b37d83d req-d6d7187e-dd07-4893-b1cf-3fb4b53a7c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Received event network-vif-unplugged-1fe16599-d70e-45b4-b8f4-1d8eca231a5a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.007 2 INFO nova.compute.manager [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.008 2 DEBUG oslo.service.loopingcall [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.009 2 DEBUG nova.compute.manager [-] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.009 2 DEBUG nova.network.neutron [-] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.527 2 DEBUG nova.network.neutron [-] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.541 2 INFO nova.compute.manager [-] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Took 0.53 seconds to deallocate network for instance.#033[00m
Oct  2 09:26:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.606 2 DEBUG nova.compute.manager [req-b15393bc-03e0-4fd7-9d9d-21308d4b50e1 req-9663ac1a-96e5-46e1-ab92-1728b710d327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Received event network-vif-deleted-1fe16599-d70e-45b4-b8f4-1d8eca231a5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.698 2 INFO nova.compute.manager [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Took 0.16 seconds to detach 1 volumes for instance.#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.756 2 DEBUG oslo_concurrency.lockutils [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.756 2 DEBUG oslo_concurrency.lockutils [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:31 np0005466031 nova_compute[235803]: 2025-10-02 13:26:31.809 2 DEBUG oslo_concurrency.processutils [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:26:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2893176451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:26:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:26:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:32.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:26:32 np0005466031 nova_compute[235803]: 2025-10-02 13:26:32.299 2 DEBUG oslo_concurrency.processutils [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:32 np0005466031 nova_compute[235803]: 2025-10-02 13:26:32.308 2 DEBUG nova.compute.provider_tree [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:26:32 np0005466031 nova_compute[235803]: 2025-10-02 13:26:32.330 2 DEBUG nova.scheduler.client.report [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:26:32 np0005466031 nova_compute[235803]: 2025-10-02 13:26:32.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:32 np0005466031 nova_compute[235803]: 2025-10-02 13:26:32.350 2 DEBUG oslo_concurrency.lockutils [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:32 np0005466031 nova_compute[235803]: 2025-10-02 13:26:32.379 2 INFO nova.scheduler.client.report [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Deleted allocations for instance 3ff5ed77-0115-45a2-b09e-31c8fff6ac87#033[00m
Oct  2 09:26:32 np0005466031 nova_compute[235803]: 2025-10-02 13:26:32.452 2 DEBUG oslo_concurrency.lockutils [None req-ce45e08d-6ec2-4900-9f29-7029499eb8b6 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:33 np0005466031 nova_compute[235803]: 2025-10-02 13:26:33.099 2 DEBUG nova.compute.manager [req-d83af602-ec0a-43be-ace5-2e50d948ba9a req-f2a368ab-ff9a-4311-bf30-849d92e15685 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Received event network-vif-plugged-1fe16599-d70e-45b4-b8f4-1d8eca231a5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:26:33 np0005466031 nova_compute[235803]: 2025-10-02 13:26:33.099 2 DEBUG oslo_concurrency.lockutils [req-d83af602-ec0a-43be-ace5-2e50d948ba9a req-f2a368ab-ff9a-4311-bf30-849d92e15685 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:33 np0005466031 nova_compute[235803]: 2025-10-02 13:26:33.100 2 DEBUG oslo_concurrency.lockutils [req-d83af602-ec0a-43be-ace5-2e50d948ba9a req-f2a368ab-ff9a-4311-bf30-849d92e15685 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:33 np0005466031 nova_compute[235803]: 2025-10-02 13:26:33.100 2 DEBUG oslo_concurrency.lockutils [req-d83af602-ec0a-43be-ace5-2e50d948ba9a req-f2a368ab-ff9a-4311-bf30-849d92e15685 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3ff5ed77-0115-45a2-b09e-31c8fff6ac87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:33 np0005466031 nova_compute[235803]: 2025-10-02 13:26:33.101 2 DEBUG nova.compute.manager [req-d83af602-ec0a-43be-ace5-2e50d948ba9a req-f2a368ab-ff9a-4311-bf30-849d92e15685 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] No waiting events found dispatching network-vif-plugged-1fe16599-d70e-45b4-b8f4-1d8eca231a5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:26:33 np0005466031 nova_compute[235803]: 2025-10-02 13:26:33.101 2 WARNING nova.compute.manager [req-d83af602-ec0a-43be-ace5-2e50d948ba9a req-f2a368ab-ff9a-4311-bf30-849d92e15685 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Received unexpected event network-vif-plugged-1fe16599-d70e-45b4-b8f4-1d8eca231a5a for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:26:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:33.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:33 np0005466031 nova_compute[235803]: 2025-10-02 13:26:33.671 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:34.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:35.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:35 np0005466031 nova_compute[235803]: 2025-10-02 13:26:35.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:35 np0005466031 nova_compute[235803]: 2025-10-02 13:26:35.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:35 np0005466031 nova_compute[235803]: 2025-10-02 13:26:35.684 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:35 np0005466031 nova_compute[235803]: 2025-10-02 13:26:35.684 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:35 np0005466031 nova_compute[235803]: 2025-10-02 13:26:35.684 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:35 np0005466031 nova_compute[235803]: 2025-10-02 13:26:35.684 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:26:35 np0005466031 nova_compute[235803]: 2025-10-02 13:26:35.685 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:35 np0005466031 nova_compute[235803]: 2025-10-02 13:26:35.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:26:36 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/859102279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:26:36 np0005466031 nova_compute[235803]: 2025-10-02 13:26:36.134 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:36 np0005466031 nova_compute[235803]: 2025-10-02 13:26:36.279 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:26:36 np0005466031 nova_compute[235803]: 2025-10-02 13:26:36.280 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4099MB free_disk=20.98827362060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:26:36 np0005466031 nova_compute[235803]: 2025-10-02 13:26:36.280 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:36 np0005466031 nova_compute[235803]: 2025-10-02 13:26:36.280 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:36.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:36 np0005466031 nova_compute[235803]: 2025-10-02 13:26:36.483 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:26:36 np0005466031 nova_compute[235803]: 2025-10-02 13:26:36.484 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:26:36 np0005466031 nova_compute[235803]: 2025-10-02 13:26:36.498 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:26:36 np0005466031 nova_compute[235803]: 2025-10-02 13:26:36.537 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:26:36 np0005466031 nova_compute[235803]: 2025-10-02 13:26:36.538 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:26:36 np0005466031 nova_compute[235803]: 2025-10-02 13:26:36.552 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:26:36 np0005466031 nova_compute[235803]: 2025-10-02 13:26:36.575 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:26:36 np0005466031 nova_compute[235803]: 2025-10-02 13:26:36.589 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:26:37 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3214726545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:26:37 np0005466031 nova_compute[235803]: 2025-10-02 13:26:37.026 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:37 np0005466031 nova_compute[235803]: 2025-10-02 13:26:37.033 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:26:37 np0005466031 nova_compute[235803]: 2025-10-02 13:26:37.050 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:26:37 np0005466031 nova_compute[235803]: 2025-10-02 13:26:37.077 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:26:37 np0005466031 nova_compute[235803]: 2025-10-02 13:26:37.077 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:37 np0005466031 nova_compute[235803]: 2025-10-02 13:26:37.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:37.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:38.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:39 np0005466031 nova_compute[235803]: 2025-10-02 13:26:39.078 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:39.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:26:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:40.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:26:40 np0005466031 nova_compute[235803]: 2025-10-02 13:26:40.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:41.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:42.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:42 np0005466031 nova_compute[235803]: 2025-10-02 13:26:42.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:26:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 74K writes, 304K keys, 74K commit groups, 1.0 writes per commit group, ingest: 0.31 GB, 0.05 MB/s#012Cumulative WAL: 74K writes, 27K syncs, 2.73 writes per sync, written: 0.31 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2624 writes, 10K keys, 2624 commit groups, 1.0 writes per commit group, ingest: 10.09 MB, 0.02 MB/s#012Interval WAL: 2624 writes, 1007 syncs, 2.61 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559e305a74b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559e305a74b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Oct  2 09:26:42 np0005466031 nova_compute[235803]: 2025-10-02 13:26:42.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:43.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:44.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:44 np0005466031 nova_compute[235803]: 2025-10-02 13:26:44.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:44 np0005466031 nova_compute[235803]: 2025-10-02 13:26:44.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:26:44 np0005466031 nova_compute[235803]: 2025-10-02 13:26:44.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:26:44 np0005466031 nova_compute[235803]: 2025-10-02 13:26:44.650 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:26:44 np0005466031 nova_compute[235803]: 2025-10-02 13:26:44.651 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:45.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:45 np0005466031 nova_compute[235803]: 2025-10-02 13:26:45.671 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411590.6701882, 3ff5ed77-0115-45a2-b09e-31c8fff6ac87 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:26:45 np0005466031 nova_compute[235803]: 2025-10-02 13:26:45.672 2 INFO nova.compute.manager [-] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:26:45 np0005466031 nova_compute[235803]: 2025-10-02 13:26:45.697 2 DEBUG nova.compute.manager [None req-675c0b17-cd4d-44ed-b09b-056138d50947 - - - - - -] [instance: 3ff5ed77-0115-45a2-b09e-31c8fff6ac87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:26:45 np0005466031 nova_compute[235803]: 2025-10-02 13:26:45.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:26:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:46.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:26:46 np0005466031 podman[337758]: 2025-10-02 13:26:46.627143217 +0000 UTC m=+0.054393397 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:26:46 np0005466031 podman[337759]: 2025-10-02 13:26:46.664476322 +0000 UTC m=+0.091726582 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 09:26:46 np0005466031 nova_compute[235803]: 2025-10-02 13:26:46.665 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:46 np0005466031 nova_compute[235803]: 2025-10-02 13:26:46.665 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:26:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:47 np0005466031 nova_compute[235803]: 2025-10-02 13:26:47.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:47.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:48.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:26:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:49.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:26:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:50.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e420 e420: 3 total, 3 up, 3 in
Oct  2 09:26:50 np0005466031 nova_compute[235803]: 2025-10-02 13:26:50.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:50 np0005466031 nova_compute[235803]: 2025-10-02 13:26:50.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:51.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:52.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:52 np0005466031 nova_compute[235803]: 2025-10-02 13:26:52.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:53.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:53 np0005466031 podman[337808]: 2025-10-02 13:26:53.628477028 +0000 UTC m=+0.058687981 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:26:53 np0005466031 podman[337807]: 2025-10-02 13:26:53.631599088 +0000 UTC m=+0.064863008 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:26:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:54.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:55.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:55 np0005466031 nova_compute[235803]: 2025-10-02 13:26:55.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:56 np0005466031 nova_compute[235803]: 2025-10-02 13:26:56.229 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:56 np0005466031 nova_compute[235803]: 2025-10-02 13:26:56.230 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:56 np0005466031 nova_compute[235803]: 2025-10-02 13:26:56.247 2 DEBUG nova.compute.manager [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:26:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:26:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:56.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:26:56 np0005466031 nova_compute[235803]: 2025-10-02 13:26:56.330 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:56 np0005466031 nova_compute[235803]: 2025-10-02 13:26:56.331 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:56 np0005466031 nova_compute[235803]: 2025-10-02 13:26:56.340 2 DEBUG nova.virt.hardware [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:26:56 np0005466031 nova_compute[235803]: 2025-10-02 13:26:56.341 2 INFO nova.compute.claims [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:26:56 np0005466031 nova_compute[235803]: 2025-10-02 13:26:56.454 2 DEBUG oslo_concurrency.processutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:26:56 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2514806046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:26:56 np0005466031 nova_compute[235803]: 2025-10-02 13:26:56.940 2 DEBUG oslo_concurrency.processutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:56 np0005466031 nova_compute[235803]: 2025-10-02 13:26:56.947 2 DEBUG nova.compute.provider_tree [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:26:57 np0005466031 nova_compute[235803]: 2025-10-02 13:26:57.252 2 DEBUG nova.scheduler.client.report [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:26:57 np0005466031 nova_compute[235803]: 2025-10-02 13:26:57.283 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:57 np0005466031 nova_compute[235803]: 2025-10-02 13:26:57.285 2 DEBUG nova.compute.manager [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:26:57 np0005466031 nova_compute[235803]: 2025-10-02 13:26:57.329 2 DEBUG nova.compute.manager [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:26:57 np0005466031 nova_compute[235803]: 2025-10-02 13:26:57.330 2 DEBUG nova.network.neutron [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:26:57 np0005466031 nova_compute[235803]: 2025-10-02 13:26:57.365 2 INFO nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:26:57 np0005466031 nova_compute[235803]: 2025-10-02 13:26:57.389 2 DEBUG nova.compute.manager [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:26:57 np0005466031 nova_compute[235803]: 2025-10-02 13:26:57.432 2 INFO nova.virt.block_device [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Booting with volume snapshot 9ae89550-43b8-4753-be7a-188d37df60ee at /dev/vda#033[00m
Oct  2 09:26:57 np0005466031 nova_compute[235803]: 2025-10-02 13:26:57.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:57.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:57 np0005466031 nova_compute[235803]: 2025-10-02 13:26:57.923 2 DEBUG nova.policy [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cb47684d0b34c729e9611e7b3943bed', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '18799a1c93354809911705bb424e673f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:26:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:58.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:58.877 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:26:58 np0005466031 nova_compute[235803]: 2025-10-02 13:26:58.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:58 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:26:58.880 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:26:58 np0005466031 nova_compute[235803]: 2025-10-02 13:26:58.997 2 DEBUG nova.network.neutron [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Successfully created port: dcf338f2-d183-4498-9e4b-ee36f16b6600 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:26:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:26:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:26:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:59.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:00 np0005466031 nova_compute[235803]: 2025-10-02 13:27:00.126 2 DEBUG nova.network.neutron [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Successfully updated port: dcf338f2-d183-4498-9e4b-ee36f16b6600 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:27:00 np0005466031 nova_compute[235803]: 2025-10-02 13:27:00.139 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "refresh_cache-94242ea5-c37c-4fde-b4f4-3b1145291ba8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:27:00 np0005466031 nova_compute[235803]: 2025-10-02 13:27:00.140 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquired lock "refresh_cache-94242ea5-c37c-4fde-b4f4-3b1145291ba8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:27:00 np0005466031 nova_compute[235803]: 2025-10-02 13:27:00.140 2 DEBUG nova.network.neutron [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:27:00 np0005466031 nova_compute[235803]: 2025-10-02 13:27:00.238 2 DEBUG nova.compute.manager [req-e681de34-0e3d-43d8-82e4-426730da4228 req-4be55cc1-5edf-4c90-b907-48c9cfe061e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Received event network-changed-dcf338f2-d183-4498-9e4b-ee36f16b6600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:27:00 np0005466031 nova_compute[235803]: 2025-10-02 13:27:00.239 2 DEBUG nova.compute.manager [req-e681de34-0e3d-43d8-82e4-426730da4228 req-4be55cc1-5edf-4c90-b907-48c9cfe061e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Refreshing instance network info cache due to event network-changed-dcf338f2-d183-4498-9e4b-ee36f16b6600. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:27:00 np0005466031 nova_compute[235803]: 2025-10-02 13:27:00.239 2 DEBUG oslo_concurrency.lockutils [req-e681de34-0e3d-43d8-82e4-426730da4228 req-4be55cc1-5edf-4c90-b907-48c9cfe061e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-94242ea5-c37c-4fde-b4f4-3b1145291ba8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:27:00 np0005466031 nova_compute[235803]: 2025-10-02 13:27:00.288 2 DEBUG nova.network.neutron [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:27:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:00.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:00 np0005466031 nova_compute[235803]: 2025-10-02 13:27:00.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.564 2 DEBUG nova.network.neutron [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Updating instance_info_cache with network_info: [{"id": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "address": "fa:16:3e:9a:6e:47", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcf338f2-d1", "ovs_interfaceid": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:27:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:01.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.588 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Releasing lock "refresh_cache-94242ea5-c37c-4fde-b4f4-3b1145291ba8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.589 2 DEBUG nova.compute.manager [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Instance network_info: |[{"id": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "address": "fa:16:3e:9a:6e:47", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcf338f2-d1", "ovs_interfaceid": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.589 2 DEBUG oslo_concurrency.lockutils [req-e681de34-0e3d-43d8-82e4-426730da4228 req-4be55cc1-5edf-4c90-b907-48c9cfe061e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-94242ea5-c37c-4fde-b4f4-3b1145291ba8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.590 2 DEBUG nova.network.neutron [req-e681de34-0e3d-43d8-82e4-426730da4228 req-4be55cc1-5edf-4c90-b907-48c9cfe061e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Refreshing network info cache for port dcf338f2-d183-4498-9e4b-ee36f16b6600 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.689 2 DEBUG os_brick.utils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.690 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.705 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.706 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea34dad-a2c3-476f-abd5-1336b46f8745]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.708 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.717 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.717 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[b355fe70-98b9-4580-ac1f-25c50fa9cb95]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.719 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.729 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.729 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4da9dd-c88f-407d-bdb0-a1f7de4d8a1f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.731 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[850febf4-a38b-4316-ab22-72755f1a4ed5]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.732 2 DEBUG oslo_concurrency.processutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.774 2 DEBUG oslo_concurrency.processutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.776 2 DEBUG os_brick.initiator.connectors.lightos [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.777 2 DEBUG os_brick.initiator.connectors.lightos [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.777 2 DEBUG os_brick.initiator.connectors.lightos [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.777 2 DEBUG os_brick.utils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] <== get_connector_properties: return (88ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:27:01 np0005466031 nova_compute[235803]: 2025-10-02 13:27:01.778 2 DEBUG nova.virt.block_device [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Updating existing volume attachment record: ba8dceeb-6508-446a-a054-9e23f36ad0c3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:27:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:02.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.652 2 DEBUG nova.network.neutron [req-e681de34-0e3d-43d8-82e4-426730da4228 req-4be55cc1-5edf-4c90-b907-48c9cfe061e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Updated VIF entry in instance network info cache for port dcf338f2-d183-4498-9e4b-ee36f16b6600. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.652 2 DEBUG nova.network.neutron [req-e681de34-0e3d-43d8-82e4-426730da4228 req-4be55cc1-5edf-4c90-b907-48c9cfe061e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Updating instance_info_cache with network_info: [{"id": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "address": "fa:16:3e:9a:6e:47", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcf338f2-d1", "ovs_interfaceid": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.700 2 DEBUG oslo_concurrency.lockutils [req-e681de34-0e3d-43d8-82e4-426730da4228 req-4be55cc1-5edf-4c90-b907-48c9cfe061e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-94242ea5-c37c-4fde-b4f4-3b1145291ba8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.952 2 DEBUG nova.compute.manager [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.953 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.954 2 INFO nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Creating image(s)#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.954 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.954 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Ensure instance console log exists: /var/lib/nova/instances/94242ea5-c37c-4fde-b4f4-3b1145291ba8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.955 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.955 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.955 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.957 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Start _get_guest_xml network_info=[{"id": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "address": "fa:16:3e:9a:6e:47", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcf338f2-d1", "ovs_interfaceid": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2426a7f5-76f6-42eb-8c88-184ea4203bdd', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2426a7f5-76f6-42eb-8c88-184ea4203bdd', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '94242ea5-c37c-4fde-b4f4-3b1145291ba8', 'attached_at': '', 'detached_at': '', 'volume_id': '2426a7f5-76f6-42eb-8c88-184ea4203bdd', 'serial': '2426a7f5-76f6-42eb-8c88-184ea4203bdd'}, 'attachment_id': 'ba8dceeb-6508-446a-a054-9e23f36ad0c3', 'delete_on_termination': True, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.961 2 WARNING nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.965 2 DEBUG nova.virt.libvirt.host [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.965 2 DEBUG nova.virt.libvirt.host [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.968 2 DEBUG nova.virt.libvirt.host [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.968 2 DEBUG nova.virt.libvirt.host [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.969 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.969 2 DEBUG nova.virt.hardware [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.970 2 DEBUG nova.virt.hardware [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.970 2 DEBUG nova.virt.hardware [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.970 2 DEBUG nova.virt.hardware [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.970 2 DEBUG nova.virt.hardware [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.971 2 DEBUG nova.virt.hardware [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.971 2 DEBUG nova.virt.hardware [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.971 2 DEBUG nova.virt.hardware [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.971 2 DEBUG nova.virt.hardware [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.971 2 DEBUG nova.virt.hardware [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.971 2 DEBUG nova.virt.hardware [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:27:02 np0005466031 nova_compute[235803]: 2025-10-02 13:27:02.998 2 DEBUG nova.storage.rbd_utils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 94242ea5-c37c-4fde-b4f4-3b1145291ba8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.002 2 DEBUG oslo_concurrency.processutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:03 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:27:03 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3139082809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.550 2 DEBUG oslo_concurrency.processutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:03.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.638 2 DEBUG nova.virt.libvirt.vif [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:26:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1548178805',display_name='tempest-TestVolumeBootPattern-server-1548178805',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1548178805',id=219,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-d8o96vh8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:26:57Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=94242ea5-c37c-4fde-b4f4-3b1145291ba8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "address": "fa:16:3e:9a:6e:47", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcf338f2-d1", "ovs_interfaceid": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.638 2 DEBUG nova.network.os_vif_util [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "address": "fa:16:3e:9a:6e:47", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcf338f2-d1", "ovs_interfaceid": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.639 2 DEBUG nova.network.os_vif_util [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:6e:47,bridge_name='br-int',has_traffic_filtering=True,id=dcf338f2-d183-4498-9e4b-ee36f16b6600,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcf338f2-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.640 2 DEBUG nova.objects.instance [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'pci_devices' on Instance uuid 94242ea5-c37c-4fde-b4f4-3b1145291ba8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.685 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  <uuid>94242ea5-c37c-4fde-b4f4-3b1145291ba8</uuid>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  <name>instance-000000db</name>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestVolumeBootPattern-server-1548178805</nova:name>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:27:02</nova:creationTime>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <nova:user uuid="2cb47684d0b34c729e9611e7b3943bed">tempest-TestVolumeBootPattern-1344814684-project-member</nova:user>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <nova:project uuid="18799a1c93354809911705bb424e673f">tempest-TestVolumeBootPattern-1344814684</nova:project>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <nova:port uuid="dcf338f2-d183-4498-9e4b-ee36f16b6600">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <entry name="serial">94242ea5-c37c-4fde-b4f4-3b1145291ba8</entry>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <entry name="uuid">94242ea5-c37c-4fde-b4f4-3b1145291ba8</entry>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/94242ea5-c37c-4fde-b4f4-3b1145291ba8_disk.config">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-2426a7f5-76f6-42eb-8c88-184ea4203bdd">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <serial>2426a7f5-76f6-42eb-8c88-184ea4203bdd</serial>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:9a:6e:47"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <target dev="tapdcf338f2-d1"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/94242ea5-c37c-4fde-b4f4-3b1145291ba8/console.log" append="off"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:27:03 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:27:03 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:27:03 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:27:03 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.685 2 DEBUG nova.compute.manager [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Preparing to wait for external event network-vif-plugged-dcf338f2-d183-4498-9e4b-ee36f16b6600 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.685 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.686 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.686 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.686 2 DEBUG nova.virt.libvirt.vif [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:26:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1548178805',display_name='tempest-TestVolumeBootPattern-server-1548178805',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1548178805',id=219,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-d8o96vh8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:26:57Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=94242ea5-c37c-4fde-b4f4-3b1145291ba8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "address": "fa:16:3e:9a:6e:47", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcf338f2-d1", "ovs_interfaceid": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.687 2 DEBUG nova.network.os_vif_util [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "address": "fa:16:3e:9a:6e:47", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcf338f2-d1", "ovs_interfaceid": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.687 2 DEBUG nova.network.os_vif_util [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:6e:47,bridge_name='br-int',has_traffic_filtering=True,id=dcf338f2-d183-4498-9e4b-ee36f16b6600,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcf338f2-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.687 2 DEBUG os_vif [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:6e:47,bridge_name='br-int',has_traffic_filtering=True,id=dcf338f2-d183-4498-9e4b-ee36f16b6600,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcf338f2-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.688 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdcf338f2-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdcf338f2-d1, col_values=(('external_ids', {'iface-id': 'dcf338f2-d183-4498-9e4b-ee36f16b6600', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:6e:47', 'vm-uuid': '94242ea5-c37c-4fde-b4f4-3b1145291ba8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:03 np0005466031 NetworkManager[44907]: <info>  [1759411623.6945] manager: (tapdcf338f2-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.701 2 INFO os_vif [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:6e:47,bridge_name='br-int',has_traffic_filtering=True,id=dcf338f2-d183-4498-9e4b-ee36f16b6600,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcf338f2-d1')#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.852 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.853 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.853 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No VIF found with MAC fa:16:3e:9a:6e:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.853 2 INFO nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Using config drive#033[00m
Oct  2 09:27:03 np0005466031 nova_compute[235803]: 2025-10-02 13:27:03.879 2 DEBUG nova.storage.rbd_utils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 94242ea5-c37c-4fde-b4f4-3b1145291ba8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:27:04 np0005466031 nova_compute[235803]: 2025-10-02 13:27:04.209 2 INFO nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Creating config drive at /var/lib/nova/instances/94242ea5-c37c-4fde-b4f4-3b1145291ba8/disk.config#033[00m
Oct  2 09:27:04 np0005466031 nova_compute[235803]: 2025-10-02 13:27:04.213 2 DEBUG oslo_concurrency.processutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/94242ea5-c37c-4fde-b4f4-3b1145291ba8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5vbftam6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:27:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:04.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:27:04 np0005466031 nova_compute[235803]: 2025-10-02 13:27:04.352 2 DEBUG oslo_concurrency.processutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/94242ea5-c37c-4fde-b4f4-3b1145291ba8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5vbftam6" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:04 np0005466031 nova_compute[235803]: 2025-10-02 13:27:04.383 2 DEBUG nova.storage.rbd_utils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 94242ea5-c37c-4fde-b4f4-3b1145291ba8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:27:04 np0005466031 nova_compute[235803]: 2025-10-02 13:27:04.387 2 DEBUG oslo_concurrency.processutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/94242ea5-c37c-4fde-b4f4-3b1145291ba8/disk.config 94242ea5-c37c-4fde-b4f4-3b1145291ba8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:04 np0005466031 nova_compute[235803]: 2025-10-02 13:27:04.558 2 DEBUG oslo_concurrency.processutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/94242ea5-c37c-4fde-b4f4-3b1145291ba8/disk.config 94242ea5-c37c-4fde-b4f4-3b1145291ba8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:04 np0005466031 nova_compute[235803]: 2025-10-02 13:27:04.559 2 INFO nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Deleting local config drive /var/lib/nova/instances/94242ea5-c37c-4fde-b4f4-3b1145291ba8/disk.config because it was imported into RBD.#033[00m
Oct  2 09:27:04 np0005466031 kernel: tapdcf338f2-d1: entered promiscuous mode
Oct  2 09:27:04 np0005466031 NetworkManager[44907]: <info>  [1759411624.6201] manager: (tapdcf338f2-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Oct  2 09:27:04 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:04Z|00871|binding|INFO|Claiming lport dcf338f2-d183-4498-9e4b-ee36f16b6600 for this chassis.
Oct  2 09:27:04 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:04Z|00872|binding|INFO|dcf338f2-d183-4498-9e4b-ee36f16b6600: Claiming fa:16:3e:9a:6e:47 10.100.0.11
Oct  2 09:27:04 np0005466031 nova_compute[235803]: 2025-10-02 13:27:04.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:04 np0005466031 nova_compute[235803]: 2025-10-02 13:27:04.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:04 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:04Z|00873|binding|INFO|Setting lport dcf338f2-d183-4498-9e4b-ee36f16b6600 ovn-installed in OVS
Oct  2 09:27:04 np0005466031 nova_compute[235803]: 2025-10-02 13:27:04.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:04 np0005466031 nova_compute[235803]: 2025-10-02 13:27:04.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:04 np0005466031 systemd-udevd[338045]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:27:04 np0005466031 systemd-machined[192227]: New machine qemu-100-instance-000000db.
Oct  2 09:27:04 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:04Z|00874|binding|INFO|Setting lport dcf338f2-d183-4498-9e4b-ee36f16b6600 up in Southbound
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.666 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:6e:47 10.100.0.11'], port_security=['fa:16:3e:9a:6e:47 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '94242ea5-c37c-4fde-b4f4-3b1145291ba8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9e3c9d0c-4774-4653-b077-2f10893accdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=dcf338f2-d183-4498-9e4b-ee36f16b6600) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.668 141898 INFO neutron.agent.ovn.metadata.agent [-] Port dcf338f2-d183-4498-9e4b-ee36f16b6600 in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 bound to our chassis#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.669 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5#033[00m
Oct  2 09:27:04 np0005466031 NetworkManager[44907]: <info>  [1759411624.6780] device (tapdcf338f2-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:27:04 np0005466031 NetworkManager[44907]: <info>  [1759411624.6792] device (tapdcf338f2-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:27:04 np0005466031 systemd[1]: Started Virtual Machine qemu-100-instance-000000db.
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.687 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[19e20c0f-ec01-4116-80d5-f8422718a40e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.687 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap858f2b6f-81 in ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.694 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap858f2b6f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.694 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[32e29121-3468-47f5-a6f3-6e8eb3009bb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.696 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d5795fae-c939-4d5c-bb69-d301645a8c9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.712 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ef2522-876e-48f3-8954-cc84a907d146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.727 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3aafedc5-6ac6-4e77-920a-e81751116af3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.760 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[162d63fa-e1d0-4549-ab6b-439e2a8d23fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 NetworkManager[44907]: <info>  [1759411624.7653] manager: (tap858f2b6f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.766 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[87d6934d-5110-436e-a7ef-bb25c5c1e222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.800 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[45b5d6d0-6eef-4534-8a93-02f4c375ed6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.803 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[4d34b907-ca85-42bd-8996-2468ca20d538]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 NetworkManager[44907]: <info>  [1759411624.8253] device (tap858f2b6f-80): carrier: link connected
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.836 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[bcfa5238-fb73-49f8-bc0a-40a27b52f494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.854 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3a704a63-7e19-4539-99cb-0083ab58659e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948041, 'reachable_time': 19070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338078, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.872 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d3fb8484-fc05-4504-9c2c-226fcb92e4b5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:29ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948041, 'tstamp': 948041}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338079, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.895 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e21ee399-db72-4fba-882a-042c408e953b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948041, 'reachable_time': 19070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338080, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.930 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6f5a65a0-a25a-4aaf-807f-598dc9547e03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.994 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[5bba2de1-65d5-4c04-9712-d5162908a3da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.995 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.996 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:27:04 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:04.996 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap858f2b6f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:05 np0005466031 kernel: tap858f2b6f-80: entered promiscuous mode
Oct  2 09:27:05 np0005466031 NetworkManager[44907]: <info>  [1759411624.9993] manager: (tap858f2b6f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:05.006 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap858f2b6f-80, col_values=(('external_ids', {'iface-id': 'cd468d5a-0c73-498a-8776-3dc2ab63d9cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:05 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:05Z|00875|binding|INFO|Releasing lport cd468d5a-0c73-498a-8776-3dc2ab63d9cf from this chassis (sb_readonly=0)
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:05.010 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:05.010 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b49739-f42b-458a-9aaf-edff16e15c6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:05.011 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:27:05 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:05.012 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'env', 'PROCESS_TAG=haproxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.147 2 DEBUG nova.compute.manager [req-d0edc57a-901a-4d5a-9438-a9b9b9e7fdb2 req-c1619789-4cea-4000-89a5-47d8b4229d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Received event network-vif-plugged-dcf338f2-d183-4498-9e4b-ee36f16b6600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.148 2 DEBUG oslo_concurrency.lockutils [req-d0edc57a-901a-4d5a-9438-a9b9b9e7fdb2 req-c1619789-4cea-4000-89a5-47d8b4229d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.148 2 DEBUG oslo_concurrency.lockutils [req-d0edc57a-901a-4d5a-9438-a9b9b9e7fdb2 req-c1619789-4cea-4000-89a5-47d8b4229d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.149 2 DEBUG oslo_concurrency.lockutils [req-d0edc57a-901a-4d5a-9438-a9b9b9e7fdb2 req-c1619789-4cea-4000-89a5-47d8b4229d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.149 2 DEBUG nova.compute.manager [req-d0edc57a-901a-4d5a-9438-a9b9b9e7fdb2 req-c1619789-4cea-4000-89a5-47d8b4229d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Processing event network-vif-plugged-dcf338f2-d183-4498-9e4b-ee36f16b6600 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:27:05 np0005466031 podman[338153]: 2025-10-02 13:27:05.383379633 +0000 UTC m=+0.049545778 container create 210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:27:05 np0005466031 systemd[1]: Started libpod-conmon-210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd.scope.
Oct  2 09:27:05 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:27:05 np0005466031 podman[338153]: 2025-10-02 13:27:05.357896959 +0000 UTC m=+0.024063134 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:27:05 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f855aa556af7cbbfe908429c1aa47e1eecd5fa06771644c2fbb7ecd0b14ff1a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:27:05 np0005466031 podman[338153]: 2025-10-02 13:27:05.469732919 +0000 UTC m=+0.135899094 container init 210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 09:27:05 np0005466031 podman[338153]: 2025-10-02 13:27:05.475819775 +0000 UTC m=+0.141985920 container start 210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:27:05 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[338168]: [NOTICE]   (338172) : New worker (338174) forked
Oct  2 09:27:05 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[338168]: [NOTICE]   (338172) : Loading success.
Oct  2 09:27:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:05.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.718 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411625.7179303, 94242ea5-c37c-4fde-b4f4-3b1145291ba8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.719 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] VM Started (Lifecycle Event)#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.721 2 DEBUG nova.compute.manager [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.724 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.727 2 INFO nova.virt.libvirt.driver [-] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Instance spawned successfully.#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.727 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.760 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.764 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.768 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.768 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.769 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.769 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.769 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.769 2 DEBUG nova.virt.libvirt.driver [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.818 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.818 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411625.7180324, 94242ea5-c37c-4fde-b4f4-3b1145291ba8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.818 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.892 2 INFO nova.compute.manager [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Took 2.94 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.893 2 DEBUG nova.compute.manager [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.918 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.922 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411625.723621, 94242ea5-c37c-4fde-b4f4-3b1145291ba8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.922 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.959 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:27:05 np0005466031 nova_compute[235803]: 2025-10-02 13:27:05.962 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:27:06 np0005466031 nova_compute[235803]: 2025-10-02 13:27:06.019 2 INFO nova.compute.manager [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Took 9.72 seconds to build instance.#033[00m
Oct  2 09:27:06 np0005466031 nova_compute[235803]: 2025-10-02 13:27:06.213 2 DEBUG oslo_concurrency.lockutils [None req-3b68728d-edd4-4ecf-8122-3e70982f24f9 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:06.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:07 np0005466031 nova_compute[235803]: 2025-10-02 13:27:07.247 2 DEBUG nova.compute.manager [req-cc9e5524-526a-479d-bdc1-79876f904fe4 req-9e148788-9250-4897-82f5-6673a10e6c3c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Received event network-vif-plugged-dcf338f2-d183-4498-9e4b-ee36f16b6600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:27:07 np0005466031 nova_compute[235803]: 2025-10-02 13:27:07.248 2 DEBUG oslo_concurrency.lockutils [req-cc9e5524-526a-479d-bdc1-79876f904fe4 req-9e148788-9250-4897-82f5-6673a10e6c3c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:07 np0005466031 nova_compute[235803]: 2025-10-02 13:27:07.248 2 DEBUG oslo_concurrency.lockutils [req-cc9e5524-526a-479d-bdc1-79876f904fe4 req-9e148788-9250-4897-82f5-6673a10e6c3c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:07 np0005466031 nova_compute[235803]: 2025-10-02 13:27:07.248 2 DEBUG oslo_concurrency.lockutils [req-cc9e5524-526a-479d-bdc1-79876f904fe4 req-9e148788-9250-4897-82f5-6673a10e6c3c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:07 np0005466031 nova_compute[235803]: 2025-10-02 13:27:07.249 2 DEBUG nova.compute.manager [req-cc9e5524-526a-479d-bdc1-79876f904fe4 req-9e148788-9250-4897-82f5-6673a10e6c3c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] No waiting events found dispatching network-vif-plugged-dcf338f2-d183-4498-9e4b-ee36f16b6600 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:27:07 np0005466031 nova_compute[235803]: 2025-10-02 13:27:07.249 2 WARNING nova.compute.manager [req-cc9e5524-526a-479d-bdc1-79876f904fe4 req-9e148788-9250-4897-82f5-6673a10e6c3c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Received unexpected event network-vif-plugged-dcf338f2-d183-4498-9e4b-ee36f16b6600 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:27:07 np0005466031 nova_compute[235803]: 2025-10-02 13:27:07.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:07.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:07 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:07.885 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.262 2 DEBUG oslo_concurrency.lockutils [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.263 2 DEBUG oslo_concurrency.lockutils [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.263 2 DEBUG oslo_concurrency.lockutils [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.264 2 DEBUG oslo_concurrency.lockutils [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.265 2 DEBUG oslo_concurrency.lockutils [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.266 2 INFO nova.compute.manager [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Terminating instance#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.267 2 DEBUG nova.compute.manager [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:27:08 np0005466031 kernel: tapdcf338f2-d1 (unregistering): left promiscuous mode
Oct  2 09:27:08 np0005466031 NetworkManager[44907]: <info>  [1759411628.3182] device (tapdcf338f2-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:27:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:08.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:08 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:08Z|00876|binding|INFO|Releasing lport dcf338f2-d183-4498-9e4b-ee36f16b6600 from this chassis (sb_readonly=0)
Oct  2 09:27:08 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:08Z|00877|binding|INFO|Setting lport dcf338f2-d183-4498-9e4b-ee36f16b6600 down in Southbound
Oct  2 09:27:08 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:08Z|00878|binding|INFO|Removing iface tapdcf338f2-d1 ovn-installed in OVS
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.347 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:6e:47 10.100.0.11'], port_security=['fa:16:3e:9a:6e:47 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '94242ea5-c37c-4fde-b4f4-3b1145291ba8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9e3c9d0c-4774-4653-b077-2f10893accdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=dcf338f2-d183-4498-9e4b-ee36f16b6600) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.348 141898 INFO neutron.agent.ovn.metadata.agent [-] Port dcf338f2-d183-4498-9e4b-ee36f16b6600 in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 unbound from our chassis#033[00m
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.349 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.351 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[22479b67-ddfe-4327-8d19-c201b39bfd94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.351 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace which is not needed anymore#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:08 np0005466031 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000db.scope: Deactivated successfully.
Oct  2 09:27:08 np0005466031 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000db.scope: Consumed 3.790s CPU time.
Oct  2 09:27:08 np0005466031 systemd-machined[192227]: Machine qemu-100-instance-000000db terminated.
Oct  2 09:27:08 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[338168]: [NOTICE]   (338172) : haproxy version is 2.8.14-c23fe91
Oct  2 09:27:08 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[338168]: [NOTICE]   (338172) : path to executable is /usr/sbin/haproxy
Oct  2 09:27:08 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[338168]: [WARNING]  (338172) : Exiting Master process...
Oct  2 09:27:08 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[338168]: [WARNING]  (338172) : Exiting Master process...
Oct  2 09:27:08 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[338168]: [ALERT]    (338172) : Current worker (338174) exited with code 143 (Terminated)
Oct  2 09:27:08 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[338168]: [WARNING]  (338172) : All workers exited. Exiting... (0)
Oct  2 09:27:08 np0005466031 systemd[1]: libpod-210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd.scope: Deactivated successfully.
Oct  2 09:27:08 np0005466031 podman[338210]: 2025-10-02 13:27:08.508274012 +0000 UTC m=+0.050397552 container died 210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.511 2 INFO nova.virt.libvirt.driver [-] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Instance destroyed successfully.#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.512 2 DEBUG nova.objects.instance [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'resources' on Instance uuid 94242ea5-c37c-4fde-b4f4-3b1145291ba8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.535 2 DEBUG nova.virt.libvirt.vif [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:26:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1548178805',display_name='tempest-TestVolumeBootPattern-server-1548178805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1548178805',id=219,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:27:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-d8o96vh8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:27:05Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=94242ea5-c37c-4fde-b4f4-3b1145291ba8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "address": "fa:16:3e:9a:6e:47", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcf338f2-d1", "ovs_interfaceid": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.536 2 DEBUG nova.network.os_vif_util [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "address": "fa:16:3e:9a:6e:47", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcf338f2-d1", "ovs_interfaceid": "dcf338f2-d183-4498-9e4b-ee36f16b6600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.537 2 DEBUG nova.network.os_vif_util [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:6e:47,bridge_name='br-int',has_traffic_filtering=True,id=dcf338f2-d183-4498-9e4b-ee36f16b6600,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcf338f2-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.542 2 DEBUG os_vif [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:6e:47,bridge_name='br-int',has_traffic_filtering=True,id=dcf338f2-d183-4498-9e4b-ee36f16b6600,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcf338f2-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:27:08 np0005466031 systemd[1]: var-lib-containers-storage-overlay-f855aa556af7cbbfe908429c1aa47e1eecd5fa06771644c2fbb7ecd0b14ff1a7-merged.mount: Deactivated successfully.
Oct  2 09:27:08 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd-userdata-shm.mount: Deactivated successfully.
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.545 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdcf338f2-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:27:08 np0005466031 podman[338210]: 2025-10-02 13:27:08.550530869 +0000 UTC m=+0.092654419 container cleanup 210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.552 2 INFO os_vif [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:6e:47,bridge_name='br-int',has_traffic_filtering=True,id=dcf338f2-d183-4498-9e4b-ee36f16b6600,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcf338f2-d1')#033[00m
Oct  2 09:27:08 np0005466031 systemd[1]: libpod-conmon-210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd.scope: Deactivated successfully.
Oct  2 09:27:08 np0005466031 podman[338256]: 2025-10-02 13:27:08.624777117 +0000 UTC m=+0.044943745 container remove 210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.631 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[1747a5f0-475f-4bd4-8ce8-d9e85695e928]: (4, ('Thu Oct  2 01:27:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd)\n210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd\nThu Oct  2 01:27:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd)\n210a3690f8064581ea7fa721e97f3631a58f92a9670def56eb2c089fa770dcdd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.633 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[ba15b03a-dd1d-4d11-be93-2e72afcf0e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.634 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:08 np0005466031 kernel: tap858f2b6f-80: left promiscuous mode
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.652 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[7d31d36b-4008-41e6-bc9d-145a051a5a3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.680 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e9dcbe-b3d6-4ca3-ae99-630fd0b2b78c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.682 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0c56193b-3995-4925-8c9c-83282d368f7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.703 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6b8493-d12c-4b1f-b03a-1a6ff63bbf8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948034, 'reachable_time': 23347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338282, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:08 np0005466031 systemd[1]: run-netns-ovnmeta\x2d858f2b6f\x2d8fe4\x2d471b\x2d981e\x2d5d0b08d2f4c5.mount: Deactivated successfully.
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.708 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:27:08 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:08.709 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[16fdf1fa-7a4c-4940-9e8e-3f766d08c565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.948 2 INFO nova.virt.libvirt.driver [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Deleting instance files /var/lib/nova/instances/94242ea5-c37c-4fde-b4f4-3b1145291ba8_del#033[00m
Oct  2 09:27:08 np0005466031 nova_compute[235803]: 2025-10-02 13:27:08.950 2 INFO nova.virt.libvirt.driver [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Deletion of /var/lib/nova/instances/94242ea5-c37c-4fde-b4f4-3b1145291ba8_del complete#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.009 2 INFO nova.compute.manager [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.010 2 DEBUG oslo.service.loopingcall [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.011 2 DEBUG nova.compute.manager [-] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.012 2 DEBUG nova.network.neutron [-] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.319 2 DEBUG nova.compute.manager [req-15044b56-b77a-4631-ab5c-3a53700bb368 req-a44c0ca5-a7ec-4c56-8951-1e3d4f7d9621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Received event network-vif-unplugged-dcf338f2-d183-4498-9e4b-ee36f16b6600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.321 2 DEBUG oslo_concurrency.lockutils [req-15044b56-b77a-4631-ab5c-3a53700bb368 req-a44c0ca5-a7ec-4c56-8951-1e3d4f7d9621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.322 2 DEBUG oslo_concurrency.lockutils [req-15044b56-b77a-4631-ab5c-3a53700bb368 req-a44c0ca5-a7ec-4c56-8951-1e3d4f7d9621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.322 2 DEBUG oslo_concurrency.lockutils [req-15044b56-b77a-4631-ab5c-3a53700bb368 req-a44c0ca5-a7ec-4c56-8951-1e3d4f7d9621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.322 2 DEBUG nova.compute.manager [req-15044b56-b77a-4631-ab5c-3a53700bb368 req-a44c0ca5-a7ec-4c56-8951-1e3d4f7d9621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] No waiting events found dispatching network-vif-unplugged-dcf338f2-d183-4498-9e4b-ee36f16b6600 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.322 2 DEBUG nova.compute.manager [req-15044b56-b77a-4631-ab5c-3a53700bb368 req-a44c0ca5-a7ec-4c56-8951-1e3d4f7d9621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Received event network-vif-unplugged-dcf338f2-d183-4498-9e4b-ee36f16b6600 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.323 2 DEBUG nova.compute.manager [req-15044b56-b77a-4631-ab5c-3a53700bb368 req-a44c0ca5-a7ec-4c56-8951-1e3d4f7d9621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Received event network-vif-plugged-dcf338f2-d183-4498-9e4b-ee36f16b6600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.323 2 DEBUG oslo_concurrency.lockutils [req-15044b56-b77a-4631-ab5c-3a53700bb368 req-a44c0ca5-a7ec-4c56-8951-1e3d4f7d9621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.323 2 DEBUG oslo_concurrency.lockutils [req-15044b56-b77a-4631-ab5c-3a53700bb368 req-a44c0ca5-a7ec-4c56-8951-1e3d4f7d9621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.324 2 DEBUG oslo_concurrency.lockutils [req-15044b56-b77a-4631-ab5c-3a53700bb368 req-a44c0ca5-a7ec-4c56-8951-1e3d4f7d9621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.324 2 DEBUG nova.compute.manager [req-15044b56-b77a-4631-ab5c-3a53700bb368 req-a44c0ca5-a7ec-4c56-8951-1e3d4f7d9621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] No waiting events found dispatching network-vif-plugged-dcf338f2-d183-4498-9e4b-ee36f16b6600 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.324 2 WARNING nova.compute.manager [req-15044b56-b77a-4631-ab5c-3a53700bb368 req-a44c0ca5-a7ec-4c56-8951-1e3d4f7d9621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Received unexpected event network-vif-plugged-dcf338f2-d183-4498-9e4b-ee36f16b6600 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:27:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:09.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.621 2 DEBUG nova.network.neutron [-] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.647 2 INFO nova.compute.manager [-] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Took 0.64 seconds to deallocate network for instance.#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.710 2 DEBUG nova.compute.manager [req-1d52352f-bd32-485c-93bd-3b797866b800 req-e2a78045-c7ad-4b65-bf9a-3cfda458685b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Received event network-vif-deleted-dcf338f2-d183-4498-9e4b-ee36f16b6600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.849 2 INFO nova.compute.manager [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Took 0.20 seconds to detach 1 volumes for instance.#033[00m
Oct  2 09:27:09 np0005466031 nova_compute[235803]: 2025-10-02 13:27:09.851 2 DEBUG nova.compute.manager [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Deleting volume: 2426a7f5-76f6-42eb-8c88-184ea4203bdd _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct  2 09:27:10 np0005466031 nova_compute[235803]: 2025-10-02 13:27:10.047 2 DEBUG oslo_concurrency.lockutils [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:10 np0005466031 nova_compute[235803]: 2025-10-02 13:27:10.048 2 DEBUG oslo_concurrency.lockutils [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:10 np0005466031 nova_compute[235803]: 2025-10-02 13:27:10.099 2 DEBUG oslo_concurrency.processutils [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:10.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:27:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/288643418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:27:10 np0005466031 nova_compute[235803]: 2025-10-02 13:27:10.551 2 DEBUG oslo_concurrency.processutils [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:10 np0005466031 nova_compute[235803]: 2025-10-02 13:27:10.559 2 DEBUG nova.compute.provider_tree [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:27:10 np0005466031 nova_compute[235803]: 2025-10-02 13:27:10.574 2 DEBUG nova.scheduler.client.report [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:27:10 np0005466031 nova_compute[235803]: 2025-10-02 13:27:10.595 2 DEBUG oslo_concurrency.lockutils [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:10 np0005466031 nova_compute[235803]: 2025-10-02 13:27:10.615 2 INFO nova.scheduler.client.report [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Deleted allocations for instance 94242ea5-c37c-4fde-b4f4-3b1145291ba8#033[00m
Oct  2 09:27:10 np0005466031 nova_compute[235803]: 2025-10-02 13:27:10.689 2 DEBUG oslo_concurrency.lockutils [None req-d332db87-deee-4cf8-a2d2-de9742c3902b 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "94242ea5-c37c-4fde-b4f4-3b1145291ba8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:11.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e421 e421: 3 total, 3 up, 3 in
Oct  2 09:27:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:12.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:12 np0005466031 nova_compute[235803]: 2025-10-02 13:27:12.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:13 np0005466031 nova_compute[235803]: 2025-10-02 13:27:13.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:13.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:14.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000057s ======
Oct  2 09:27:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:15.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Oct  2 09:27:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:16.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:17 np0005466031 nova_compute[235803]: 2025-10-02 13:27:17.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:17.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:17 np0005466031 podman[338310]: 2025-10-02 13:27:17.644692289 +0000 UTC m=+0.066670680 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:27:17 np0005466031 podman[338311]: 2025-10-02 13:27:17.679894333 +0000 UTC m=+0.096988564 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:27:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:18.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:18 np0005466031 nova_compute[235803]: 2025-10-02 13:27:18.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:19 np0005466031 podman[338526]: 2025-10-02 13:27:19.076939979 +0000 UTC m=+0.069855662 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 09:27:19 np0005466031 podman[338526]: 2025-10-02 13:27:19.175911119 +0000 UTC m=+0.168826782 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 09:27:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:19.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:19 np0005466031 podman[338659]: 2025-10-02 13:27:19.718775301 +0000 UTC m=+0.048084656 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 09:27:19 np0005466031 podman[338659]: 2025-10-02 13:27:19.730091647 +0000 UTC m=+0.059400982 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 09:27:19 np0005466031 podman[338723]: 2025-10-02 13:27:19.923488066 +0000 UTC m=+0.055092948 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-type=git, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.component=keepalived-container)
Oct  2 09:27:19 np0005466031 podman[338723]: 2025-10-02 13:27:19.964860627 +0000 UTC m=+0.096465489 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct  2 09:27:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:20.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:27:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:21 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:27:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:21.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e422 e422: 3 total, 3 up, 3 in
Oct  2 09:27:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:22.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:22 np0005466031 nova_compute[235803]: 2025-10-02 13:27:22.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:23 np0005466031 nova_compute[235803]: 2025-10-02 13:27:23.510 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411628.5076406, 94242ea5-c37c-4fde-b4f4-3b1145291ba8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:27:23 np0005466031 nova_compute[235803]: 2025-10-02 13:27:23.511 2 INFO nova.compute.manager [-] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:27:23 np0005466031 nova_compute[235803]: 2025-10-02 13:27:23.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:23.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:23 np0005466031 nova_compute[235803]: 2025-10-02 13:27:23.638 2 DEBUG nova.compute.manager [None req-bd93104b-220a-4701-b914-a4beec7136dc - - - - - -] [instance: 94242ea5-c37c-4fde-b4f4-3b1145291ba8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:27:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:24.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:24 np0005466031 podman[338958]: 2025-10-02 13:27:24.633289863 +0000 UTC m=+0.064347553 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:27:24 np0005466031 nova_compute[235803]: 2025-10-02 13:27:24.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:24 np0005466031 nova_compute[235803]: 2025-10-02 13:27:24.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:27:24 np0005466031 podman[338957]: 2025-10-02 13:27:24.6401229 +0000 UTC m=+0.071060507 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:27:24 np0005466031 nova_compute[235803]: 2025-10-02 13:27:24.679 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:27:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:25.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:25 np0005466031 nova_compute[235803]: 2025-10-02 13:27:25.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:25 np0005466031 nova_compute[235803]: 2025-10-02 13:27:25.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:25 np0005466031 nova_compute[235803]: 2025-10-02 13:27:25.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:27:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:25.896 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:25.896 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:25.896 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:26.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:27 np0005466031 nova_compute[235803]: 2025-10-02 13:27:27.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:27.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:28.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:28 np0005466031 nova_compute[235803]: 2025-10-02 13:27:28.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:29.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:30.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.178654) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651178719, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1430, "num_deletes": 252, "total_data_size": 3131847, "memory_usage": 3173536, "flush_reason": "Manual Compaction"}
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651193410, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 2055343, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83254, "largest_seqno": 84679, "table_properties": {"data_size": 2049288, "index_size": 3321, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13355, "raw_average_key_size": 20, "raw_value_size": 2036904, "raw_average_value_size": 3086, "num_data_blocks": 146, "num_entries": 660, "num_filter_entries": 660, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411537, "oldest_key_time": 1759411537, "file_creation_time": 1759411651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 14807 microseconds, and 5639 cpu microseconds.
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.193464) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 2055343 bytes OK
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.193487) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.194966) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.194981) EVENT_LOG_v1 {"time_micros": 1759411651194975, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.194998) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 3125161, prev total WAL file size 3140660, number of live WAL files 2.
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.195919) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(2007KB)], [171(10MB)]
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651195952, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 13520838, "oldest_snapshot_seqno": -1}
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 10367 keys, 11507490 bytes, temperature: kUnknown
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651256267, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 11507490, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11442953, "index_size": 37544, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25925, "raw_key_size": 274033, "raw_average_key_size": 26, "raw_value_size": 11263985, "raw_average_value_size": 1086, "num_data_blocks": 1416, "num_entries": 10367, "num_filter_entries": 10367, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759411651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.256799) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 11507490 bytes
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.258903) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.9 rd, 190.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.9 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(12.2) write-amplify(5.6) OK, records in: 10890, records dropped: 523 output_compression: NoCompression
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.258933) EVENT_LOG_v1 {"time_micros": 1759411651258922, "job": 110, "event": "compaction_finished", "compaction_time_micros": 60393, "compaction_time_cpu_micros": 29304, "output_level": 6, "num_output_files": 1, "total_output_size": 11507490, "num_input_records": 10890, "num_output_records": 10367, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651259677, "job": 110, "event": "table_file_deletion", "file_number": 173}
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651262199, "job": 110, "event": "table_file_deletion", "file_number": 171}
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.195824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.262269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.262275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.262276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.262278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:27:31.262280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:27:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:31.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:32 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:32 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:32.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:32 np0005466031 nova_compute[235803]: 2025-10-02 13:27:32.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:33 np0005466031 nova_compute[235803]: 2025-10-02 13:27:33.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:33.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:27:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:34 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:27:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:34.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:34 np0005466031 nova_compute[235803]: 2025-10-02 13:27:34.680 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:35.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:36.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:36 np0005466031 nova_compute[235803]: 2025-10-02 13:27:36.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:36 np0005466031 nova_compute[235803]: 2025-10-02 13:27:36.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:36 np0005466031 nova_compute[235803]: 2025-10-02 13:27:36.677 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:36 np0005466031 nova_compute[235803]: 2025-10-02 13:27:36.678 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:36 np0005466031 nova_compute[235803]: 2025-10-02 13:27:36.678 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:36 np0005466031 nova_compute[235803]: 2025-10-02 13:27:36.678 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:27:36 np0005466031 nova_compute[235803]: 2025-10-02 13:27:36.678 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:27:37 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3531806881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:27:37 np0005466031 nova_compute[235803]: 2025-10-02 13:27:37.174 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:37 np0005466031 nova_compute[235803]: 2025-10-02 13:27:37.404 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:27:37 np0005466031 nova_compute[235803]: 2025-10-02 13:27:37.406 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4099MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:27:37 np0005466031 nova_compute[235803]: 2025-10-02 13:27:37.406 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:37 np0005466031 nova_compute[235803]: 2025-10-02 13:27:37.407 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:37 np0005466031 nova_compute[235803]: 2025-10-02 13:27:37.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:37 np0005466031 nova_compute[235803]: 2025-10-02 13:27:37.494 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:27:37 np0005466031 nova_compute[235803]: 2025-10-02 13:27:37.494 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:27:37 np0005466031 nova_compute[235803]: 2025-10-02 13:27:37.529 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:37.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:27:37 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2860605944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:27:37 np0005466031 nova_compute[235803]: 2025-10-02 13:27:37.986 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:37 np0005466031 nova_compute[235803]: 2025-10-02 13:27:37.992 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:27:38 np0005466031 nova_compute[235803]: 2025-10-02 13:27:38.008 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:27:38 np0005466031 nova_compute[235803]: 2025-10-02 13:27:38.028 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:27:38 np0005466031 nova_compute[235803]: 2025-10-02 13:27:38.029 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:38.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:38 np0005466031 nova_compute[235803]: 2025-10-02 13:27:38.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:39 np0005466031 nova_compute[235803]: 2025-10-02 13:27:39.030 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:39 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:39Z|00879|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct  2 09:27:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000057s ======
Oct  2 09:27:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:39.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Oct  2 09:27:40 np0005466031 nova_compute[235803]: 2025-10-02 13:27:40.290 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "96f0e757-ffe8-40a4-8b11-c76935502853" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:40 np0005466031 nova_compute[235803]: 2025-10-02 13:27:40.291 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:40 np0005466031 nova_compute[235803]: 2025-10-02 13:27:40.310 2 DEBUG nova.compute.manager [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:27:40 np0005466031 nova_compute[235803]: 2025-10-02 13:27:40.389 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:40 np0005466031 nova_compute[235803]: 2025-10-02 13:27:40.389 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:40 np0005466031 nova_compute[235803]: 2025-10-02 13:27:40.395 2 DEBUG nova.virt.hardware [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:27:40 np0005466031 nova_compute[235803]: 2025-10-02 13:27:40.396 2 INFO nova.compute.claims [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:27:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:40.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:40 np0005466031 nova_compute[235803]: 2025-10-02 13:27:40.509 2 DEBUG oslo_concurrency.processutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:27:40 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3155975616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:27:40 np0005466031 nova_compute[235803]: 2025-10-02 13:27:40.971 2 DEBUG oslo_concurrency.processutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:40 np0005466031 nova_compute[235803]: 2025-10-02 13:27:40.980 2 DEBUG nova.compute.provider_tree [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.027 2 DEBUG nova.scheduler.client.report [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.055 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.055 2 DEBUG nova.compute.manager [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.112 2 DEBUG nova.compute.manager [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.112 2 DEBUG nova.network.neutron [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.132 2 INFO nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.156 2 DEBUG nova.compute.manager [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.204 2 INFO nova.virt.block_device [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Booting with volume ffd00fca-313e-46cc-b503-d377c1529a56 at /dev/vda#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.428 2 DEBUG os_brick.utils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.429 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.440 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.440 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[98552501-be4c-4c32-8aa6-b14c2a529ba5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.441 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.449 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.449 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2ed3ab-2d99-474d-b6b7-6039180c3351]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.450 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.457 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.458 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[31f0b13f-37d7-49e9-9fd5-8f7affdf1730]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.459 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[1564d12d-1217-4a70-a5f3-813125ed5652]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.459 2 DEBUG oslo_concurrency.processutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.492 2 DEBUG oslo_concurrency.processutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.494 2 DEBUG os_brick.initiator.connectors.lightos [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.494 2 DEBUG os_brick.initiator.connectors.lightos [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.494 2 DEBUG os_brick.initiator.connectors.lightos [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.495 2 DEBUG os_brick.utils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.495 2 DEBUG nova.virt.block_device [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Updating existing volume attachment record: d901e444-cc00-4b69-bdfd-7de2ef27e29b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:27:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:41.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:41 np0005466031 nova_compute[235803]: 2025-10-02 13:27:41.967 2 DEBUG nova.policy [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cb47684d0b34c729e9611e7b3943bed', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '18799a1c93354809911705bb424e673f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:27:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:42.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:42 np0005466031 nova_compute[235803]: 2025-10-02 13:27:42.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:42 np0005466031 nova_compute[235803]: 2025-10-02 13:27:42.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:42 np0005466031 nova_compute[235803]: 2025-10-02 13:27:42.976 2 DEBUG nova.compute.manager [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:27:42 np0005466031 nova_compute[235803]: 2025-10-02 13:27:42.978 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:27:42 np0005466031 nova_compute[235803]: 2025-10-02 13:27:42.978 2 INFO nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Creating image(s)#033[00m
Oct  2 09:27:42 np0005466031 nova_compute[235803]: 2025-10-02 13:27:42.978 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 09:27:42 np0005466031 nova_compute[235803]: 2025-10-02 13:27:42.978 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Ensure instance console log exists: /var/lib/nova/instances/96f0e757-ffe8-40a4-8b11-c76935502853/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:27:42 np0005466031 nova_compute[235803]: 2025-10-02 13:27:42.979 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:42 np0005466031 nova_compute[235803]: 2025-10-02 13:27:42.979 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:42 np0005466031 nova_compute[235803]: 2025-10-02 13:27:42.979 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:43 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:43.321 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:27:43 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:43.322 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:27:43 np0005466031 nova_compute[235803]: 2025-10-02 13:27:43.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:43 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:43.323 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:43 np0005466031 nova_compute[235803]: 2025-10-02 13:27:43.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:43.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:43 np0005466031 nova_compute[235803]: 2025-10-02 13:27:43.986 2 DEBUG nova.network.neutron [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Successfully created port: 97b9a2a3-7efa-47a4-89ba-36a75b507bed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:27:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:44.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:44 np0005466031 nova_compute[235803]: 2025-10-02 13:27:44.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:44 np0005466031 nova_compute[235803]: 2025-10-02 13:27:44.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:27:44 np0005466031 nova_compute[235803]: 2025-10-02 13:27:44.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:27:44 np0005466031 nova_compute[235803]: 2025-10-02 13:27:44.654 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 09:27:44 np0005466031 nova_compute[235803]: 2025-10-02 13:27:44.654 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:27:45 np0005466031 nova_compute[235803]: 2025-10-02 13:27:45.302 2 DEBUG nova.network.neutron [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Successfully updated port: 97b9a2a3-7efa-47a4-89ba-36a75b507bed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:27:45 np0005466031 nova_compute[235803]: 2025-10-02 13:27:45.319 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "refresh_cache-96f0e757-ffe8-40a4-8b11-c76935502853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:27:45 np0005466031 nova_compute[235803]: 2025-10-02 13:27:45.319 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquired lock "refresh_cache-96f0e757-ffe8-40a4-8b11-c76935502853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:27:45 np0005466031 nova_compute[235803]: 2025-10-02 13:27:45.319 2 DEBUG nova.network.neutron [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:27:45 np0005466031 nova_compute[235803]: 2025-10-02 13:27:45.448 2 DEBUG nova.compute.manager [req-74d695ff-9501-409c-8000-83771987ebed req-511d21da-c17c-4fd4-9729-5f632ab7501a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Received event network-changed-97b9a2a3-7efa-47a4-89ba-36a75b507bed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:27:45 np0005466031 nova_compute[235803]: 2025-10-02 13:27:45.448 2 DEBUG nova.compute.manager [req-74d695ff-9501-409c-8000-83771987ebed req-511d21da-c17c-4fd4-9729-5f632ab7501a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Refreshing instance network info cache due to event network-changed-97b9a2a3-7efa-47a4-89ba-36a75b507bed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:27:45 np0005466031 nova_compute[235803]: 2025-10-02 13:27:45.448 2 DEBUG oslo_concurrency.lockutils [req-74d695ff-9501-409c-8000-83771987ebed req-511d21da-c17c-4fd4-9729-5f632ab7501a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-96f0e757-ffe8-40a4-8b11-c76935502853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:27:45 np0005466031 nova_compute[235803]: 2025-10-02 13:27:45.502 2 DEBUG nova.network.neutron [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:27:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:45.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:46.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:46 np0005466031 nova_compute[235803]: 2025-10-02 13:27:46.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:46 np0005466031 nova_compute[235803]: 2025-10-02 13:27:46.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:27:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.208 2 DEBUG nova.network.neutron [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Updating instance_info_cache with network_info: [{"id": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "address": "fa:16:3e:e4:b7:7b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97b9a2a3-7e", "ovs_interfaceid": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.240 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Releasing lock "refresh_cache-96f0e757-ffe8-40a4-8b11-c76935502853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.241 2 DEBUG nova.compute.manager [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Instance network_info: |[{"id": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "address": "fa:16:3e:e4:b7:7b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97b9a2a3-7e", "ovs_interfaceid": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.241 2 DEBUG oslo_concurrency.lockutils [req-74d695ff-9501-409c-8000-83771987ebed req-511d21da-c17c-4fd4-9729-5f632ab7501a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-96f0e757-ffe8-40a4-8b11-c76935502853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.242 2 DEBUG nova.network.neutron [req-74d695ff-9501-409c-8000-83771987ebed req-511d21da-c17c-4fd4-9729-5f632ab7501a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Refreshing network info cache for port 97b9a2a3-7efa-47a4-89ba-36a75b507bed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.245 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Start _get_guest_xml network_info=[{"id": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "address": "fa:16:3e:e4:b7:7b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97b9a2a3-7e", "ovs_interfaceid": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ffd00fca-313e-46cc-b503-d377c1529a56', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ffd00fca-313e-46cc-b503-d377c1529a56', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '96f0e757-ffe8-40a4-8b11-c76935502853', 'attached_at': '', 'detached_at': '', 'volume_id': 'ffd00fca-313e-46cc-b503-d377c1529a56', 'serial': 'ffd00fca-313e-46cc-b503-d377c1529a56'}, 'attachment_id': 'd901e444-cc00-4b69-bdfd-7de2ef27e29b', 'delete_on_termination': True, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.249 2 WARNING nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.254 2 DEBUG nova.virt.libvirt.host [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.255 2 DEBUG nova.virt.libvirt.host [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.260 2 DEBUG nova.virt.libvirt.host [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.261 2 DEBUG nova.virt.libvirt.host [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.262 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.262 2 DEBUG nova.virt.hardware [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.263 2 DEBUG nova.virt.hardware [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.263 2 DEBUG nova.virt.hardware [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.263 2 DEBUG nova.virt.hardware [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.263 2 DEBUG nova.virt.hardware [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.264 2 DEBUG nova.virt.hardware [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.264 2 DEBUG nova.virt.hardware [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.264 2 DEBUG nova.virt.hardware [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.264 2 DEBUG nova.virt.hardware [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.264 2 DEBUG nova.virt.hardware [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.265 2 DEBUG nova.virt.hardware [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.289 2 DEBUG nova.storage.rbd_utils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 96f0e757-ffe8-40a4-8b11-c76935502853_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.292 2 DEBUG oslo_concurrency.processutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:47.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:27:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4220012604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.732 2 DEBUG oslo_concurrency.processutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.800 2 DEBUG nova.virt.libvirt.vif [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:27:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1976996844',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1976996844',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1976996844',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZmGAY4fnhaxJRnxC4yZp5UpthlRPS9W2SbS9dXHUqaI9PZ13McahuXwzVVJSqQ74HrXtoeqlH7uydccFaGXQlf+BRVE6E5/AZu2Lfz7DPRtw+PxfPWe0ejPDhgqe2u5A==',key_name='tempest-keypair-8645022',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-4z8hxgpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:27:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=96f0e757-ffe8-40a4-8b11-c76935502853,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "address": "fa:16:3e:e4:b7:7b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97b9a2a3-7e", "ovs_interfaceid": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.801 2 DEBUG nova.network.os_vif_util [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "address": "fa:16:3e:e4:b7:7b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97b9a2a3-7e", "ovs_interfaceid": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.803 2 DEBUG nova.network.os_vif_util [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:b7:7b,bridge_name='br-int',has_traffic_filtering=True,id=97b9a2a3-7efa-47a4-89ba-36a75b507bed,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97b9a2a3-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.804 2 DEBUG nova.objects.instance [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'pci_devices' on Instance uuid 96f0e757-ffe8-40a4-8b11-c76935502853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.832 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  <uuid>96f0e757-ffe8-40a4-8b11-c76935502853</uuid>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  <name>instance-000000dc</name>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-1976996844</nova:name>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:27:47</nova:creationTime>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <nova:user uuid="2cb47684d0b34c729e9611e7b3943bed">tempest-TestVolumeBootPattern-1344814684-project-member</nova:user>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <nova:project uuid="18799a1c93354809911705bb424e673f">tempest-TestVolumeBootPattern-1344814684</nova:project>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <nova:port uuid="97b9a2a3-7efa-47a4-89ba-36a75b507bed">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <entry name="serial">96f0e757-ffe8-40a4-8b11-c76935502853</entry>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <entry name="uuid">96f0e757-ffe8-40a4-8b11-c76935502853</entry>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/96f0e757-ffe8-40a4-8b11-c76935502853_disk.config">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-ffd00fca-313e-46cc-b503-d377c1529a56">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <serial>ffd00fca-313e-46cc-b503-d377c1529a56</serial>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:e4:b7:7b"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <target dev="tap97b9a2a3-7e"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/96f0e757-ffe8-40a4-8b11-c76935502853/console.log" append="off"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:27:47 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:27:47 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:27:47 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:27:47 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.833 2 DEBUG nova.compute.manager [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Preparing to wait for external event network-vif-plugged-97b9a2a3-7efa-47a4-89ba-36a75b507bed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.833 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.834 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.834 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.835 2 DEBUG nova.virt.libvirt.vif [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:27:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1976996844',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1976996844',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1976996844',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZmGAY4fnhaxJRnxC4yZp5UpthlRPS9W2SbS9dXHUqaI9PZ13McahuXwzVVJSqQ74HrXtoeqlH7uydccFaGXQlf+BRVE6E5/AZu2Lfz7DPRtw+PxfPWe0ejPDhgqe2u5A==',key_name='tempest-keypair-8645022',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-4z8hxgpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:27:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=96f0e757-ffe8-40a4-8b11-c76935502853,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "address": "fa:16:3e:e4:b7:7b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97b9a2a3-7e", "ovs_interfaceid": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.835 2 DEBUG nova.network.os_vif_util [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "address": "fa:16:3e:e4:b7:7b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97b9a2a3-7e", "ovs_interfaceid": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.835 2 DEBUG nova.network.os_vif_util [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:b7:7b,bridge_name='br-int',has_traffic_filtering=True,id=97b9a2a3-7efa-47a4-89ba-36a75b507bed,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97b9a2a3-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.836 2 DEBUG os_vif [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:b7:7b,bridge_name='br-int',has_traffic_filtering=True,id=97b9a2a3-7efa-47a4-89ba-36a75b507bed,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97b9a2a3-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.836 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.837 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.840 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97b9a2a3-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.840 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97b9a2a3-7e, col_values=(('external_ids', {'iface-id': '97b9a2a3-7efa-47a4-89ba-36a75b507bed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:b7:7b', 'vm-uuid': '96f0e757-ffe8-40a4-8b11-c76935502853'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:47 np0005466031 NetworkManager[44907]: <info>  [1759411667.8768] manager: (tap97b9a2a3-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.885 2 INFO os_vif [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:b7:7b,bridge_name='br-int',has_traffic_filtering=True,id=97b9a2a3-7efa-47a4-89ba-36a75b507bed,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97b9a2a3-7e')#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.942 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.942 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.942 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No VIF found with MAC fa:16:3e:e4:b7:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.943 2 INFO nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Using config drive#033[00m
Oct  2 09:27:47 np0005466031 nova_compute[235803]: 2025-10-02 13:27:47.967 2 DEBUG nova.storage.rbd_utils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 96f0e757-ffe8-40a4-8b11-c76935502853_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:27:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:48.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:48 np0005466031 podman[339294]: 2025-10-02 13:27:48.622910579 +0000 UTC m=+0.055909841 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct  2 09:27:48 np0005466031 podman[339295]: 2025-10-02 13:27:48.656296701 +0000 UTC m=+0.085287007 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:27:49 np0005466031 nova_compute[235803]: 2025-10-02 13:27:49.275 2 INFO nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Creating config drive at /var/lib/nova/instances/96f0e757-ffe8-40a4-8b11-c76935502853/disk.config#033[00m
Oct  2 09:27:49 np0005466031 nova_compute[235803]: 2025-10-02 13:27:49.280 2 DEBUG oslo_concurrency.processutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/96f0e757-ffe8-40a4-8b11-c76935502853/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvg9npg7v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:49 np0005466031 nova_compute[235803]: 2025-10-02 13:27:49.448 2 DEBUG oslo_concurrency.processutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/96f0e757-ffe8-40a4-8b11-c76935502853/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvg9npg7v" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:49 np0005466031 nova_compute[235803]: 2025-10-02 13:27:49.475 2 DEBUG nova.storage.rbd_utils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 96f0e757-ffe8-40a4-8b11-c76935502853_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:27:49 np0005466031 nova_compute[235803]: 2025-10-02 13:27:49.479 2 DEBUG oslo_concurrency.processutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/96f0e757-ffe8-40a4-8b11-c76935502853/disk.config 96f0e757-ffe8-40a4-8b11-c76935502853_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:49.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:49 np0005466031 nova_compute[235803]: 2025-10-02 13:27:49.694 2 DEBUG oslo_concurrency.processutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/96f0e757-ffe8-40a4-8b11-c76935502853/disk.config 96f0e757-ffe8-40a4-8b11-c76935502853_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:49 np0005466031 nova_compute[235803]: 2025-10-02 13:27:49.695 2 INFO nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Deleting local config drive /var/lib/nova/instances/96f0e757-ffe8-40a4-8b11-c76935502853/disk.config because it was imported into RBD.#033[00m
Oct  2 09:27:49 np0005466031 kernel: tap97b9a2a3-7e: entered promiscuous mode
Oct  2 09:27:49 np0005466031 nova_compute[235803]: 2025-10-02 13:27:49.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:49 np0005466031 NetworkManager[44907]: <info>  [1759411669.7638] manager: (tap97b9a2a3-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/398)
Oct  2 09:27:49 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:49Z|00880|binding|INFO|Claiming lport 97b9a2a3-7efa-47a4-89ba-36a75b507bed for this chassis.
Oct  2 09:27:49 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:49Z|00881|binding|INFO|97b9a2a3-7efa-47a4-89ba-36a75b507bed: Claiming fa:16:3e:e4:b7:7b 10.100.0.12
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.773 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:b7:7b 10.100.0.12'], port_security=['fa:16:3e:e4:b7:7b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '96f0e757-ffe8-40a4-8b11-c76935502853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '79380880-e1a5-49f8-b4ef-1ce955cdc492', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=97b9a2a3-7efa-47a4-89ba-36a75b507bed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.774 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 97b9a2a3-7efa-47a4-89ba-36a75b507bed in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 bound to our chassis#033[00m
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.775 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5#033[00m
Oct  2 09:27:49 np0005466031 nova_compute[235803]: 2025-10-02 13:27:49.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:49 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:49Z|00882|binding|INFO|Setting lport 97b9a2a3-7efa-47a4-89ba-36a75b507bed ovn-installed in OVS
Oct  2 09:27:49 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:49Z|00883|binding|INFO|Setting lport 97b9a2a3-7efa-47a4-89ba-36a75b507bed up in Southbound
Oct  2 09:27:49 np0005466031 nova_compute[235803]: 2025-10-02 13:27:49.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.794 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[6e6796a9-0996-43d8-a468-a02652a689b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.794 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap858f2b6f-81 in ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.797 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap858f2b6f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.797 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b16025bb-4951-44d5-9d6e-449b48ef1a03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.798 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd2e988-2f3a-48d9-a81d-fea9851876ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:49 np0005466031 systemd-udevd[339391]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.815 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[ece71055-d749-40ce-894d-e56cb133c089]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:49 np0005466031 systemd-machined[192227]: New machine qemu-101-instance-000000dc.
Oct  2 09:27:49 np0005466031 NetworkManager[44907]: <info>  [1759411669.8215] device (tap97b9a2a3-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:27:49 np0005466031 NetworkManager[44907]: <info>  [1759411669.8230] device (tap97b9a2a3-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.831 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5ead4c-f8d9-4db3-9b76-4d4b381ab6a9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:49 np0005466031 systemd[1]: Started Virtual Machine qemu-101-instance-000000dc.
Oct  2 09:27:49 np0005466031 nova_compute[235803]: 2025-10-02 13:27:49.848 2 DEBUG nova.network.neutron [req-74d695ff-9501-409c-8000-83771987ebed req-511d21da-c17c-4fd4-9729-5f632ab7501a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Updated VIF entry in instance network info cache for port 97b9a2a3-7efa-47a4-89ba-36a75b507bed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:27:49 np0005466031 nova_compute[235803]: 2025-10-02 13:27:49.848 2 DEBUG nova.network.neutron [req-74d695ff-9501-409c-8000-83771987ebed req-511d21da-c17c-4fd4-9729-5f632ab7501a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Updating instance_info_cache with network_info: [{"id": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "address": "fa:16:3e:e4:b7:7b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97b9a2a3-7e", "ovs_interfaceid": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.870 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[799ca56d-3cd2-46c0-9dd6-44493ff99801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:49 np0005466031 NetworkManager[44907]: <info>  [1759411669.8751] manager: (tap858f2b6f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/399)
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.874 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e34e4f-3a5d-4433-ae78-9ac3f17eb2c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:49 np0005466031 nova_compute[235803]: 2025-10-02 13:27:49.887 2 DEBUG oslo_concurrency.lockutils [req-74d695ff-9501-409c-8000-83771987ebed req-511d21da-c17c-4fd4-9729-5f632ab7501a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-96f0e757-ffe8-40a4-8b11-c76935502853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.911 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[45dbd220-91ae-4d8e-95bb-b158adfedff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.914 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[de4f1b88-b8d2-4055-a883-a2741cb0689e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:49 np0005466031 NetworkManager[44907]: <info>  [1759411669.9371] device (tap858f2b6f-80): carrier: link connected
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.943 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[75c8c24d-5455-417b-9928-ce975b2fd490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.963 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[14ee5d94-254f-49ea-88d6-4acd95222b3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 952552, 'reachable_time': 31411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339424, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:49 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:49.984 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[9d91f925-0455-482f-a993-0ef381136cca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:29ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 952552, 'tstamp': 952552}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339425, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:50.000 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb285d5-6e57-42cd-8a98-3858295c3075]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 952552, 'reachable_time': 31411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339426, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:50.030 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[02ed260b-153b-458b-b55e-720965450199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.064 2 DEBUG nova.compute.manager [req-dfb4781e-8d70-4178-a3a9-6eeb1034ab28 req-c8778d48-be53-448a-ae80-5c7421671e59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Received event network-vif-plugged-97b9a2a3-7efa-47a4-89ba-36a75b507bed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.066 2 DEBUG oslo_concurrency.lockutils [req-dfb4781e-8d70-4178-a3a9-6eeb1034ab28 req-c8778d48-be53-448a-ae80-5c7421671e59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.066 2 DEBUG oslo_concurrency.lockutils [req-dfb4781e-8d70-4178-a3a9-6eeb1034ab28 req-c8778d48-be53-448a-ae80-5c7421671e59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.066 2 DEBUG oslo_concurrency.lockutils [req-dfb4781e-8d70-4178-a3a9-6eeb1034ab28 req-c8778d48-be53-448a-ae80-5c7421671e59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.067 2 DEBUG nova.compute.manager [req-dfb4781e-8d70-4178-a3a9-6eeb1034ab28 req-c8778d48-be53-448a-ae80-5c7421671e59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Processing event network-vif-plugged-97b9a2a3-7efa-47a4-89ba-36a75b507bed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:50.095 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[850356cf-0076-4cac-8b99-1d60ae118296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:50.097 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:50.097 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:50.098 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap858f2b6f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:50 np0005466031 NetworkManager[44907]: <info>  [1759411670.1011] manager: (tap858f2b6f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Oct  2 09:27:50 np0005466031 kernel: tap858f2b6f-80: entered promiscuous mode
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:50.105 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap858f2b6f-80, col_values=(('external_ids', {'iface-id': 'cd468d5a-0c73-498a-8776-3dc2ab63d9cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:50 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:50Z|00884|binding|INFO|Releasing lport cd468d5a-0c73-498a-8776-3dc2ab63d9cf from this chassis (sb_readonly=0)
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:50.108 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:50.120 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[17b7dab7-cd74-477e-970a-3d5e647bc727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:50.122 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:27:50 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:27:50.123 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'env', 'PROCESS_TAG=haproxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:27:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:50.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:50 np0005466031 podman[339501]: 2025-10-02 13:27:50.501639813 +0000 UTC m=+0.052734971 container create dfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:27:50 np0005466031 systemd[1]: Started libpod-conmon-dfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6.scope.
Oct  2 09:27:50 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:27:50 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4cf9d3315d402b6df4c805a8d68188983446bb5cf453f18313045759f7a98d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:27:50 np0005466031 podman[339501]: 2025-10-02 13:27:50.477290231 +0000 UTC m=+0.028385409 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:27:50 np0005466031 podman[339501]: 2025-10-02 13:27:50.573848903 +0000 UTC m=+0.124944061 container init dfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:27:50 np0005466031 podman[339501]: 2025-10-02 13:27:50.57997236 +0000 UTC m=+0.131067518 container start dfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:27:50 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[339516]: [NOTICE]   (339520) : New worker (339522) forked
Oct  2 09:27:50 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[339516]: [NOTICE]   (339520) : Loading success.
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.704 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411670.7039886, 96f0e757-ffe8-40a4-8b11-c76935502853 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.704 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] VM Started (Lifecycle Event)#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.706 2 DEBUG nova.compute.manager [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.709 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.712 2 INFO nova.virt.libvirt.driver [-] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Instance spawned successfully.#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.712 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.730 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.733 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.762 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.763 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.763 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.763 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.764 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.764 2 DEBUG nova.virt.libvirt.driver [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.769 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.769 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411670.704142, 96f0e757-ffe8-40a4-8b11-c76935502853 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.769 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.815 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.820 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411670.708559, 96f0e757-ffe8-40a4-8b11-c76935502853 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.820 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.850 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.854 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.868 2 INFO nova.compute.manager [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Took 7.89 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.868 2 DEBUG nova.compute.manager [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.880 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.934 2 INFO nova.compute.manager [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Took 10.57 seconds to build instance.#033[00m
Oct  2 09:27:50 np0005466031 nova_compute[235803]: 2025-10-02 13:27:50.973 2 DEBUG oslo_concurrency.lockutils [None req-59969cc6-6598-4919-8960-507d170ae31c 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:51.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:51 np0005466031 nova_compute[235803]: 2025-10-02 13:27:51.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:52 np0005466031 nova_compute[235803]: 2025-10-02 13:27:52.150 2 DEBUG nova.compute.manager [req-804a394c-3b6f-4e25-bc10-1a2d2beec7a4 req-6db37178-6fda-424e-ad80-25533cb5bf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Received event network-vif-plugged-97b9a2a3-7efa-47a4-89ba-36a75b507bed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:27:52 np0005466031 nova_compute[235803]: 2025-10-02 13:27:52.151 2 DEBUG oslo_concurrency.lockutils [req-804a394c-3b6f-4e25-bc10-1a2d2beec7a4 req-6db37178-6fda-424e-ad80-25533cb5bf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:52 np0005466031 nova_compute[235803]: 2025-10-02 13:27:52.151 2 DEBUG oslo_concurrency.lockutils [req-804a394c-3b6f-4e25-bc10-1a2d2beec7a4 req-6db37178-6fda-424e-ad80-25533cb5bf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:52 np0005466031 nova_compute[235803]: 2025-10-02 13:27:52.151 2 DEBUG oslo_concurrency.lockutils [req-804a394c-3b6f-4e25-bc10-1a2d2beec7a4 req-6db37178-6fda-424e-ad80-25533cb5bf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:52 np0005466031 nova_compute[235803]: 2025-10-02 13:27:52.151 2 DEBUG nova.compute.manager [req-804a394c-3b6f-4e25-bc10-1a2d2beec7a4 req-6db37178-6fda-424e-ad80-25533cb5bf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] No waiting events found dispatching network-vif-plugged-97b9a2a3-7efa-47a4-89ba-36a75b507bed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:27:52 np0005466031 nova_compute[235803]: 2025-10-02 13:27:52.152 2 WARNING nova.compute.manager [req-804a394c-3b6f-4e25-bc10-1a2d2beec7a4 req-6db37178-6fda-424e-ad80-25533cb5bf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Received unexpected event network-vif-plugged-97b9a2a3-7efa-47a4-89ba-36a75b507bed for instance with vm_state active and task_state None.#033[00m
Oct  2 09:27:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:52.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:52 np0005466031 nova_compute[235803]: 2025-10-02 13:27:52.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:52 np0005466031 nova_compute[235803]: 2025-10-02 13:27:52.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:53.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:54.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:55.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:55 np0005466031 podman[339534]: 2025-10-02 13:27:55.636554935 +0000 UTC m=+0.060432052 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible)
Oct  2 09:27:55 np0005466031 podman[339533]: 2025-10-02 13:27:55.671843012 +0000 UTC m=+0.099636002 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct  2 09:27:55 np0005466031 NetworkManager[44907]: <info>  [1759411675.9663] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Oct  2 09:27:55 np0005466031 nova_compute[235803]: 2025-10-02 13:27:55.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:55 np0005466031 NetworkManager[44907]: <info>  [1759411675.9673] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Oct  2 09:27:56 np0005466031 nova_compute[235803]: 2025-10-02 13:27:56.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:56 np0005466031 ovn_controller[132413]: 2025-10-02T13:27:56Z|00885|binding|INFO|Releasing lport cd468d5a-0c73-498a-8776-3dc2ab63d9cf from this chassis (sb_readonly=0)
Oct  2 09:27:56 np0005466031 nova_compute[235803]: 2025-10-02 13:27:56.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:56.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:56 np0005466031 nova_compute[235803]: 2025-10-02 13:27:56.435 2 DEBUG nova.compute.manager [req-609979f2-aeec-4e75-b77f-d5a91e0cbbed req-ffbd3ee6-612f-4a2b-a760-7e365bfcc04b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Received event network-changed-97b9a2a3-7efa-47a4-89ba-36a75b507bed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:27:56 np0005466031 nova_compute[235803]: 2025-10-02 13:27:56.436 2 DEBUG nova.compute.manager [req-609979f2-aeec-4e75-b77f-d5a91e0cbbed req-ffbd3ee6-612f-4a2b-a760-7e365bfcc04b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Refreshing instance network info cache due to event network-changed-97b9a2a3-7efa-47a4-89ba-36a75b507bed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:27:56 np0005466031 nova_compute[235803]: 2025-10-02 13:27:56.436 2 DEBUG oslo_concurrency.lockutils [req-609979f2-aeec-4e75-b77f-d5a91e0cbbed req-ffbd3ee6-612f-4a2b-a760-7e365bfcc04b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-96f0e757-ffe8-40a4-8b11-c76935502853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:27:56 np0005466031 nova_compute[235803]: 2025-10-02 13:27:56.437 2 DEBUG oslo_concurrency.lockutils [req-609979f2-aeec-4e75-b77f-d5a91e0cbbed req-ffbd3ee6-612f-4a2b-a760-7e365bfcc04b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-96f0e757-ffe8-40a4-8b11-c76935502853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:27:56 np0005466031 nova_compute[235803]: 2025-10-02 13:27:56.437 2 DEBUG nova.network.neutron [req-609979f2-aeec-4e75-b77f-d5a91e0cbbed req-ffbd3ee6-612f-4a2b-a760-7e365bfcc04b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Refreshing network info cache for port 97b9a2a3-7efa-47a4-89ba-36a75b507bed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:27:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:57 np0005466031 nova_compute[235803]: 2025-10-02 13:27:57.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:57.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:57 np0005466031 nova_compute[235803]: 2025-10-02 13:27:57.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:58.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:27:58 np0005466031 nova_compute[235803]: 2025-10-02 13:27:58.465 2 DEBUG nova.network.neutron [req-609979f2-aeec-4e75-b77f-d5a91e0cbbed req-ffbd3ee6-612f-4a2b-a760-7e365bfcc04b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Updated VIF entry in instance network info cache for port 97b9a2a3-7efa-47a4-89ba-36a75b507bed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:27:58 np0005466031 nova_compute[235803]: 2025-10-02 13:27:58.465 2 DEBUG nova.network.neutron [req-609979f2-aeec-4e75-b77f-d5a91e0cbbed req-ffbd3ee6-612f-4a2b-a760-7e365bfcc04b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Updating instance_info_cache with network_info: [{"id": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "address": "fa:16:3e:e4:b7:7b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97b9a2a3-7e", "ovs_interfaceid": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:27:58 np0005466031 nova_compute[235803]: 2025-10-02 13:27:58.499 2 DEBUG oslo_concurrency.lockutils [req-609979f2-aeec-4e75-b77f-d5a91e0cbbed req-ffbd3ee6-612f-4a2b-a760-7e365bfcc04b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-96f0e757-ffe8-40a4-8b11-c76935502853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:27:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:27:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:27:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:59.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:00 np0005466031 nova_compute[235803]: 2025-10-02 13:28:00.095 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:00 np0005466031 nova_compute[235803]: 2025-10-02 13:28:00.138 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Triggering sync for uuid 96f0e757-ffe8-40a4-8b11-c76935502853 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 09:28:00 np0005466031 nova_compute[235803]: 2025-10-02 13:28:00.139 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "96f0e757-ffe8-40a4-8b11-c76935502853" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:00 np0005466031 nova_compute[235803]: 2025-10-02 13:28:00.139 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "96f0e757-ffe8-40a4-8b11-c76935502853" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:00 np0005466031 nova_compute[235803]: 2025-10-02 13:28:00.168 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "96f0e757-ffe8-40a4-8b11-c76935502853" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:00.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:01.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:02.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:02 np0005466031 nova_compute[235803]: 2025-10-02 13:28:02.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:02 np0005466031 nova_compute[235803]: 2025-10-02 13:28:02.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:04.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:04 np0005466031 ovn_controller[132413]: 2025-10-02T13:28:04Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:b7:7b 10.100.0.12
Oct  2 09:28:04 np0005466031 ovn_controller[132413]: 2025-10-02T13:28:04Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:b7:7b 10.100.0.12
Oct  2 09:28:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:05.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:06.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:07 np0005466031 nova_compute[235803]: 2025-10-02 13:28:07.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:07.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:07 np0005466031 nova_compute[235803]: 2025-10-02 13:28:07.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:08.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:09.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:10.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:28:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:11.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:28:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:12.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:12 np0005466031 nova_compute[235803]: 2025-10-02 13:28:12.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:12 np0005466031 nova_compute[235803]: 2025-10-02 13:28:12.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:13.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:14.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 e423: 3 total, 3 up, 3 in
Oct  2 09:28:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:15.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:16.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:17 np0005466031 nova_compute[235803]: 2025-10-02 13:28:17.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:17.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:17 np0005466031 nova_compute[235803]: 2025-10-02 13:28:17.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:18.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:19 np0005466031 podman[339634]: 2025-10-02 13:28:19.651467708 +0000 UTC m=+0.071230794 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 09:28:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:28:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:19.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:28:19 np0005466031 podman[339635]: 2025-10-02 13:28:19.692705386 +0000 UTC m=+0.110312039 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 09:28:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:20.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:21.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:22.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:22 np0005466031 nova_compute[235803]: 2025-10-02 13:28:22.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:22 np0005466031 nova_compute[235803]: 2025-10-02 13:28:22.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:23.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:25.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:28:25.897 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:28:25.898 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:28:25.898 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:26.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:26 np0005466031 podman[339735]: 2025-10-02 13:28:26.63236531 +0000 UTC m=+0.062779660 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:28:26 np0005466031 podman[339736]: 2025-10-02 13:28:26.656384012 +0000 UTC m=+0.084034013 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  2 09:28:26 np0005466031 nova_compute[235803]: 2025-10-02 13:28:26.681 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:27 np0005466031 nova_compute[235803]: 2025-10-02 13:28:27.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:27.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:27 np0005466031 nova_compute[235803]: 2025-10-02 13:28:27.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:28.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:29.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:30.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:31.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:32.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:32 np0005466031 nova_compute[235803]: 2025-10-02 13:28:32.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:32 np0005466031 nova_compute[235803]: 2025-10-02 13:28:32.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:33.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:34.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:34 np0005466031 nova_compute[235803]: 2025-10-02 13:28:34.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:35.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:36.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:36 np0005466031 nova_compute[235803]: 2025-10-02 13:28:36.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:36 np0005466031 nova_compute[235803]: 2025-10-02 13:28:36.686 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:36 np0005466031 nova_compute[235803]: 2025-10-02 13:28:36.687 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:36 np0005466031 nova_compute[235803]: 2025-10-02 13:28:36.687 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:36 np0005466031 nova_compute[235803]: 2025-10-02 13:28:36.688 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:28:36 np0005466031 nova_compute[235803]: 2025-10-02 13:28:36.689 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:28:37 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1506722830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:28:37 np0005466031 nova_compute[235803]: 2025-10-02 13:28:37.125 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:37 np0005466031 nova_compute[235803]: 2025-10-02 13:28:37.227 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:28:37 np0005466031 nova_compute[235803]: 2025-10-02 13:28:37.228 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:28:37 np0005466031 nova_compute[235803]: 2025-10-02 13:28:37.417 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:28:37 np0005466031 nova_compute[235803]: 2025-10-02 13:28:37.419 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3881MB free_disk=20.98794174194336GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:28:37 np0005466031 nova_compute[235803]: 2025-10-02 13:28:37.419 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:37 np0005466031 nova_compute[235803]: 2025-10-02 13:28:37.419 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:37 np0005466031 nova_compute[235803]: 2025-10-02 13:28:37.538 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 96f0e757-ffe8-40a4-8b11-c76935502853 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:28:37 np0005466031 nova_compute[235803]: 2025-10-02 13:28:37.539 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:28:37 np0005466031 nova_compute[235803]: 2025-10-02 13:28:37.539 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:28:37 np0005466031 nova_compute[235803]: 2025-10-02 13:28:37.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:37 np0005466031 nova_compute[235803]: 2025-10-02 13:28:37.619 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:37.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:37 np0005466031 nova_compute[235803]: 2025-10-02 13:28:37.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:28:38 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1612691171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:28:38 np0005466031 nova_compute[235803]: 2025-10-02 13:28:38.069 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:38 np0005466031 nova_compute[235803]: 2025-10-02 13:28:38.078 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:28:38 np0005466031 nova_compute[235803]: 2025-10-02 13:28:38.113 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:28:38 np0005466031 nova_compute[235803]: 2025-10-02 13:28:38.144 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:28:38 np0005466031 nova_compute[235803]: 2025-10-02 13:28:38.144 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:38.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:39 np0005466031 nova_compute[235803]: 2025-10-02 13:28:39.147 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:39 np0005466031 nova_compute[235803]: 2025-10-02 13:28:39.147 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:39.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:40.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:28:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:41.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:28:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:42.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:28:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:28:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:28:42 np0005466031 nova_compute[235803]: 2025-10-02 13:28:42.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:43 np0005466031 nova_compute[235803]: 2025-10-02 13:28:43.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:43.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:44.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:44 np0005466031 nova_compute[235803]: 2025-10-02 13:28:44.638 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:44 np0005466031 nova_compute[235803]: 2025-10-02 13:28:44.639 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:28:44 np0005466031 nova_compute[235803]: 2025-10-02 13:28:44.639 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:28:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:45.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:45 np0005466031 nova_compute[235803]: 2025-10-02 13:28:45.997 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "refresh_cache-96f0e757-ffe8-40a4-8b11-c76935502853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:28:45 np0005466031 nova_compute[235803]: 2025-10-02 13:28:45.998 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquired lock "refresh_cache-96f0e757-ffe8-40a4-8b11-c76935502853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:28:45 np0005466031 nova_compute[235803]: 2025-10-02 13:28:45.998 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:28:45 np0005466031 nova_compute[235803]: 2025-10-02 13:28:45.999 2 DEBUG nova.objects.instance [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96f0e757-ffe8-40a4-8b11-c76935502853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:28:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:46.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:47 np0005466031 nova_compute[235803]: 2025-10-02 13:28:47.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:47.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:48 np0005466031 nova_compute[235803]: 2025-10-02 13:28:48.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:48.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:49 np0005466031 nova_compute[235803]: 2025-10-02 13:28:49.393 2 DEBUG nova.network.neutron [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Updating instance_info_cache with network_info: [{"id": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "address": "fa:16:3e:e4:b7:7b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97b9a2a3-7e", "ovs_interfaceid": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:28:49 np0005466031 nova_compute[235803]: 2025-10-02 13:28:49.454 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Releasing lock "refresh_cache-96f0e757-ffe8-40a4-8b11-c76935502853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:28:49 np0005466031 nova_compute[235803]: 2025-10-02 13:28:49.455 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:28:49 np0005466031 nova_compute[235803]: 2025-10-02 13:28:49.456 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:49 np0005466031 nova_compute[235803]: 2025-10-02 13:28:49.457 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:49 np0005466031 nova_compute[235803]: 2025-10-02 13:28:49.457 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:28:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:49.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:50.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:50 np0005466031 podman[340018]: 2025-10-02 13:28:50.658297902 +0000 UTC m=+0.084302750 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:28:50 np0005466031 podman[340019]: 2025-10-02 13:28:50.693294481 +0000 UTC m=+0.106359516 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct  2 09:28:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:51.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:52.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:52 np0005466031 nova_compute[235803]: 2025-10-02 13:28:52.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:52 np0005466031 nova_compute[235803]: 2025-10-02 13:28:52.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:53 np0005466031 nova_compute[235803]: 2025-10-02 13:28:53.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:53.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:28:53 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:28:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:28:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:54.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:28:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:55.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:56.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:57 np0005466031 nova_compute[235803]: 2025-10-02 13:28:57.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:57 np0005466031 podman[340118]: 2025-10-02 13:28:57.627636604 +0000 UTC m=+0.059711631 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:28:57 np0005466031 podman[340119]: 2025-10-02 13:28:57.639229978 +0000 UTC m=+0.064949742 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:28:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:57.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:58 np0005466031 nova_compute[235803]: 2025-10-02 13:28:58.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:58.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:28:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:59.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:00.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:01.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:02.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:02 np0005466031 nova_compute[235803]: 2025-10-02 13:29:02.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:03 np0005466031 nova_compute[235803]: 2025-10-02 13:29:03.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:03.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:04.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:05.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:06 np0005466031 ovn_controller[132413]: 2025-10-02T13:29:06Z|00886|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  2 09:29:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:06.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:07 np0005466031 nova_compute[235803]: 2025-10-02 13:29:07.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:07.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:08 np0005466031 nova_compute[235803]: 2025-10-02 13:29:08.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:08.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:08 np0005466031 nova_compute[235803]: 2025-10-02 13:29:08.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:09.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:10.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:11.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:12.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:12 np0005466031 nova_compute[235803]: 2025-10-02 13:29:12.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:13 np0005466031 nova_compute[235803]: 2025-10-02 13:29:13.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:13.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:14.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:15.151 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:29:15 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:15.152 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:29:15 np0005466031 nova_compute[235803]: 2025-10-02 13:29:15.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:15.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:16.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:17 np0005466031 nova_compute[235803]: 2025-10-02 13:29:17.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:17.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:18 np0005466031 nova_compute[235803]: 2025-10-02 13:29:18.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:18.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:19.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:20.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e424 e424: 3 total, 3 up, 3 in
Oct  2 09:29:21 np0005466031 podman[340220]: 2025-10-02 13:29:21.673762642 +0000 UTC m=+0.088570773 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 09:29:21 np0005466031 podman[340221]: 2025-10-02 13:29:21.684477421 +0000 UTC m=+0.095057060 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 09:29:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:21.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:22.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:22 np0005466031 nova_compute[235803]: 2025-10-02 13:29:22.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:23 np0005466031 nova_compute[235803]: 2025-10-02 13:29:23.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:23.154 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:29:23 np0005466031 nova_compute[235803]: 2025-10-02 13:29:23.629 2 DEBUG oslo_concurrency.lockutils [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "96f0e757-ffe8-40a4-8b11-c76935502853" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:23 np0005466031 nova_compute[235803]: 2025-10-02 13:29:23.630 2 DEBUG oslo_concurrency.lockutils [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:23 np0005466031 nova_compute[235803]: 2025-10-02 13:29:23.631 2 DEBUG oslo_concurrency.lockutils [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:23 np0005466031 nova_compute[235803]: 2025-10-02 13:29:23.631 2 DEBUG oslo_concurrency.lockutils [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:23 np0005466031 nova_compute[235803]: 2025-10-02 13:29:23.631 2 DEBUG oslo_concurrency.lockutils [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:23 np0005466031 nova_compute[235803]: 2025-10-02 13:29:23.632 2 INFO nova.compute.manager [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Terminating instance#033[00m
Oct  2 09:29:23 np0005466031 nova_compute[235803]: 2025-10-02 13:29:23.634 2 DEBUG nova.compute.manager [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:29:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:23.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:24 np0005466031 kernel: tap97b9a2a3-7e (unregistering): left promiscuous mode
Oct  2 09:29:24 np0005466031 NetworkManager[44907]: <info>  [1759411764.1318] device (tap97b9a2a3-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:24 np0005466031 ovn_controller[132413]: 2025-10-02T13:29:24Z|00887|binding|INFO|Releasing lport 97b9a2a3-7efa-47a4-89ba-36a75b507bed from this chassis (sb_readonly=0)
Oct  2 09:29:24 np0005466031 ovn_controller[132413]: 2025-10-02T13:29:24Z|00888|binding|INFO|Setting lport 97b9a2a3-7efa-47a4-89ba-36a75b507bed down in Southbound
Oct  2 09:29:24 np0005466031 ovn_controller[132413]: 2025-10-02T13:29:24Z|00889|binding|INFO|Removing iface tap97b9a2a3-7e ovn-installed in OVS
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.153 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:b7:7b 10.100.0.12'], port_security=['fa:16:3e:e4:b7:7b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '96f0e757-ffe8-40a4-8b11-c76935502853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '79380880-e1a5-49f8-b4ef-1ce955cdc492', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=97b9a2a3-7efa-47a4-89ba-36a75b507bed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.154 141898 INFO neutron.agent.ovn.metadata.agent [-] Port 97b9a2a3-7efa-47a4-89ba-36a75b507bed in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 unbound from our chassis#033[00m
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.155 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.157 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[00d8b4e3-b2eb-4fe4-be36-d4040160ffcf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.157 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace which is not needed anymore#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:24 np0005466031 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000dc.scope: Deactivated successfully.
Oct  2 09:29:24 np0005466031 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000dc.scope: Consumed 17.480s CPU time.
Oct  2 09:29:24 np0005466031 systemd-machined[192227]: Machine qemu-101-instance-000000dc terminated.
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.276 2 INFO nova.virt.libvirt.driver [-] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Instance destroyed successfully.#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.277 2 DEBUG nova.objects.instance [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'resources' on Instance uuid 96f0e757-ffe8-40a4-8b11-c76935502853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.291 2 DEBUG nova.virt.libvirt.vif [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:27:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1976996844',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1976996844',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1976996844',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZmGAY4fnhaxJRnxC4yZp5UpthlRPS9W2SbS9dXHUqaI9PZ13McahuXwzVVJSqQ74HrXtoeqlH7uydccFaGXQlf+BRVE6E5/AZu2Lfz7DPRtw+PxfPWe0ejPDhgqe2u5A==',key_name='tempest-keypair-8645022',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:27:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-4z8hxgpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:27:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=96f0e757-ffe8-40a4-8b11-c76935502853,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "address": "fa:16:3e:e4:b7:7b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97b9a2a3-7e", "ovs_interfaceid": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.292 2 DEBUG nova.network.os_vif_util [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "address": "fa:16:3e:e4:b7:7b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97b9a2a3-7e", "ovs_interfaceid": "97b9a2a3-7efa-47a4-89ba-36a75b507bed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.293 2 DEBUG nova.network.os_vif_util [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:b7:7b,bridge_name='br-int',has_traffic_filtering=True,id=97b9a2a3-7efa-47a4-89ba-36a75b507bed,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97b9a2a3-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.293 2 DEBUG os_vif [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:b7:7b,bridge_name='br-int',has_traffic_filtering=True,id=97b9a2a3-7efa-47a4-89ba-36a75b507bed,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97b9a2a3-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97b9a2a3-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.303 2 INFO os_vif [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:b7:7b,bridge_name='br-int',has_traffic_filtering=True,id=97b9a2a3-7efa-47a4-89ba-36a75b507bed,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97b9a2a3-7e')#033[00m
Oct  2 09:29:24 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[339516]: [NOTICE]   (339520) : haproxy version is 2.8.14-c23fe91
Oct  2 09:29:24 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[339516]: [NOTICE]   (339520) : path to executable is /usr/sbin/haproxy
Oct  2 09:29:24 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[339516]: [WARNING]  (339520) : Exiting Master process...
Oct  2 09:29:24 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[339516]: [ALERT]    (339520) : Current worker (339522) exited with code 143 (Terminated)
Oct  2 09:29:24 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[339516]: [WARNING]  (339520) : All workers exited. Exiting... (0)
Oct  2 09:29:24 np0005466031 systemd[1]: libpod-dfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6.scope: Deactivated successfully.
Oct  2 09:29:24 np0005466031 podman[340343]: 2025-10-02 13:29:24.318804428 +0000 UTC m=+0.048213740 container died dfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 09:29:24 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6-userdata-shm.mount: Deactivated successfully.
Oct  2 09:29:24 np0005466031 systemd[1]: var-lib-containers-storage-overlay-a4cf9d3315d402b6df4c805a8d68188983446bb5cf453f18313045759f7a98d4-merged.mount: Deactivated successfully.
Oct  2 09:29:24 np0005466031 podman[340343]: 2025-10-02 13:29:24.367845291 +0000 UTC m=+0.097254613 container cleanup dfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 09:29:24 np0005466031 systemd[1]: libpod-conmon-dfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6.scope: Deactivated successfully.
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.393 2 DEBUG nova.compute.manager [req-77dee495-9576-485e-b360-802e616c9ad8 req-b2f4259b-bf2a-468a-bf4e-996597eb3948 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Received event network-vif-unplugged-97b9a2a3-7efa-47a4-89ba-36a75b507bed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.394 2 DEBUG oslo_concurrency.lockutils [req-77dee495-9576-485e-b360-802e616c9ad8 req-b2f4259b-bf2a-468a-bf4e-996597eb3948 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.394 2 DEBUG oslo_concurrency.lockutils [req-77dee495-9576-485e-b360-802e616c9ad8 req-b2f4259b-bf2a-468a-bf4e-996597eb3948 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.394 2 DEBUG oslo_concurrency.lockutils [req-77dee495-9576-485e-b360-802e616c9ad8 req-b2f4259b-bf2a-468a-bf4e-996597eb3948 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.394 2 DEBUG nova.compute.manager [req-77dee495-9576-485e-b360-802e616c9ad8 req-b2f4259b-bf2a-468a-bf4e-996597eb3948 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] No waiting events found dispatching network-vif-unplugged-97b9a2a3-7efa-47a4-89ba-36a75b507bed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.394 2 DEBUG nova.compute.manager [req-77dee495-9576-485e-b360-802e616c9ad8 req-b2f4259b-bf2a-468a-bf4e-996597eb3948 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Received event network-vif-unplugged-97b9a2a3-7efa-47a4-89ba-36a75b507bed for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:29:24 np0005466031 podman[340397]: 2025-10-02 13:29:24.439455005 +0000 UTC m=+0.045561244 container remove dfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.445 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[bc04bf7f-3509-455a-89bc-de4c3f91aca4]: (4, ('Thu Oct  2 01:29:24 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (dfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6)\ndfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6\nThu Oct  2 01:29:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (dfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6)\ndfb9a39885eba99938c1e42b133ebcea8c7891d6cb92bc493c3ce2875a0ce6d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.448 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[db0d6836-08b2-41b6-8b70-3e3d4164221d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.449 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:24 np0005466031 kernel: tap858f2b6f-80: left promiscuous mode
Oct  2 09:29:24 np0005466031 nova_compute[235803]: 2025-10-02 13:29:24.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.465 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bc7f11-a8f5-441f-b719-fac34b2b5a13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.491 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0d53c2-30c5-4572-a92d-3a2b51b893e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.493 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2afae1-572c-40db-b55f-332d835506b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.515 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[557a943c-4145-4e8a-a302-94a052d0a482]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 952545, 'reachable_time': 40039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340412, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:24 np0005466031 systemd[1]: run-netns-ovnmeta\x2d858f2b6f\x2d8fe4\x2d471b\x2d981e\x2d5d0b08d2f4c5.mount: Deactivated successfully.
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.522 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:29:24 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:24.522 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[af1c7d82-0ad6-4fd5-b9dc-ae3786694bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:24.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:25.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:25.898 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:25.899 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:29:25.899 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:26.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:26 np0005466031 nova_compute[235803]: 2025-10-02 13:29:26.750 2 DEBUG nova.compute.manager [req-eb4cf292-e747-4145-bb47-4fe57732c41f req-94716a09-e75e-488f-849d-403baac79575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Received event network-vif-plugged-97b9a2a3-7efa-47a4-89ba-36a75b507bed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:29:26 np0005466031 nova_compute[235803]: 2025-10-02 13:29:26.751 2 DEBUG oslo_concurrency.lockutils [req-eb4cf292-e747-4145-bb47-4fe57732c41f req-94716a09-e75e-488f-849d-403baac79575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:26 np0005466031 nova_compute[235803]: 2025-10-02 13:29:26.751 2 DEBUG oslo_concurrency.lockutils [req-eb4cf292-e747-4145-bb47-4fe57732c41f req-94716a09-e75e-488f-849d-403baac79575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:26 np0005466031 nova_compute[235803]: 2025-10-02 13:29:26.751 2 DEBUG oslo_concurrency.lockutils [req-eb4cf292-e747-4145-bb47-4fe57732c41f req-94716a09-e75e-488f-849d-403baac79575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:26 np0005466031 nova_compute[235803]: 2025-10-02 13:29:26.751 2 DEBUG nova.compute.manager [req-eb4cf292-e747-4145-bb47-4fe57732c41f req-94716a09-e75e-488f-849d-403baac79575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] No waiting events found dispatching network-vif-plugged-97b9a2a3-7efa-47a4-89ba-36a75b507bed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:29:26 np0005466031 nova_compute[235803]: 2025-10-02 13:29:26.751 2 WARNING nova.compute.manager [req-eb4cf292-e747-4145-bb47-4fe57732c41f req-94716a09-e75e-488f-849d-403baac79575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Received unexpected event network-vif-plugged-97b9a2a3-7efa-47a4-89ba-36a75b507bed for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:29:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:26 np0005466031 nova_compute[235803]: 2025-10-02 13:29:26.987 2 INFO nova.virt.libvirt.driver [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Deleting instance files /var/lib/nova/instances/96f0e757-ffe8-40a4-8b11-c76935502853_del#033[00m
Oct  2 09:29:26 np0005466031 nova_compute[235803]: 2025-10-02 13:29:26.988 2 INFO nova.virt.libvirt.driver [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Deletion of /var/lib/nova/instances/96f0e757-ffe8-40a4-8b11-c76935502853_del complete#033[00m
Oct  2 09:29:27 np0005466031 nova_compute[235803]: 2025-10-02 13:29:27.052 2 INFO nova.compute.manager [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Took 3.42 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:29:27 np0005466031 nova_compute[235803]: 2025-10-02 13:29:27.053 2 DEBUG oslo.service.loopingcall [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:29:27 np0005466031 nova_compute[235803]: 2025-10-02 13:29:27.053 2 DEBUG nova.compute.manager [-] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:29:27 np0005466031 nova_compute[235803]: 2025-10-02 13:29:27.054 2 DEBUG nova.network.neutron [-] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:29:27 np0005466031 nova_compute[235803]: 2025-10-02 13:29:27.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:27 np0005466031 nova_compute[235803]: 2025-10-02 13:29:27.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e425 e425: 3 total, 3 up, 3 in
Oct  2 09:29:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:27.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:28.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:28 np0005466031 podman[340417]: 2025-10-02 13:29:28.645526643 +0000 UTC m=+0.066617040 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:29:28 np0005466031 podman[340416]: 2025-10-02 13:29:28.645428781 +0000 UTC m=+0.072412538 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:29:29 np0005466031 nova_compute[235803]: 2025-10-02 13:29:29.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:29 np0005466031 nova_compute[235803]: 2025-10-02 13:29:29.475 2 DEBUG nova.network.neutron [-] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:29:29 np0005466031 nova_compute[235803]: 2025-10-02 13:29:29.500 2 INFO nova.compute.manager [-] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Took 2.45 seconds to deallocate network for instance.#033[00m
Oct  2 09:29:29 np0005466031 nova_compute[235803]: 2025-10-02 13:29:29.587 2 DEBUG nova.compute.manager [req-40d3f4d5-f823-48b4-bacc-8df7378d6b6f req-1d95fe59-1eeb-487a-9b17-1f07bd1d3884 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Received event network-vif-deleted-97b9a2a3-7efa-47a4-89ba-36a75b507bed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:29:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:29.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:29 np0005466031 nova_compute[235803]: 2025-10-02 13:29:29.790 2 INFO nova.compute.manager [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Took 0.29 seconds to detach 1 volumes for instance.#033[00m
Oct  2 09:29:29 np0005466031 nova_compute[235803]: 2025-10-02 13:29:29.793 2 DEBUG nova.compute.manager [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Deleting volume: ffd00fca-313e-46cc-b503-d377c1529a56 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct  2 09:29:29 np0005466031 nova_compute[235803]: 2025-10-02 13:29:29.971 2 DEBUG oslo_concurrency.lockutils [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:29 np0005466031 nova_compute[235803]: 2025-10-02 13:29:29.972 2 DEBUG oslo_concurrency.lockutils [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:30 np0005466031 nova_compute[235803]: 2025-10-02 13:29:30.029 2 DEBUG oslo_concurrency.processutils [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:29:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:29:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1239937293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:29:30 np0005466031 nova_compute[235803]: 2025-10-02 13:29:30.496 2 DEBUG oslo_concurrency.processutils [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:29:30 np0005466031 nova_compute[235803]: 2025-10-02 13:29:30.506 2 DEBUG nova.compute.provider_tree [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:29:30 np0005466031 nova_compute[235803]: 2025-10-02 13:29:30.560 2 DEBUG nova.scheduler.client.report [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:29:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:30.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:30 np0005466031 nova_compute[235803]: 2025-10-02 13:29:30.752 2 DEBUG oslo_concurrency.lockutils [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:30 np0005466031 nova_compute[235803]: 2025-10-02 13:29:30.975 2 INFO nova.scheduler.client.report [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Deleted allocations for instance 96f0e757-ffe8-40a4-8b11-c76935502853#033[00m
Oct  2 09:29:31 np0005466031 nova_compute[235803]: 2025-10-02 13:29:31.587 2 DEBUG oslo_concurrency.lockutils [None req-9c8dccdb-51b9-4bbe-b6cd-c0a0a2fdba16 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "96f0e757-ffe8-40a4-8b11-c76935502853" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:31.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:32 np0005466031 nova_compute[235803]: 2025-10-02 13:29:32.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:32.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 09:29:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:33.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 09:29:34 np0005466031 nova_compute[235803]: 2025-10-02 13:29:34.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:34.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:35 np0005466031 nova_compute[235803]: 2025-10-02 13:29:35.634 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:35.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:29:36 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155816079' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:29:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:29:36 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155816079' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:29:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:36.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:36 np0005466031 nova_compute[235803]: 2025-10-02 13:29:36.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:36 np0005466031 nova_compute[235803]: 2025-10-02 13:29:36.658 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:36 np0005466031 nova_compute[235803]: 2025-10-02 13:29:36.658 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:36 np0005466031 nova_compute[235803]: 2025-10-02 13:29:36.658 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:36 np0005466031 nova_compute[235803]: 2025-10-02 13:29:36.658 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:29:36 np0005466031 nova_compute[235803]: 2025-10-02 13:29:36.659 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:29:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:29:37 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1826604766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:29:37 np0005466031 nova_compute[235803]: 2025-10-02 13:29:37.083 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:29:37 np0005466031 nova_compute[235803]: 2025-10-02 13:29:37.280 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:29:37 np0005466031 nova_compute[235803]: 2025-10-02 13:29:37.281 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4102MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:29:37 np0005466031 nova_compute[235803]: 2025-10-02 13:29:37.281 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:37 np0005466031 nova_compute[235803]: 2025-10-02 13:29:37.281 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:37 np0005466031 nova_compute[235803]: 2025-10-02 13:29:37.336 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:29:37 np0005466031 nova_compute[235803]: 2025-10-02 13:29:37.337 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:29:37 np0005466031 nova_compute[235803]: 2025-10-02 13:29:37.403 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:29:37 np0005466031 nova_compute[235803]: 2025-10-02 13:29:37.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:37.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:29:37 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/112464615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:29:37 np0005466031 nova_compute[235803]: 2025-10-02 13:29:37.924 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:29:37 np0005466031 nova_compute[235803]: 2025-10-02 13:29:37.932 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:29:37 np0005466031 nova_compute[235803]: 2025-10-02 13:29:37.954 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:29:38 np0005466031 nova_compute[235803]: 2025-10-02 13:29:37.999 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:29:38 np0005466031 nova_compute[235803]: 2025-10-02 13:29:38.000 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:38.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:39 np0005466031 nova_compute[235803]: 2025-10-02 13:29:39.274 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411764.2727787, 96f0e757-ffe8-40a4-8b11-c76935502853 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:29:39 np0005466031 nova_compute[235803]: 2025-10-02 13:29:39.275 2 INFO nova.compute.manager [-] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:29:39 np0005466031 nova_compute[235803]: 2025-10-02 13:29:39.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:39 np0005466031 nova_compute[235803]: 2025-10-02 13:29:39.320 2 DEBUG nova.compute.manager [None req-615f45bc-e5a2-45b4-88d9-076f22577be3 - - - - - -] [instance: 96f0e757-ffe8-40a4-8b11-c76935502853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:29:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:39.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:40 np0005466031 nova_compute[235803]: 2025-10-02 13:29:40.001 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:40 np0005466031 nova_compute[235803]: 2025-10-02 13:29:40.004 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e426 e426: 3 total, 3 up, 3 in
Oct  2 09:29:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:40.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:41.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:42 np0005466031 nova_compute[235803]: 2025-10-02 13:29:42.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:42.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:43.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:44 np0005466031 nova_compute[235803]: 2025-10-02 13:29:44.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:44.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:44 np0005466031 nova_compute[235803]: 2025-10-02 13:29:44.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:45 np0005466031 nova_compute[235803]: 2025-10-02 13:29:45.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:45 np0005466031 nova_compute[235803]: 2025-10-02 13:29:45.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:29:45 np0005466031 nova_compute[235803]: 2025-10-02 13:29:45.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:29:45 np0005466031 nova_compute[235803]: 2025-10-02 13:29:45.680 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:29:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:45.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:46.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:47 np0005466031 nova_compute[235803]: 2025-10-02 13:29:47.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:47 np0005466031 nova_compute[235803]: 2025-10-02 13:29:47.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:47 np0005466031 nova_compute[235803]: 2025-10-02 13:29:47.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:29:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:47.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 e427: 3 total, 3 up, 3 in
Oct  2 09:29:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:48.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:49 np0005466031 nova_compute[235803]: 2025-10-02 13:29:49.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:49.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:50.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:51.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:52 np0005466031 nova_compute[235803]: 2025-10-02 13:29:52.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:29:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:52.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:29:52 np0005466031 podman[340586]: 2025-10-02 13:29:52.61638101 +0000 UTC m=+0.049991182 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:29:52 np0005466031 podman[340587]: 2025-10-02 13:29:52.652409028 +0000 UTC m=+0.084009732 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 09:29:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:53.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:54 np0005466031 nova_compute[235803]: 2025-10-02 13:29:54.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:29:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:29:54 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:29:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:54.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:54 np0005466031 nova_compute[235803]: 2025-10-02 13:29:54.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:29:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:55.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:29:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:56.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:57 np0005466031 nova_compute[235803]: 2025-10-02 13:29:57.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:57.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:58.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:59 np0005466031 nova_compute[235803]: 2025-10-02 13:29:59.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:59 np0005466031 ovn_controller[132413]: 2025-10-02T13:29:59Z|00890|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct  2 09:29:59 np0005466031 podman[340767]: 2025-10-02 13:29:59.679568594 +0000 UTC m=+0.096401769 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct  2 09:29:59 np0005466031 podman[340768]: 2025-10-02 13:29:59.688044388 +0000 UTC m=+0.105820400 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid)
Oct  2 09:29:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:29:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:59.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:00.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:01 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 09:30:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:30:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:30:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:01.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:02 np0005466031 nova_compute[235803]: 2025-10-02 13:30:02.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:30:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:02.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:30:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:03.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:04 np0005466031 nova_compute[235803]: 2025-10-02 13:30:04.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:04.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:30:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1085208703' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:30:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:30:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1085208703' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:30:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 09:30:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:05.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 09:30:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:06.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:07 np0005466031 nova_compute[235803]: 2025-10-02 13:30:07.025 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "61df455a-66c4-4f77-86c9-edad9954045d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:07 np0005466031 nova_compute[235803]: 2025-10-02 13:30:07.026 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:07 np0005466031 nova_compute[235803]: 2025-10-02 13:30:07.069 2 DEBUG nova.compute.manager [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:30:07 np0005466031 nova_compute[235803]: 2025-10-02 13:30:07.235 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:07 np0005466031 nova_compute[235803]: 2025-10-02 13:30:07.235 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:07 np0005466031 nova_compute[235803]: 2025-10-02 13:30:07.245 2 DEBUG nova.virt.hardware [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:30:07 np0005466031 nova_compute[235803]: 2025-10-02 13:30:07.245 2 INFO nova.compute.claims [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Claim successful on node compute-2.ctlplane.example.com#033[00m
Oct  2 09:30:07 np0005466031 nova_compute[235803]: 2025-10-02 13:30:07.443 2 DEBUG oslo_concurrency.processutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:07 np0005466031 nova_compute[235803]: 2025-10-02 13:30:07.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:07.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:30:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3192368789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:30:07 np0005466031 nova_compute[235803]: 2025-10-02 13:30:07.918 2 DEBUG oslo_concurrency.processutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:07 np0005466031 nova_compute[235803]: 2025-10-02 13:30:07.926 2 DEBUG nova.compute.provider_tree [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:30:08 np0005466031 nova_compute[235803]: 2025-10-02 13:30:08.047 2 DEBUG nova.scheduler.client.report [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:30:08 np0005466031 nova_compute[235803]: 2025-10-02 13:30:08.259 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:08 np0005466031 nova_compute[235803]: 2025-10-02 13:30:08.260 2 DEBUG nova.compute.manager [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:30:08 np0005466031 nova_compute[235803]: 2025-10-02 13:30:08.601 2 DEBUG nova.compute.manager [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:30:08 np0005466031 nova_compute[235803]: 2025-10-02 13:30:08.602 2 DEBUG nova.network.neutron [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:30:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:08.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:08 np0005466031 nova_compute[235803]: 2025-10-02 13:30:08.729 2 INFO nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:30:08 np0005466031 nova_compute[235803]: 2025-10-02 13:30:08.891 2 DEBUG nova.compute.manager [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.075 2 DEBUG nova.policy [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cb47684d0b34c729e9611e7b3943bed', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '18799a1c93354809911705bb424e673f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.301 2 INFO nova.virt.block_device [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Booting with volume c345d50c-19ab-4a23-a4a8-f7c734528d26 at /dev/vda#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.406 2 DEBUG os_brick.utils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.407 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.418 2888 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.418 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[731e91a2-ba5f-4736-a2c6-4f996bbd164c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.420 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.427 2888 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.427 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[03f7222a-c346-4348-811e-5f9c8300f047]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d79ae5a31735', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.430 2888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.438 2888 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.438 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[79580ebe-d1e3-43b9-9fee-e5e28854313f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.441 2888 DEBUG oslo.privsep.daemon [-] privsep: reply[0c14d2c5-c2d9-4e6a-89cf-ab76d143bb8e]: (4, '91df6c8e-6fe2-49d2-9991-360b14608f11') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.442 2 DEBUG oslo_concurrency.processutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.481 2 DEBUG oslo_concurrency.processutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.483 2 DEBUG os_brick.initiator.connectors.lightos [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.485 2 DEBUG os_brick.initiator.connectors.lightos [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.485 2 DEBUG os_brick.initiator.connectors.lightos [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.486 2 DEBUG os_brick.utils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d79ae5a31735', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '91df6c8e-6fe2-49d2-9991-360b14608f11', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:30:09 np0005466031 nova_compute[235803]: 2025-10-02 13:30:09.486 2 DEBUG nova.virt.block_device [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Updating existing volume attachment record: 35d5caec-046e-4321-a915-26dc0ab0740a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:30:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:09.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:10.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:11.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:12 np0005466031 nova_compute[235803]: 2025-10-02 13:30:12.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:12.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.312673) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813312743, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1886, "num_deletes": 257, "total_data_size": 4384541, "memory_usage": 4458144, "flush_reason": "Manual Compaction"}
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813328514, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 2871132, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84684, "largest_seqno": 86565, "table_properties": {"data_size": 2863316, "index_size": 4693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16490, "raw_average_key_size": 20, "raw_value_size": 2847513, "raw_average_value_size": 3481, "num_data_blocks": 206, "num_entries": 818, "num_filter_entries": 818, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411651, "oldest_key_time": 1759411651, "file_creation_time": 1759411813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 15925 microseconds, and 7910 cpu microseconds.
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.328605) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 2871132 bytes OK
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.328631) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.329940) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.329957) EVENT_LOG_v1 {"time_micros": 1759411813329952, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.329977) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 4376002, prev total WAL file size 4376002, number of live WAL files 2.
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.331331) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323730' seq:72057594037927935, type:22 .. '6C6F676D0033353231' seq:0, type:0; will stop at (end)
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(2803KB)], [174(10MB)]
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813331357, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 14378622, "oldest_snapshot_seqno": -1}
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 10654 keys, 14240915 bytes, temperature: kUnknown
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813420121, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 14240915, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14171451, "index_size": 41703, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26693, "raw_key_size": 281082, "raw_average_key_size": 26, "raw_value_size": 13984591, "raw_average_value_size": 1312, "num_data_blocks": 1592, "num_entries": 10654, "num_filter_entries": 10654, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759411813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.420351) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 14240915 bytes
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.421437) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.9 rd, 160.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.0 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(10.0) write-amplify(5.0) OK, records in: 11185, records dropped: 531 output_compression: NoCompression
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.421452) EVENT_LOG_v1 {"time_micros": 1759411813421445, "job": 112, "event": "compaction_finished", "compaction_time_micros": 88839, "compaction_time_cpu_micros": 30745, "output_level": 6, "num_output_files": 1, "total_output_size": 14240915, "num_input_records": 11185, "num_output_records": 10654, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813422155, "job": 112, "event": "table_file_deletion", "file_number": 176}
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813424260, "job": 112, "event": "table_file_deletion", "file_number": 174}
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.331256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.424335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.424341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.424345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.424347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:13 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:30:13.424350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:13.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:13 np0005466031 nova_compute[235803]: 2025-10-02 13:30:13.874 2 DEBUG nova.compute.manager [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:30:13 np0005466031 nova_compute[235803]: 2025-10-02 13:30:13.877 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:30:13 np0005466031 nova_compute[235803]: 2025-10-02 13:30:13.877 2 INFO nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Creating image(s)#033[00m
Oct  2 09:30:13 np0005466031 nova_compute[235803]: 2025-10-02 13:30:13.878 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 09:30:13 np0005466031 nova_compute[235803]: 2025-10-02 13:30:13.878 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Ensure instance console log exists: /var/lib/nova/instances/61df455a-66c4-4f77-86c9-edad9954045d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:30:13 np0005466031 nova_compute[235803]: 2025-10-02 13:30:13.879 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:13 np0005466031 nova_compute[235803]: 2025-10-02 13:30:13.879 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:13 np0005466031 nova_compute[235803]: 2025-10-02 13:30:13.880 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:14 np0005466031 nova_compute[235803]: 2025-10-02 13:30:14.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:14.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:15 np0005466031 nova_compute[235803]: 2025-10-02 13:30:15.551 2 DEBUG nova.network.neutron [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Successfully created port: bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:30:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:15.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:16.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:17 np0005466031 nova_compute[235803]: 2025-10-02 13:30:17.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:17.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:18 np0005466031 nova_compute[235803]: 2025-10-02 13:30:18.121 2 DEBUG nova.network.neutron [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Successfully updated port: bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:30:18 np0005466031 nova_compute[235803]: 2025-10-02 13:30:18.167 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "refresh_cache-61df455a-66c4-4f77-86c9-edad9954045d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:30:18 np0005466031 nova_compute[235803]: 2025-10-02 13:30:18.168 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquired lock "refresh_cache-61df455a-66c4-4f77-86c9-edad9954045d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:30:18 np0005466031 nova_compute[235803]: 2025-10-02 13:30:18.168 2 DEBUG nova.network.neutron [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:30:18 np0005466031 nova_compute[235803]: 2025-10-02 13:30:18.238 2 DEBUG nova.compute.manager [req-dee98a96-f9d3-40e9-9a26-5ea4c3414b9d req-62fa1af6-9548-428e-bc03-9cd6deff9b01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Received event network-changed-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:30:18 np0005466031 nova_compute[235803]: 2025-10-02 13:30:18.238 2 DEBUG nova.compute.manager [req-dee98a96-f9d3-40e9-9a26-5ea4c3414b9d req-62fa1af6-9548-428e-bc03-9cd6deff9b01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Refreshing instance network info cache due to event network-changed-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:30:18 np0005466031 nova_compute[235803]: 2025-10-02 13:30:18.239 2 DEBUG oslo_concurrency.lockutils [req-dee98a96-f9d3-40e9-9a26-5ea4c3414b9d req-62fa1af6-9548-428e-bc03-9cd6deff9b01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-61df455a-66c4-4f77-86c9-edad9954045d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:30:18 np0005466031 nova_compute[235803]: 2025-10-02 13:30:18.303 2 DEBUG nova.network.neutron [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:30:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:18.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.068 2 DEBUG nova.network.neutron [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Updating instance_info_cache with network_info: [{"id": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "address": "fa:16:3e:4c:5f:f7", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf2e48cc-75", "ovs_interfaceid": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.135 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Releasing lock "refresh_cache-61df455a-66c4-4f77-86c9-edad9954045d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.135 2 DEBUG nova.compute.manager [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Instance network_info: |[{"id": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "address": "fa:16:3e:4c:5f:f7", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf2e48cc-75", "ovs_interfaceid": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.136 2 DEBUG oslo_concurrency.lockutils [req-dee98a96-f9d3-40e9-9a26-5ea4c3414b9d req-62fa1af6-9548-428e-bc03-9cd6deff9b01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-61df455a-66c4-4f77-86c9-edad9954045d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.136 2 DEBUG nova.network.neutron [req-dee98a96-f9d3-40e9-9a26-5ea4c3414b9d req-62fa1af6-9548-428e-bc03-9cd6deff9b01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Refreshing network info cache for port bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.139 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Start _get_guest_xml network_info=[{"id": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "address": "fa:16:3e:4c:5f:f7", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf2e48cc-75", "ovs_interfaceid": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c345d50c-19ab-4a23-a4a8-f7c734528d26', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c345d50c-19ab-4a23-a4a8-f7c734528d26', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '61df455a-66c4-4f77-86c9-edad9954045d', 'attached_at': '', 'detached_at': '', 'volume_id': 'c345d50c-19ab-4a23-a4a8-f7c734528d26', 'serial': 'c345d50c-19ab-4a23-a4a8-f7c734528d26'}, 'attachment_id': '35d5caec-046e-4321-a915-26dc0ab0740a', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.144 2 WARNING nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.150 2 DEBUG nova.virt.libvirt.host [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.150 2 DEBUG nova.virt.libvirt.host [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.153 2 DEBUG nova.virt.libvirt.host [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.154 2 DEBUG nova.virt.libvirt.host [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.155 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.155 2 DEBUG nova.virt.hardware [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.156 2 DEBUG nova.virt.hardware [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.156 2 DEBUG nova.virt.hardware [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.156 2 DEBUG nova.virt.hardware [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.156 2 DEBUG nova.virt.hardware [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.156 2 DEBUG nova.virt.hardware [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.157 2 DEBUG nova.virt.hardware [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.157 2 DEBUG nova.virt.hardware [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.157 2 DEBUG nova.virt.hardware [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.157 2 DEBUG nova.virt.hardware [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.158 2 DEBUG nova.virt.hardware [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.189 2 DEBUG nova.storage.rbd_utils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 61df455a-66c4-4f77-86c9-edad9954045d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.193 2 DEBUG oslo_concurrency.processutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:19 np0005466031 nova_compute[235803]: 2025-10-02 13:30:19.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:30:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3095014498' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:30:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:19.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.010 2 DEBUG oslo_concurrency.processutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.816s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.082 2 DEBUG nova.virt.libvirt.vif [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:30:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1709611098',display_name='tempest-TestVolumeBootPattern-server-1709611098',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1709611098',id=222,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXlZ173v52AK5bvxZSCswZD+xa0FluYk6PRSfhpRbnZm8bOdlvZU5KBRnl3O9hs6ON23ziU7Z/FpjnMU4tf7Jp1qDf229EeHe6BdU98WhCvbuPXicABUQh5j2lZgRmPLw==',key_name='tempest-TestVolumeBootPattern-1422258886',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-plxlwool',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:30:09Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=61df455a-66c4-4f77-86c9-edad9954045d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "address": "fa:16:3e:4c:5f:f7", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf2e48cc-75", "ovs_interfaceid": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.083 2 DEBUG nova.network.os_vif_util [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "address": "fa:16:3e:4c:5f:f7", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf2e48cc-75", "ovs_interfaceid": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.084 2 DEBUG nova.network.os_vif_util [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:5f:f7,bridge_name='br-int',has_traffic_filtering=True,id=bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf2e48cc-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.085 2 DEBUG nova.objects.instance [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'pci_devices' on Instance uuid 61df455a-66c4-4f77-86c9-edad9954045d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.165 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  <uuid>61df455a-66c4-4f77-86c9-edad9954045d</uuid>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  <name>instance-000000de</name>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  <memory>131072</memory>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  <vcpu>1</vcpu>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  <metadata>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <nova:name>tempest-TestVolumeBootPattern-server-1709611098</nova:name>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <nova:creationTime>2025-10-02 13:30:19</nova:creationTime>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <nova:flavor name="m1.nano">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <nova:memory>128</nova:memory>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <nova:disk>1</nova:disk>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <nova:swap>0</nova:swap>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      </nova:flavor>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <nova:owner>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <nova:user uuid="2cb47684d0b34c729e9611e7b3943bed">tempest-TestVolumeBootPattern-1344814684-project-member</nova:user>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <nova:project uuid="18799a1c93354809911705bb424e673f">tempest-TestVolumeBootPattern-1344814684</nova:project>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      </nova:owner>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <nova:ports>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <nova:port uuid="bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        </nova:port>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      </nova:ports>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    </nova:instance>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  </metadata>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  <sysinfo type="smbios">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <system>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <entry name="serial">61df455a-66c4-4f77-86c9-edad9954045d</entry>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <entry name="uuid">61df455a-66c4-4f77-86c9-edad9954045d</entry>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    </system>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  </sysinfo>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  <os>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <boot dev="hd"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <smbios mode="sysinfo"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  </os>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  <features>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <acpi/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <apic/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <vmcoreinfo/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  </features>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  <clock offset="utc">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <timer name="hpet" present="no"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  </clock>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  <cpu mode="custom" match="exact">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <model>Nehalem</model>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  </cpu>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  <devices>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <disk type="network" device="cdrom">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <driver type="raw" cache="none"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="vms/61df455a-66c4-4f77-86c9-edad9954045d_disk.config">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <target dev="sda" bus="sata"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <disk type="network" device="disk">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <source protocol="rbd" name="volumes/volume-c345d50c-19ab-4a23-a4a8-f7c734528d26">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      </source>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <auth username="openstack">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      </auth>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <target dev="vda" bus="virtio"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <serial>c345d50c-19ab-4a23-a4a8-f7c734528d26</serial>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    </disk>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <interface type="ethernet">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <mac address="fa:16:3e:4c:5f:f7"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <mtu size="1442"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <target dev="tapbf2e48cc-75"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    </interface>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <serial type="pty">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <log file="/var/lib/nova/instances/61df455a-66c4-4f77-86c9-edad9954045d/console.log" append="off"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    </serial>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <video>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <model type="virtio"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    </video>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <input type="tablet" bus="usb"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <rng model="virtio">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    </rng>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <controller type="usb" index="0"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    <memballoon model="virtio">
Oct  2 09:30:20 np0005466031 nova_compute[235803]:      <stats period="10"/>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:    </memballoon>
Oct  2 09:30:20 np0005466031 nova_compute[235803]:  </devices>
Oct  2 09:30:20 np0005466031 nova_compute[235803]: </domain>
Oct  2 09:30:20 np0005466031 nova_compute[235803]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.167 2 DEBUG nova.compute.manager [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Preparing to wait for external event network-vif-plugged-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.167 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "61df455a-66c4-4f77-86c9-edad9954045d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.167 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.168 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.168 2 DEBUG nova.virt.libvirt.vif [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:30:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1709611098',display_name='tempest-TestVolumeBootPattern-server-1709611098',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1709611098',id=222,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXlZ173v52AK5bvxZSCswZD+xa0FluYk6PRSfhpRbnZm8bOdlvZU5KBRnl3O9hs6ON23ziU7Z/FpjnMU4tf7Jp1qDf229EeHe6BdU98WhCvbuPXicABUQh5j2lZgRmPLw==',key_name='tempest-TestVolumeBootPattern-1422258886',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-plxlwool',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:30:09Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=61df455a-66c4-4f77-86c9-edad9954045d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "address": "fa:16:3e:4c:5f:f7", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf2e48cc-75", "ovs_interfaceid": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.169 2 DEBUG nova.network.os_vif_util [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "address": "fa:16:3e:4c:5f:f7", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf2e48cc-75", "ovs_interfaceid": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.169 2 DEBUG nova.network.os_vif_util [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:5f:f7,bridge_name='br-int',has_traffic_filtering=True,id=bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf2e48cc-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.170 2 DEBUG os_vif [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:5f:f7,bridge_name='br-int',has_traffic_filtering=True,id=bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf2e48cc-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.170 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf2e48cc-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.174 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf2e48cc-75, col_values=(('external_ids', {'iface-id': 'bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:5f:f7', 'vm-uuid': '61df455a-66c4-4f77-86c9-edad9954045d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:20 np0005466031 NetworkManager[44907]: <info>  [1759411820.1764] manager: (tapbf2e48cc-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.184 2 INFO os_vif [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:5f:f7,bridge_name='br-int',has_traffic_filtering=True,id=bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf2e48cc-75')#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.340 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.340 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.340 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No VIF found with MAC fa:16:3e:4c:5f:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.341 2 INFO nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Using config drive#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.373 2 DEBUG nova.storage.rbd_utils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 61df455a-66c4-4f77-86c9-edad9954045d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.382 2 DEBUG nova.network.neutron [req-dee98a96-f9d3-40e9-9a26-5ea4c3414b9d req-62fa1af6-9548-428e-bc03-9cd6deff9b01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Updated VIF entry in instance network info cache for port bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.383 2 DEBUG nova.network.neutron [req-dee98a96-f9d3-40e9-9a26-5ea4c3414b9d req-62fa1af6-9548-428e-bc03-9cd6deff9b01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Updating instance_info_cache with network_info: [{"id": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "address": "fa:16:3e:4c:5f:f7", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf2e48cc-75", "ovs_interfaceid": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.548 2 DEBUG oslo_concurrency.lockutils [req-dee98a96-f9d3-40e9-9a26-5ea4c3414b9d req-62fa1af6-9548-428e-bc03-9cd6deff9b01 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-61df455a-66c4-4f77-86c9-edad9954045d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:30:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:20.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.764 2 INFO nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Creating config drive at /var/lib/nova/instances/61df455a-66c4-4f77-86c9-edad9954045d/disk.config#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.769 2 DEBUG oslo_concurrency.processutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/61df455a-66c4-4f77-86c9-edad9954045d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvzulv7z5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.909 2 DEBUG oslo_concurrency.processutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/61df455a-66c4-4f77-86c9-edad9954045d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvzulv7z5" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.943 2 DEBUG nova.storage.rbd_utils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 61df455a-66c4-4f77-86c9-edad9954045d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:30:20 np0005466031 nova_compute[235803]: 2025-10-02 13:30:20.947 2 DEBUG oslo_concurrency.processutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/61df455a-66c4-4f77-86c9-edad9954045d/disk.config 61df455a-66c4-4f77-86c9-edad9954045d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:21.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.228 2 DEBUG oslo_concurrency.processutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/61df455a-66c4-4f77-86c9-edad9954045d/disk.config 61df455a-66c4-4f77-86c9-edad9954045d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.229 2 INFO nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Deleting local config drive /var/lib/nova/instances/61df455a-66c4-4f77-86c9-edad9954045d/disk.config because it was imported into RBD.#033[00m
Oct  2 09:30:22 np0005466031 kernel: tapbf2e48cc-75: entered promiscuous mode
Oct  2 09:30:22 np0005466031 NetworkManager[44907]: <info>  [1759411822.3012] manager: (tapbf2e48cc-75): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Oct  2 09:30:22 np0005466031 ovn_controller[132413]: 2025-10-02T13:30:22Z|00891|binding|INFO|Claiming lport bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f for this chassis.
Oct  2 09:30:22 np0005466031 ovn_controller[132413]: 2025-10-02T13:30:22Z|00892|binding|INFO|bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f: Claiming fa:16:3e:4c:5f:f7 10.100.0.9
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:22 np0005466031 ovn_controller[132413]: 2025-10-02T13:30:22Z|00893|binding|INFO|Setting lport bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f ovn-installed in OVS
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:22 np0005466031 ovn_controller[132413]: 2025-10-02T13:30:22Z|00894|binding|INFO|Setting lport bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f up in Southbound
Oct  2 09:30:22 np0005466031 systemd-udevd[341060]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.333 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:5f:f7 10.100.0.9'], port_security=['fa:16:3e:4c:5f:f7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '61df455a-66c4-4f77-86c9-edad9954045d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76b0b52e-400a-4f72-824a-095cd74b612b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.335 141898 INFO neutron.agent.ovn.metadata.agent [-] Port bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 bound to our chassis#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.337 141898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5#033[00m
Oct  2 09:30:22 np0005466031 NetworkManager[44907]: <info>  [1759411822.3582] device (tapbf2e48cc-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:30:22 np0005466031 NetworkManager[44907]: <info>  [1759411822.3591] device (tapbf2e48cc-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.352 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a2676e3f-3279-44eb-9101-0445e94fbaa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.354 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap858f2b6f-81 in ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.357 239779 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap858f2b6f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.357 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1c6959-2f38-47a9-b8c6-7c4acca24b4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 systemd-machined[192227]: New machine qemu-102-instance-000000de.
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.359 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[308d11b3-6e5e-4aee-9193-c639613f4d89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 systemd[1]: Started Virtual Machine qemu-102-instance-000000de.
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.376 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[d5cfd058-52ab-45b4-88f0-19a2a0ba60ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.408 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[89ecd1d2-b921-47a4-9017-d18b15ea07f1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.445 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[21cbc4e9-b916-4a53-8e73-2b85e94e0686]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 NetworkManager[44907]: <info>  [1759411822.4557] manager: (tap858f2b6f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/405)
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.455 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[793db7d9-2feb-48d9-966d-8ae1f839769b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.500 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[e828ba55-30eb-4ab8-bf1d-cffb56c162c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.505 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[84303969-a612-42de-87d5-8ff0bd206786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 NetworkManager[44907]: <info>  [1759411822.5293] device (tap858f2b6f-80): carrier: link connected
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.535 239908 DEBUG oslo.privsep.daemon [-] privsep: reply[2b852b58-d5a5-4065-9ffa-d8eaf14a3684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.557 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[fd21b422-48ff-49f2-a0c7-0a58722abd4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 967812, 'reachable_time': 36705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341145, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.578 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[026a1719-12d7-4b0a-8e4f-9f60687c235c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:29ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 967812, 'tstamp': 967812}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341146, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.602 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[f4cbff87-f901-40a1-8a87-fbb16f4b3b3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 967812, 'reachable_time': 36705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341147, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.644 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[75b0dd08-8fb2-4168-a57f-293cdbf7e08f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:22.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.686 2 DEBUG nova.compute.manager [req-4bc69696-ee19-47dc-907d-9715bfbd58c1 req-f9e2321d-afff-4118-b989-9eb03c5629a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Received event network-vif-plugged-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.687 2 DEBUG oslo_concurrency.lockutils [req-4bc69696-ee19-47dc-907d-9715bfbd58c1 req-f9e2321d-afff-4118-b989-9eb03c5629a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "61df455a-66c4-4f77-86c9-edad9954045d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.688 2 DEBUG oslo_concurrency.lockutils [req-4bc69696-ee19-47dc-907d-9715bfbd58c1 req-f9e2321d-afff-4118-b989-9eb03c5629a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.688 2 DEBUG oslo_concurrency.lockutils [req-4bc69696-ee19-47dc-907d-9715bfbd58c1 req-f9e2321d-afff-4118-b989-9eb03c5629a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.689 2 DEBUG nova.compute.manager [req-4bc69696-ee19-47dc-907d-9715bfbd58c1 req-f9e2321d-afff-4118-b989-9eb03c5629a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Processing event network-vif-plugged-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.733 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a58cc10e-6ed1-46b3-9466-43fd31ae2caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.735 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.736 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.737 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap858f2b6f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:22 np0005466031 NetworkManager[44907]: <info>  [1759411822.7412] manager: (tap858f2b6f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Oct  2 09:30:22 np0005466031 kernel: tap858f2b6f-80: entered promiscuous mode
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.746 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap858f2b6f-80, col_values=(('external_ids', {'iface-id': 'cd468d5a-0c73-498a-8776-3dc2ab63d9cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:22 np0005466031 ovn_controller[132413]: 2025-10-02T13:30:22Z|00895|binding|INFO|Releasing lport cd468d5a-0c73-498a-8776-3dc2ab63d9cf from this chassis (sb_readonly=0)
Oct  2 09:30:22 np0005466031 nova_compute[235803]: 2025-10-02 13:30:22.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.767 141898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.768 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[d99f4ed5-5645-4c91-9c31-3c3780341ded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.769 141898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: global
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    log         /dev/log local0 debug
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    log-tag     haproxy-metadata-proxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    user        root
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    group       root
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    maxconn     1024
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    pidfile     /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    daemon
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: defaults
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    log global
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    mode http
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    option httplog
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    option dontlognull
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    option http-server-close
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    option forwardfor
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    retries                 3
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    timeout http-request    30s
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    timeout connect         30s
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    timeout client          32s
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    timeout server          32s
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    timeout http-keep-alive 30s
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: listen listener
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    bind 169.254.169.254:80
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]:    http-request add-header X-OVN-Network-ID 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:30:22 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:22.771 141898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'env', 'PROCESS_TAG=haproxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:23.137 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:30:23 np0005466031 podman[341214]: 2025-10-02 13:30:23.205842849 +0000 UTC m=+0.054852332 container create 33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 09:30:23 np0005466031 systemd[1]: Started libpod-conmon-33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a.scope.
Oct  2 09:30:23 np0005466031 podman[341214]: 2025-10-02 13:30:23.177383968 +0000 UTC m=+0.026393481 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:30:23 np0005466031 systemd[1]: Started libcrun container.
Oct  2 09:30:23 np0005466031 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e4e3fb13a01bf3d99ab6b24e680b374f1f66d3e89f2839314c63065a5c62555/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:30:23 np0005466031 podman[341214]: 2025-10-02 13:30:23.328342188 +0000 UTC m=+0.177351691 container init 33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:30:23 np0005466031 podman[341228]: 2025-10-02 13:30:23.335051702 +0000 UTC m=+0.082292793 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 09:30:23 np0005466031 podman[341214]: 2025-10-02 13:30:23.336086692 +0000 UTC m=+0.185096175 container start 33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:30:23 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[341251]: [NOTICE]   (341272) : New worker (341276) forked
Oct  2 09:30:23 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[341251]: [NOTICE]   (341272) : Loading success.
Oct  2 09:30:23 np0005466031 podman[341229]: 2025-10-02 13:30:23.367635441 +0000 UTC m=+0.111544916 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:30:23 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:23.425 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:30:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:23.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.926 2 DEBUG nova.compute.manager [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.927 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411823.9261096, 61df455a-66c4-4f77-86c9-edad9954045d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.927 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] VM Started (Lifecycle Event)#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.932 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.938 2 INFO nova.virt.libvirt.driver [-] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Instance spawned successfully.#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.939 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.950 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.956 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.969 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.970 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.971 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.971 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.972 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.973 2 DEBUG nova.virt.libvirt.driver [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.997 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.998 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411823.9263737, 61df455a-66c4-4f77-86c9-edad9954045d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:30:23 np0005466031 nova_compute[235803]: 2025-10-02 13:30:23.998 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.019 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.025 2 DEBUG nova.virt.driver [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] Emitting event <LifecycleEvent: 1759411823.9312005, 61df455a-66c4-4f77-86c9-edad9954045d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.026 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.057 2 INFO nova.compute.manager [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Took 10.18 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.058 2 DEBUG nova.compute.manager [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.059 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.073 2 DEBUG nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.104 2 INFO nova.compute.manager [None req-bf059096-466f-4814-bd20-f06695a67d66 - - - - - -] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.135 2 INFO nova.compute.manager [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Took 16.98 seconds to build instance.#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.182 2 DEBUG oslo_concurrency.lockutils [None req-ce187898-3bf2-4cd2-9c89-3eca8b12ee73 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:24.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.792 2 DEBUG nova.compute.manager [req-6eb7d5dd-fc6c-48db-a9df-937114ced8cd req-8e1057cb-0ed1-4e29-9d9e-ac4daef14a44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Received event network-vif-plugged-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.792 2 DEBUG oslo_concurrency.lockutils [req-6eb7d5dd-fc6c-48db-a9df-937114ced8cd req-8e1057cb-0ed1-4e29-9d9e-ac4daef14a44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "61df455a-66c4-4f77-86c9-edad9954045d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.793 2 DEBUG oslo_concurrency.lockutils [req-6eb7d5dd-fc6c-48db-a9df-937114ced8cd req-8e1057cb-0ed1-4e29-9d9e-ac4daef14a44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.793 2 DEBUG oslo_concurrency.lockutils [req-6eb7d5dd-fc6c-48db-a9df-937114ced8cd req-8e1057cb-0ed1-4e29-9d9e-ac4daef14a44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.793 2 DEBUG nova.compute.manager [req-6eb7d5dd-fc6c-48db-a9df-937114ced8cd req-8e1057cb-0ed1-4e29-9d9e-ac4daef14a44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] No waiting events found dispatching network-vif-plugged-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:30:24 np0005466031 nova_compute[235803]: 2025-10-02 13:30:24.793 2 WARNING nova.compute.manager [req-6eb7d5dd-fc6c-48db-a9df-937114ced8cd req-8e1057cb-0ed1-4e29-9d9e-ac4daef14a44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Received unexpected event network-vif-plugged-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f for instance with vm_state active and task_state None.#033[00m
Oct  2 09:30:25 np0005466031 nova_compute[235803]: 2025-10-02 13:30:25.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:25.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:25.899 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:25.900 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:25.900 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:30:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:26.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:30:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:27 np0005466031 nova_compute[235803]: 2025-10-02 13:30:27.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:27.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:28.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:29 np0005466031 nova_compute[235803]: 2025-10-02 13:30:29.381 2 DEBUG nova.compute.manager [req-a04d3c61-c094-47b3-a42a-77100c8bf65c req-777ea70a-7365-468a-87f7-3e4c6e9edb20 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Received event network-changed-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:30:29 np0005466031 nova_compute[235803]: 2025-10-02 13:30:29.381 2 DEBUG nova.compute.manager [req-a04d3c61-c094-47b3-a42a-77100c8bf65c req-777ea70a-7365-468a-87f7-3e4c6e9edb20 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Refreshing instance network info cache due to event network-changed-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:30:29 np0005466031 nova_compute[235803]: 2025-10-02 13:30:29.382 2 DEBUG oslo_concurrency.lockutils [req-a04d3c61-c094-47b3-a42a-77100c8bf65c req-777ea70a-7365-468a-87f7-3e4c6e9edb20 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-61df455a-66c4-4f77-86c9-edad9954045d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:30:29 np0005466031 nova_compute[235803]: 2025-10-02 13:30:29.383 2 DEBUG oslo_concurrency.lockutils [req-a04d3c61-c094-47b3-a42a-77100c8bf65c req-777ea70a-7365-468a-87f7-3e4c6e9edb20 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-61df455a-66c4-4f77-86c9-edad9954045d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:30:29 np0005466031 nova_compute[235803]: 2025-10-02 13:30:29.383 2 DEBUG nova.network.neutron [req-a04d3c61-c094-47b3-a42a-77100c8bf65c req-777ea70a-7365-468a-87f7-3e4c6e9edb20 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Refreshing network info cache for port bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:30:29 np0005466031 nova_compute[235803]: 2025-10-02 13:30:29.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:29.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:30 np0005466031 nova_compute[235803]: 2025-10-02 13:30:30.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:30 np0005466031 nova_compute[235803]: 2025-10-02 13:30:30.557 2 DEBUG nova.network.neutron [req-a04d3c61-c094-47b3-a42a-77100c8bf65c req-777ea70a-7365-468a-87f7-3e4c6e9edb20 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Updated VIF entry in instance network info cache for port bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:30:30 np0005466031 nova_compute[235803]: 2025-10-02 13:30:30.558 2 DEBUG nova.network.neutron [req-a04d3c61-c094-47b3-a42a-77100c8bf65c req-777ea70a-7365-468a-87f7-3e4c6e9edb20 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Updating instance_info_cache with network_info: [{"id": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "address": "fa:16:3e:4c:5f:f7", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf2e48cc-75", "ovs_interfaceid": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:30:30 np0005466031 nova_compute[235803]: 2025-10-02 13:30:30.576 2 DEBUG oslo_concurrency.lockutils [req-a04d3c61-c094-47b3-a42a-77100c8bf65c req-777ea70a-7365-468a-87f7-3e4c6e9edb20 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-61df455a-66c4-4f77-86c9-edad9954045d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:30:30 np0005466031 podman[341296]: 2025-10-02 13:30:30.634277298 +0000 UTC m=+0.062441200 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:30:30 np0005466031 podman[341295]: 2025-10-02 13:30:30.634272948 +0000 UTC m=+0.063366517 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Oct  2 09:30:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:30.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:31 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:31.428 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:30:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:31.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:32 np0005466031 nova_compute[235803]: 2025-10-02 13:30:32.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:32.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:33.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:34.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:35 np0005466031 nova_compute[235803]: 2025-10-02 13:30:35.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:35 np0005466031 nova_compute[235803]: 2025-10-02 13:30:35.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:35.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:36.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:36 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:37 np0005466031 ovn_controller[132413]: 2025-10-02T13:30:37Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4c:5f:f7 10.100.0.9
Oct  2 09:30:37 np0005466031 ovn_controller[132413]: 2025-10-02T13:30:37Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4c:5f:f7 10.100.0.9
Oct  2 09:30:37 np0005466031 nova_compute[235803]: 2025-10-02 13:30:37.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:37.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:38 np0005466031 nova_compute[235803]: 2025-10-02 13:30:38.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:38 np0005466031 nova_compute[235803]: 2025-10-02 13:30:38.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:30:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:38.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:30:38 np0005466031 nova_compute[235803]: 2025-10-02 13:30:38.677 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:38 np0005466031 nova_compute[235803]: 2025-10-02 13:30:38.678 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:38 np0005466031 nova_compute[235803]: 2025-10-02 13:30:38.678 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:38 np0005466031 nova_compute[235803]: 2025-10-02 13:30:38.678 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:30:38 np0005466031 nova_compute[235803]: 2025-10-02 13:30:38.679 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:30:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3749824698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.117 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.185 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000de as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.185 2 DEBUG nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] skipping disk for instance-000000de as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.355 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.357 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3924MB free_disk=20.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.357 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.358 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.442 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Instance 61df455a-66c4-4f77-86c9-edad9954045d actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.443 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.443 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.483 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:30:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:39.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:30:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:30:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1847762052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.950 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.956 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.968 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.986 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:30:39 np0005466031 nova_compute[235803]: 2025-10-02 13:30:39.987 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:40 np0005466031 nova_compute[235803]: 2025-10-02 13:30:40.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:40.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:41.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:41 np0005466031 nova_compute[235803]: 2025-10-02 13:30:41.986 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:42 np0005466031 nova_compute[235803]: 2025-10-02 13:30:42.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:42.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:43.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:44 np0005466031 nova_compute[235803]: 2025-10-02 13:30:44.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:44.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.282 2 DEBUG oslo_concurrency.lockutils [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "61df455a-66c4-4f77-86c9-edad9954045d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.283 2 DEBUG oslo_concurrency.lockutils [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.283 2 DEBUG oslo_concurrency.lockutils [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "61df455a-66c4-4f77-86c9-edad9954045d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.283 2 DEBUG oslo_concurrency.lockutils [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.284 2 DEBUG oslo_concurrency.lockutils [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.285 2 INFO nova.compute.manager [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Terminating instance#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.286 2 DEBUG nova.compute.manager [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:30:45 np0005466031 kernel: tapbf2e48cc-75 (unregistering): left promiscuous mode
Oct  2 09:30:45 np0005466031 NetworkManager[44907]: <info>  [1759411845.3423] device (tapbf2e48cc-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:30:45 np0005466031 ovn_controller[132413]: 2025-10-02T13:30:45Z|00896|binding|INFO|Releasing lport bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f from this chassis (sb_readonly=0)
Oct  2 09:30:45 np0005466031 ovn_controller[132413]: 2025-10-02T13:30:45Z|00897|binding|INFO|Setting lport bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f down in Southbound
Oct  2 09:30:45 np0005466031 ovn_controller[132413]: 2025-10-02T13:30:45Z|00898|binding|INFO|Removing iface tapbf2e48cc-75 ovn-installed in OVS
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.367 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:5f:f7 10.100.0.9'], port_security=['fa:16:3e:4c:5f:f7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '61df455a-66c4-4f77-86c9-edad9954045d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76b0b52e-400a-4f72-824a-095cd74b612b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>], logical_port=bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd2bc1ab700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.369 141898 INFO neutron.agent.ovn.metadata.agent [-] Port bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 unbound from our chassis#033[00m
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.370 141898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.378 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d603ac-deba-47fd-81e5-1d7ef8ee21d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.379 141898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace which is not needed anymore#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:45 np0005466031 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000de.scope: Deactivated successfully.
Oct  2 09:30:45 np0005466031 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000de.scope: Consumed 14.054s CPU time.
Oct  2 09:30:45 np0005466031 systemd-machined[192227]: Machine qemu-102-instance-000000de terminated.
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.531 2 INFO nova.virt.libvirt.driver [-] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Instance destroyed successfully.#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.531 2 DEBUG nova.objects.instance [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'resources' on Instance uuid 61df455a-66c4-4f77-86c9-edad9954045d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:30:45 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[341251]: [NOTICE]   (341272) : haproxy version is 2.8.14-c23fe91
Oct  2 09:30:45 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[341251]: [NOTICE]   (341272) : path to executable is /usr/sbin/haproxy
Oct  2 09:30:45 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[341251]: [WARNING]  (341272) : Exiting Master process...
Oct  2 09:30:45 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[341251]: [ALERT]    (341272) : Current worker (341276) exited with code 143 (Terminated)
Oct  2 09:30:45 np0005466031 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[341251]: [WARNING]  (341272) : All workers exited. Exiting... (0)
Oct  2 09:30:45 np0005466031 systemd[1]: libpod-33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a.scope: Deactivated successfully.
Oct  2 09:30:45 np0005466031 podman[341460]: 2025-10-02 13:30:45.548634413 +0000 UTC m=+0.051938347 container died 33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.561 2 DEBUG nova.virt.libvirt.vif [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:30:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1709611098',display_name='tempest-TestVolumeBootPattern-server-1709611098',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1709611098',id=222,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXlZ173v52AK5bvxZSCswZD+xa0FluYk6PRSfhpRbnZm8bOdlvZU5KBRnl3O9hs6ON23ziU7Z/FpjnMU4tf7Jp1qDf229EeHe6BdU98WhCvbuPXicABUQh5j2lZgRmPLw==',key_name='tempest-TestVolumeBootPattern-1422258886',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:30:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-plxlwool',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:30:24Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=61df455a-66c4-4f77-86c9-edad9954045d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "address": "fa:16:3e:4c:5f:f7", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf2e48cc-75", "ovs_interfaceid": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.561 2 DEBUG nova.network.os_vif_util [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "address": "fa:16:3e:4c:5f:f7", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf2e48cc-75", "ovs_interfaceid": "bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.562 2 DEBUG nova.network.os_vif_util [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4c:5f:f7,bridge_name='br-int',has_traffic_filtering=True,id=bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf2e48cc-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.562 2 DEBUG os_vif [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:5f:f7,bridge_name='br-int',has_traffic_filtering=True,id=bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf2e48cc-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf2e48cc-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.613 2 INFO os_vif [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:5f:f7,bridge_name='br-int',has_traffic_filtering=True,id=bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf2e48cc-75')#033[00m
Oct  2 09:30:45 np0005466031 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a-userdata-shm.mount: Deactivated successfully.
Oct  2 09:30:45 np0005466031 systemd[1]: var-lib-containers-storage-overlay-2e4e3fb13a01bf3d99ab6b24e680b374f1f66d3e89f2839314c63065a5c62555-merged.mount: Deactivated successfully.
Oct  2 09:30:45 np0005466031 podman[341460]: 2025-10-02 13:30:45.640782018 +0000 UTC m=+0.144085942 container cleanup 33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:30:45 np0005466031 systemd[1]: libpod-conmon-33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a.scope: Deactivated successfully.
Oct  2 09:30:45 np0005466031 podman[341518]: 2025-10-02 13:30:45.709809677 +0000 UTC m=+0.041629540 container remove 33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.717 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[a9794a81-b70e-4e46-83b3-367e63e27dc8]: (4, ('Thu Oct  2 01:30:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a)\n33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a\nThu Oct  2 01:30:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a)\n33b7c22bbdc5b67ea3118c3d8646d71e963d0bad825dad7566e1fc68f553840a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.719 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[8e913a9e-719b-4811-bf5f-4a3956ca2dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.719 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:45 np0005466031 kernel: tap858f2b6f-80: left promiscuous mode
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.729 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[058c90b8-3fd8-4a5d-8416-2cb649c5c4d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.760 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[70e180c3-68d5-435d-a78c-d97549769c53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.762 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[57927e57-2528-43a2-9b42-7b650614b635]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.783 239779 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6e139f-773b-4553-bae5-dcd975044b78]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 967803, 'reachable_time': 15837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341536, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:45 np0005466031 systemd[1]: run-netns-ovnmeta\x2d858f2b6f\x2d8fe4\x2d471b\x2d981e\x2d5d0b08d2f4c5.mount: Deactivated successfully.
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.790 142062 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:30:45 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:30:45.790 142062 DEBUG oslo.privsep.daemon [-] privsep: reply[022d89f4-79b2-4dea-aeae-385c43480a76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:30:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:30:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:45.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.860 2 INFO nova.virt.libvirt.driver [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Deleting instance files /var/lib/nova/instances/61df455a-66c4-4f77-86c9-edad9954045d_del#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.861 2 INFO nova.virt.libvirt.driver [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Deletion of /var/lib/nova/instances/61df455a-66c4-4f77-86c9-edad9954045d_del complete#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.923 2 INFO nova.compute.manager [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.923 2 DEBUG oslo.service.loopingcall [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.924 2 DEBUG nova.compute.manager [-] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:30:45 np0005466031 nova_compute[235803]: 2025-10-02 13:30:45.924 2 DEBUG nova.network.neutron [-] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:30:46 np0005466031 nova_compute[235803]: 2025-10-02 13:30:46.339 2 DEBUG nova.compute.manager [req-e63e166d-4daa-4443-9a78-ba98d4d1f111 req-3c5d77e6-5baa-4548-bf1b-0aa1b156e7a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Received event network-vif-unplugged-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:30:46 np0005466031 nova_compute[235803]: 2025-10-02 13:30:46.339 2 DEBUG oslo_concurrency.lockutils [req-e63e166d-4daa-4443-9a78-ba98d4d1f111 req-3c5d77e6-5baa-4548-bf1b-0aa1b156e7a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "61df455a-66c4-4f77-86c9-edad9954045d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:46 np0005466031 nova_compute[235803]: 2025-10-02 13:30:46.339 2 DEBUG oslo_concurrency.lockutils [req-e63e166d-4daa-4443-9a78-ba98d4d1f111 req-3c5d77e6-5baa-4548-bf1b-0aa1b156e7a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:46 np0005466031 nova_compute[235803]: 2025-10-02 13:30:46.340 2 DEBUG oslo_concurrency.lockutils [req-e63e166d-4daa-4443-9a78-ba98d4d1f111 req-3c5d77e6-5baa-4548-bf1b-0aa1b156e7a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:46 np0005466031 nova_compute[235803]: 2025-10-02 13:30:46.340 2 DEBUG nova.compute.manager [req-e63e166d-4daa-4443-9a78-ba98d4d1f111 req-3c5d77e6-5baa-4548-bf1b-0aa1b156e7a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] No waiting events found dispatching network-vif-unplugged-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:30:46 np0005466031 nova_compute[235803]: 2025-10-02 13:30:46.341 2 DEBUG nova.compute.manager [req-e63e166d-4daa-4443-9a78-ba98d4d1f111 req-3c5d77e6-5baa-4548-bf1b-0aa1b156e7a6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Received event network-vif-unplugged-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:30:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:46.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.559 2 DEBUG nova.network.neutron [-] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.608 2 INFO nova.compute.manager [-] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Took 1.68 seconds to deallocate network for instance.#033[00m
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.655 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.656 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.656 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.656 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:30:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:47.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.895 2 INFO nova.compute.manager [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Took 0.29 seconds to detach 1 volumes for instance.#033[00m
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.935 2 DEBUG oslo_concurrency.lockutils [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.936 2 DEBUG oslo_concurrency.lockutils [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:47 np0005466031 nova_compute[235803]: 2025-10-02 13:30:47.994 2 DEBUG oslo_concurrency.processutils [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:30:48 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3144257203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:30:48 np0005466031 nova_compute[235803]: 2025-10-02 13:30:48.422 2 DEBUG nova.compute.manager [req-2b0fd630-626b-4530-8636-3b89cb155865 req-ea0c598f-3089-4c46-bf67-350b53a69572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Received event network-vif-plugged-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:30:48 np0005466031 nova_compute[235803]: 2025-10-02 13:30:48.422 2 DEBUG oslo_concurrency.lockutils [req-2b0fd630-626b-4530-8636-3b89cb155865 req-ea0c598f-3089-4c46-bf67-350b53a69572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "61df455a-66c4-4f77-86c9-edad9954045d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:30:48 np0005466031 nova_compute[235803]: 2025-10-02 13:30:48.423 2 DEBUG oslo_concurrency.lockutils [req-2b0fd630-626b-4530-8636-3b89cb155865 req-ea0c598f-3089-4c46-bf67-350b53a69572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:30:48 np0005466031 nova_compute[235803]: 2025-10-02 13:30:48.423 2 DEBUG oslo_concurrency.lockutils [req-2b0fd630-626b-4530-8636-3b89cb155865 req-ea0c598f-3089-4c46-bf67-350b53a69572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:30:48 np0005466031 nova_compute[235803]: 2025-10-02 13:30:48.423 2 DEBUG nova.compute.manager [req-2b0fd630-626b-4530-8636-3b89cb155865 req-ea0c598f-3089-4c46-bf67-350b53a69572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] No waiting events found dispatching network-vif-plugged-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:30:48 np0005466031 nova_compute[235803]: 2025-10-02 13:30:48.423 2 WARNING nova.compute.manager [req-2b0fd630-626b-4530-8636-3b89cb155865 req-ea0c598f-3089-4c46-bf67-350b53a69572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Received unexpected event network-vif-plugged-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f for instance with vm_state deleted and task_state None.
Oct  2 09:30:48 np0005466031 nova_compute[235803]: 2025-10-02 13:30:48.424 2 DEBUG nova.compute.manager [req-2b0fd630-626b-4530-8636-3b89cb155865 req-ea0c598f-3089-4c46-bf67-350b53a69572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Received event network-vif-deleted-bf2e48cc-75f3-4fe5-8bf3-fbdb1e13646f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:30:48 np0005466031 nova_compute[235803]: 2025-10-02 13:30:48.441 2 DEBUG oslo_concurrency.processutils [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:30:48 np0005466031 nova_compute[235803]: 2025-10-02 13:30:48.447 2 DEBUG nova.compute.provider_tree [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:30:48 np0005466031 nova_compute[235803]: 2025-10-02 13:30:48.461 2 DEBUG nova.scheduler.client.report [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:30:48 np0005466031 nova_compute[235803]: 2025-10-02 13:30:48.482 2 DEBUG oslo_concurrency.lockutils [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:30:48 np0005466031 nova_compute[235803]: 2025-10-02 13:30:48.509 2 INFO nova.scheduler.client.report [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Deleted allocations for instance 61df455a-66c4-4f77-86c9-edad9954045d
Oct  2 09:30:48 np0005466031 nova_compute[235803]: 2025-10-02 13:30:48.565 2 DEBUG oslo_concurrency.lockutils [None req-45a66714-dda6-42b3-920f-5db12e2958f3 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "61df455a-66c4-4f77-86c9-edad9954045d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:30:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:48.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:49.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:50 np0005466031 nova_compute[235803]: 2025-10-02 13:30:50.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:30:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:50.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:51.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:52 np0005466031 nova_compute[235803]: 2025-10-02 13:30:52.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:30:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:52.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:53 np0005466031 podman[341565]: 2025-10-02 13:30:53.628239154 +0000 UTC m=+0.057186398 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  2 09:30:53 np0005466031 podman[341566]: 2025-10-02 13:30:53.666701583 +0000 UTC m=+0.089440919 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct  2 09:30:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:53.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:54 np0005466031 nova_compute[235803]: 2025-10-02 13:30:54.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:30:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:54.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:55 np0005466031 nova_compute[235803]: 2025-10-02 13:30:55.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:30:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:55.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:56.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:57 np0005466031 nova_compute[235803]: 2025-10-02 13:30:57.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:30:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:30:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:57.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:30:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:58.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:30:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:59.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:00 np0005466031 nova_compute[235803]: 2025-10-02 13:31:00.529 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411845.5273678, 61df455a-66c4-4f77-86c9-edad9954045d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:31:00 np0005466031 nova_compute[235803]: 2025-10-02 13:31:00.529 2 INFO nova.compute.manager [-] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] VM Stopped (Lifecycle Event)
Oct  2 09:31:00 np0005466031 nova_compute[235803]: 2025-10-02 13:31:00.544 2 DEBUG nova.compute.manager [None req-11227a2d-c55c-4829-9131-3ef263b59544 - - - - - -] [instance: 61df455a-66c4-4f77-86c9-edad9954045d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:31:00 np0005466031 nova_compute[235803]: 2025-10-02 13:31:00.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:31:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:00.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:01 np0005466031 podman[341641]: 2025-10-02 13:31:01.265668765 +0000 UTC m=+0.054477941 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:31:01 np0005466031 podman[341640]: 2025-10-02 13:31:01.265456779 +0000 UTC m=+0.057171409 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:31:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:01.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:01.958636) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411861958722, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 751, "num_deletes": 251, "total_data_size": 1375146, "memory_usage": 1395456, "flush_reason": "Manual Compaction"}
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411861978837, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 907628, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86570, "largest_seqno": 87316, "table_properties": {"data_size": 903979, "index_size": 1492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8226, "raw_average_key_size": 19, "raw_value_size": 896732, "raw_average_value_size": 2119, "num_data_blocks": 65, "num_entries": 423, "num_filter_entries": 423, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411813, "oldest_key_time": 1759411813, "file_creation_time": 1759411861, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 20210 microseconds, and 4620 cpu microseconds.
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:01.978880) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 907628 bytes OK
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:01.978901) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:01.992011) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:01.992059) EVENT_LOG_v1 {"time_micros": 1759411861992047, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:01.992084) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 1371186, prev total WAL file size 1371186, number of live WAL files 2.
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:01.992852) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(886KB)], [177(13MB)]
Oct  2 09:31:01 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411861992892, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 15148543, "oldest_snapshot_seqno": -1}
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 10561 keys, 13195570 bytes, temperature: kUnknown
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411862194738, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 13195570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13127627, "index_size": 40438, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26437, "raw_key_size": 279789, "raw_average_key_size": 26, "raw_value_size": 12943189, "raw_average_value_size": 1225, "num_data_blocks": 1532, "num_entries": 10561, "num_filter_entries": 10561, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759411861, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:02.195051) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13195570 bytes
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:02.204833) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 75.0 rd, 65.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.6 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(31.2) write-amplify(14.5) OK, records in: 11077, records dropped: 516 output_compression: NoCompression
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:02.204875) EVENT_LOG_v1 {"time_micros": 1759411862204857, "job": 114, "event": "compaction_finished", "compaction_time_micros": 201941, "compaction_time_cpu_micros": 30737, "output_level": 6, "num_output_files": 1, "total_output_size": 13195570, "num_input_records": 11077, "num_output_records": 10561, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411862205350, "job": 114, "event": "table_file_deletion", "file_number": 179}
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411862209237, "job": 114, "event": "table_file_deletion", "file_number": 177}
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:01.992735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:02.209278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:02.209284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:02.209286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:02.209288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:31:02.209290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:02 np0005466031 nova_compute[235803]: 2025-10-02 13:31:02.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:31:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:31:02.356 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 09:31:02 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:31:02.357 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 09:31:02 np0005466031 nova_compute[235803]: 2025-10-02 13:31:02.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:31:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:02.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:02 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:31:03 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:31:03.360 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:31:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:04 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:31:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:04.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:31:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1995901434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:31:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:31:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1995901434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:31:05 np0005466031 nova_compute[235803]: 2025-10-02 13:31:05.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:05.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:06.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:06 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:07 np0005466031 nova_compute[235803]: 2025-10-02 13:31:07.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:07.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:08 np0005466031 nova_compute[235803]: 2025-10-02 13:31:08.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:31:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:08.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:31:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:09.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:10 np0005466031 nova_compute[235803]: 2025-10-02 13:31:10.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:10.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:11.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:12 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:12 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:12.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:12 np0005466031 nova_compute[235803]: 2025-10-02 13:31:12.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:13.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:14.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:15 np0005466031 nova_compute[235803]: 2025-10-02 13:31:15.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:15.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:16.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:17 np0005466031 nova_compute[235803]: 2025-10-02 13:31:17.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:17.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:18.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:19.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:20.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:20 np0005466031 nova_compute[235803]: 2025-10-02 13:31:20.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:21.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:22.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:22 np0005466031 nova_compute[235803]: 2025-10-02 13:31:22.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:23.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:24 np0005466031 podman[341949]: 2025-10-02 13:31:24.623604959 +0000 UTC m=+0.055699406 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct  2 09:31:24 np0005466031 podman[341950]: 2025-10-02 13:31:24.658556047 +0000 UTC m=+0.089147800 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 09:31:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:24.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:25 np0005466031 nova_compute[235803]: 2025-10-02 13:31:25.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:31:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:25.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:31:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:31:25.900 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:31:25.900 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:31:25.900 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:26.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:27 np0005466031 nova_compute[235803]: 2025-10-02 13:31:27.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:27.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e428 e428: 3 total, 3 up, 3 in
Oct  2 09:31:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:28.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:29.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:30 np0005466031 nova_compute[235803]: 2025-10-02 13:31:30.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:30.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:31 np0005466031 podman[341998]: 2025-10-02 13:31:31.613722527 +0000 UTC m=+0.048338784 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:31:31 np0005466031 podman[341997]: 2025-10-02 13:31:31.619262786 +0000 UTC m=+0.056001414 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:31:31 np0005466031 nova_compute[235803]: 2025-10-02 13:31:31.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:31.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:32 np0005466031 nova_compute[235803]: 2025-10-02 13:31:32.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:32.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:31:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:33.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:31:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:34.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:35 np0005466031 nova_compute[235803]: 2025-10-02 13:31:35.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:35.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:36.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:37 np0005466031 nova_compute[235803]: 2025-10-02 13:31:37.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:37 np0005466031 nova_compute[235803]: 2025-10-02 13:31:37.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:37.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:38 np0005466031 nova_compute[235803]: 2025-10-02 13:31:38.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:38 np0005466031 nova_compute[235803]: 2025-10-02 13:31:38.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:38 np0005466031 nova_compute[235803]: 2025-10-02 13:31:38.671 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:38 np0005466031 nova_compute[235803]: 2025-10-02 13:31:38.671 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:38 np0005466031 nova_compute[235803]: 2025-10-02 13:31:38.672 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:38 np0005466031 nova_compute[235803]: 2025-10-02 13:31:38.672 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:31:38 np0005466031 nova_compute[235803]: 2025-10-02 13:31:38.672 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:38.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:31:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3871989529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:31:39 np0005466031 nova_compute[235803]: 2025-10-02 13:31:39.150 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:39 np0005466031 nova_compute[235803]: 2025-10-02 13:31:39.325 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:31:39 np0005466031 nova_compute[235803]: 2025-10-02 13:31:39.327 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4125MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:31:39 np0005466031 nova_compute[235803]: 2025-10-02 13:31:39.327 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:39 np0005466031 nova_compute[235803]: 2025-10-02 13:31:39.328 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:39 np0005466031 nova_compute[235803]: 2025-10-02 13:31:39.480 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:31:39 np0005466031 nova_compute[235803]: 2025-10-02 13:31:39.481 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:31:39 np0005466031 nova_compute[235803]: 2025-10-02 13:31:39.496 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:31:39 np0005466031 nova_compute[235803]: 2025-10-02 13:31:39.511 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:31:39 np0005466031 nova_compute[235803]: 2025-10-02 13:31:39.513 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:31:39 np0005466031 nova_compute[235803]: 2025-10-02 13:31:39.553 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:31:39 np0005466031 nova_compute[235803]: 2025-10-02 13:31:39.580 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:31:39 np0005466031 nova_compute[235803]: 2025-10-02 13:31:39.610 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:39.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:40 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:31:40 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1750166120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:31:40 np0005466031 nova_compute[235803]: 2025-10-02 13:31:40.083 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:40 np0005466031 nova_compute[235803]: 2025-10-02 13:31:40.088 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:31:40 np0005466031 nova_compute[235803]: 2025-10-02 13:31:40.102 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:31:40 np0005466031 nova_compute[235803]: 2025-10-02 13:31:40.126 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:31:40 np0005466031 nova_compute[235803]: 2025-10-02 13:31:40.126 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:40 np0005466031 nova_compute[235803]: 2025-10-02 13:31:40.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:40.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:41.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:42.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:42 np0005466031 nova_compute[235803]: 2025-10-02 13:31:42.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:43 np0005466031 nova_compute[235803]: 2025-10-02 13:31:43.126 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:43 np0005466031 ceph-mgr[76697]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 09:31:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:43.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:44 np0005466031 nova_compute[235803]: 2025-10-02 13:31:44.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:44.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:45 np0005466031 nova_compute[235803]: 2025-10-02 13:31:45.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:31:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:45.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:31:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:46.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:47 np0005466031 nova_compute[235803]: 2025-10-02 13:31:47.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:47.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:48 np0005466031 nova_compute[235803]: 2025-10-02 13:31:48.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:48 np0005466031 nova_compute[235803]: 2025-10-02 13:31:48.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:31:48 np0005466031 nova_compute[235803]: 2025-10-02 13:31:48.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:31:48 np0005466031 nova_compute[235803]: 2025-10-02 13:31:48.665 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:31:48 np0005466031 nova_compute[235803]: 2025-10-02 13:31:48.666 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:48 np0005466031 nova_compute[235803]: 2025-10-02 13:31:48.666 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:31:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:31:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:48.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:31:49 np0005466031 nova_compute[235803]: 2025-10-02 13:31:49.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:49.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:31:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:50.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:31:50 np0005466031 nova_compute[235803]: 2025-10-02 13:31:50.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 09:31:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:51.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 09:31:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:52.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:52 np0005466031 nova_compute[235803]: 2025-10-02 13:31:52.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:53.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:54 np0005466031 nova_compute[235803]: 2025-10-02 13:31:54.663 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:31:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:54.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:31:55 np0005466031 podman[342145]: 2025-10-02 13:31:55.621027935 +0000 UTC m=+0.054620075 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:31:55 np0005466031 podman[342146]: 2025-10-02 13:31:55.6496652 +0000 UTC m=+0.079168642 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:31:55 np0005466031 nova_compute[235803]: 2025-10-02 13:31:55.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:55.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:56.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:57 np0005466031 nova_compute[235803]: 2025-10-02 13:31:57.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:57.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:58.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:31:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:59.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:00 np0005466031 nova_compute[235803]: 2025-10-02 13:32:00.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:00.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:01.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:02 np0005466031 podman[342195]: 2025-10-02 13:32:02.638572473 +0000 UTC m=+0.062571074 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:32:02 np0005466031 podman[342194]: 2025-10-02 13:32:02.663045198 +0000 UTC m=+0.085036671 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 09:32:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:02.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:02 np0005466031 nova_compute[235803]: 2025-10-02 13:32:02.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:03.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:04.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:05 np0005466031 nova_compute[235803]: 2025-10-02 13:32:05.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:32:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:05.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:32:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:06.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:07 np0005466031 nova_compute[235803]: 2025-10-02 13:32:07.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:32:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:07.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:32:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:08.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:09.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:10.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:10 np0005466031 nova_compute[235803]: 2025-10-02 13:32:10.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:11.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:12.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:12 np0005466031 nova_compute[235803]: 2025-10-02 13:32:12.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:12 np0005466031 ovn_controller[132413]: 2025-10-02T13:32:12Z|00899|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Oct  2 09:32:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:32:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:32:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 09:32:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 09:32:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 09:32:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:32:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:32:13 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:32:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:13.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:14.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:15 np0005466031 nova_compute[235803]: 2025-10-02 13:32:15.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:15.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:16.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:17 np0005466031 nova_compute[235803]: 2025-10-02 13:32:17.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:17.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:18 np0005466031 nova_compute[235803]: 2025-10-02 13:32:18.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:32:18.108 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:32:18 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:32:18.110 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:32:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:18.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:32:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:32:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000057s ======
Oct  2 09:32:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:19.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Oct  2 09:32:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:20.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:20 np0005466031 nova_compute[235803]: 2025-10-02 13:32:20.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:21.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:22.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:22 np0005466031 nova_compute[235803]: 2025-10-02 13:32:22.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:32:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2315824707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:32:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:32:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2315824707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:32:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:23.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:24.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e429 e429: 3 total, 3 up, 3 in
Oct  2 09:32:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:32:25.112 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:32:25 np0005466031 nova_compute[235803]: 2025-10-02 13:32:25.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:32:25.900 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:32:25.900 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:32:25.900 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:25.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:26 np0005466031 podman[342646]: 2025-10-02 13:32:26.635525443 +0000 UTC m=+0.058920699 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 09:32:26 np0005466031 podman[342647]: 2025-10-02 13:32:26.702457061 +0000 UTC m=+0.124968612 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 09:32:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:26.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:27 np0005466031 nova_compute[235803]: 2025-10-02 13:32:27.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:27.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:28.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:32:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:29.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:32:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:30.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:30 np0005466031 nova_compute[235803]: 2025-10-02 13:32:30.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:31 np0005466031 nova_compute[235803]: 2025-10-02 13:32:31.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:31.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:32 np0005466031 nova_compute[235803]: 2025-10-02 13:32:32.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:32 np0005466031 nova_compute[235803]: 2025-10-02 13:32:32.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:32:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:32.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:32 np0005466031 nova_compute[235803]: 2025-10-02 13:32:32.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 e430: 3 total, 3 up, 3 in
Oct  2 09:32:33 np0005466031 podman[342695]: 2025-10-02 13:32:33.619066763 +0000 UTC m=+0.050373863 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:32:33 np0005466031 podman[342694]: 2025-10-02 13:32:33.62140628 +0000 UTC m=+0.055501070 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:32:33 np0005466031 nova_compute[235803]: 2025-10-02 13:32:33.655 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:33 np0005466031 nova_compute[235803]: 2025-10-02 13:32:33.656 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:32:33 np0005466031 nova_compute[235803]: 2025-10-02 13:32:33.696 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:32:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:33.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:34.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:32:35 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2730303382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:32:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:32:35 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2730303382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:32:35 np0005466031 nova_compute[235803]: 2025-10-02 13:32:35.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:35.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:36.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:37.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:37 np0005466031 nova_compute[235803]: 2025-10-02 13:32:37.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:38.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:39 np0005466031 nova_compute[235803]: 2025-10-02 13:32:39.671 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:39.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:40 np0005466031 nova_compute[235803]: 2025-10-02 13:32:40.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:40 np0005466031 nova_compute[235803]: 2025-10-02 13:32:40.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:40 np0005466031 nova_compute[235803]: 2025-10-02 13:32:40.714 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:40 np0005466031 nova_compute[235803]: 2025-10-02 13:32:40.714 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:40 np0005466031 nova_compute[235803]: 2025-10-02 13:32:40.714 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:40 np0005466031 nova_compute[235803]: 2025-10-02 13:32:40.714 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:32:40 np0005466031 nova_compute[235803]: 2025-10-02 13:32:40.715 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:32:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:40.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:40 np0005466031 nova_compute[235803]: 2025-10-02 13:32:40.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:32:41 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3679501352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:32:41 np0005466031 nova_compute[235803]: 2025-10-02 13:32:41.160 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:32:41 np0005466031 nova_compute[235803]: 2025-10-02 13:32:41.322 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:32:41 np0005466031 nova_compute[235803]: 2025-10-02 13:32:41.324 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4154MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:32:41 np0005466031 nova_compute[235803]: 2025-10-02 13:32:41.324 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:41 np0005466031 nova_compute[235803]: 2025-10-02 13:32:41.324 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:41 np0005466031 nova_compute[235803]: 2025-10-02 13:32:41.426 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:32:41 np0005466031 nova_compute[235803]: 2025-10-02 13:32:41.427 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:32:41 np0005466031 nova_compute[235803]: 2025-10-02 13:32:41.460 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:32:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:32:41 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/556147337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:32:41 np0005466031 nova_compute[235803]: 2025-10-02 13:32:41.911 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:32:41 np0005466031 nova_compute[235803]: 2025-10-02 13:32:41.920 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:32:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:41.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:41 np0005466031 nova_compute[235803]: 2025-10-02 13:32:41.968 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:32:41 np0005466031 nova_compute[235803]: 2025-10-02 13:32:41.971 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:32:41 np0005466031 nova_compute[235803]: 2025-10-02 13:32:41.972 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:42.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:42 np0005466031 nova_compute[235803]: 2025-10-02 13:32:42.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:43.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:43 np0005466031 nova_compute[235803]: 2025-10-02 13:32:43.972 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:44 np0005466031 nova_compute[235803]: 2025-10-02 13:32:44.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:44 np0005466031 nova_compute[235803]: 2025-10-02 13:32:44.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:44.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:45 np0005466031 nova_compute[235803]: 2025-10-02 13:32:45.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:45.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:46 np0005466031 nova_compute[235803]: 2025-10-02 13:32:46.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:46.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:47.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:47 np0005466031 nova_compute[235803]: 2025-10-02 13:32:47.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:48 np0005466031 nova_compute[235803]: 2025-10-02 13:32:48.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:48 np0005466031 nova_compute[235803]: 2025-10-02 13:32:48.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:32:48 np0005466031 nova_compute[235803]: 2025-10-02 13:32:48.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:32:48 np0005466031 nova_compute[235803]: 2025-10-02 13:32:48.753 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:32:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:48.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:49.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:50 np0005466031 nova_compute[235803]: 2025-10-02 13:32:50.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:50 np0005466031 nova_compute[235803]: 2025-10-02 13:32:50.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:32:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:50.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:50 np0005466031 nova_compute[235803]: 2025-10-02 13:32:50.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:51.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:52.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:52 np0005466031 nova_compute[235803]: 2025-10-02 13:32:52.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:53.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:54.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:55 np0005466031 nova_compute[235803]: 2025-10-02 13:32:55.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:55.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:56 np0005466031 nova_compute[235803]: 2025-10-02 13:32:56.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:56.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:57 np0005466031 podman[342839]: 2025-10-02 13:32:57.627894257 +0000 UTC m=+0.058561639 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 09:32:57 np0005466031 podman[342840]: 2025-10-02 13:32:57.663374809 +0000 UTC m=+0.089849660 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  2 09:32:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:57.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:57 np0005466031 nova_compute[235803]: 2025-10-02 13:32:57.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:32:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:58.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:32:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:32:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:59.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:00.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:00 np0005466031 nova_compute[235803]: 2025-10-02 13:33:00.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:33:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:01.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:33:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:02.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:02 np0005466031 nova_compute[235803]: 2025-10-02 13:33:02.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:33:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:03.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:33:04 np0005466031 podman[342940]: 2025-10-02 13:33:04.632797439 +0000 UTC m=+0.059570818 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:33:04 np0005466031 podman[342941]: 2025-10-02 13:33:04.655120102 +0000 UTC m=+0.079922104 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:33:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:04.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:05 np0005466031 nova_compute[235803]: 2025-10-02 13:33:05.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:05.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:06.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:07.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:07 np0005466031 nova_compute[235803]: 2025-10-02 13:33:07.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:33:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:08.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:33:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:09.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:10 np0005466031 nova_compute[235803]: 2025-10-02 13:33:10.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:10.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:10 np0005466031 nova_compute[235803]: 2025-10-02 13:33:10.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:33:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:11.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:33:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:12.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:12 np0005466031 nova_compute[235803]: 2025-10-02 13:33:12.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:13.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:33:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:14.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:33:15 np0005466031 nova_compute[235803]: 2025-10-02 13:33:15.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:33:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:15.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:33:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:16.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:17.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:18 np0005466031 nova_compute[235803]: 2025-10-02 13:33:18.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:18.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:19 np0005466031 ovn_controller[132413]: 2025-10-02T13:33:19Z|00900|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct  2 09:33:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:19.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:20 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:33:20 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:33:20 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:33:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:20.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:20 np0005466031 nova_compute[235803]: 2025-10-02 13:33:20.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:21.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:22.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:23 np0005466031 nova_compute[235803]: 2025-10-02 13:33:23.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:33:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:23.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:33:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:24.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:33:25.900 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:33:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:33:25.901 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:33:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:33:25.901 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:33:25 np0005466031 nova_compute[235803]: 2025-10-02 13:33:25.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:25.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:33:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:26.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:33:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:33:27 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:33:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:28 np0005466031 nova_compute[235803]: 2025-10-02 13:33:28.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:28.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:28 np0005466031 podman[343224]: 2025-10-02 13:33:28.630864265 +0000 UTC m=+0.058573338 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:33:28 np0005466031 podman[343225]: 2025-10-02 13:33:28.662408954 +0000 UTC m=+0.088441989 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:33:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:33:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:28.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:33:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:30.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:33:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:30.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:33:30 np0005466031 nova_compute[235803]: 2025-10-02 13:33:30.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:31 np0005466031 nova_compute[235803]: 2025-10-02 13:33:31.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:32.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:33:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:32.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:33:33 np0005466031 nova_compute[235803]: 2025-10-02 13:33:33.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:33:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:34.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:33:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:34.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:35 np0005466031 podman[343273]: 2025-10-02 13:33:35.647300814 +0000 UTC m=+0.079774750 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 09:33:35 np0005466031 podman[343272]: 2025-10-02 13:33:35.652852464 +0000 UTC m=+0.087505133 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:33:35 np0005466031 nova_compute[235803]: 2025-10-02 13:33:35.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:36.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:33:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:36.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:33:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:38 np0005466031 nova_compute[235803]: 2025-10-02 13:33:38.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:38.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:33:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:38.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:33:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:40.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:40.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:40 np0005466031 nova_compute[235803]: 2025-10-02 13:33:40.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:41 np0005466031 nova_compute[235803]: 2025-10-02 13:33:41.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:41 np0005466031 nova_compute[235803]: 2025-10-02 13:33:41.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:42.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:42 np0005466031 nova_compute[235803]: 2025-10-02 13:33:42.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:42 np0005466031 nova_compute[235803]: 2025-10-02 13:33:42.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:42 np0005466031 nova_compute[235803]: 2025-10-02 13:33:42.731 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:33:42 np0005466031 nova_compute[235803]: 2025-10-02 13:33:42.732 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:33:42 np0005466031 nova_compute[235803]: 2025-10-02 13:33:42.732 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:33:42 np0005466031 nova_compute[235803]: 2025-10-02 13:33:42.733 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:33:42 np0005466031 nova_compute[235803]: 2025-10-02 13:33:42.733 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:33:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:42.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:43 np0005466031 nova_compute[235803]: 2025-10-02 13:33:43.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:33:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2390697569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:33:43 np0005466031 nova_compute[235803]: 2025-10-02 13:33:43.230 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:33:43 np0005466031 nova_compute[235803]: 2025-10-02 13:33:43.387 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:33:43 np0005466031 nova_compute[235803]: 2025-10-02 13:33:43.388 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4146MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:33:43 np0005466031 nova_compute[235803]: 2025-10-02 13:33:43.388 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:33:43 np0005466031 nova_compute[235803]: 2025-10-02 13:33:43.389 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:33:43 np0005466031 nova_compute[235803]: 2025-10-02 13:33:43.531 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:33:43 np0005466031 nova_compute[235803]: 2025-10-02 13:33:43.531 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:33:43 np0005466031 nova_compute[235803]: 2025-10-02 13:33:43.558 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:33:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:33:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:44.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:33:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:33:44 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3466644390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:33:44 np0005466031 nova_compute[235803]: 2025-10-02 13:33:44.047 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:33:44 np0005466031 nova_compute[235803]: 2025-10-02 13:33:44.054 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:33:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:44.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:45 np0005466031 nova_compute[235803]: 2025-10-02 13:33:45.556 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:33:45 np0005466031 nova_compute[235803]: 2025-10-02 13:33:45.558 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:33:45 np0005466031 nova_compute[235803]: 2025-10-02 13:33:45.558 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:33:45 np0005466031 nova_compute[235803]: 2025-10-02 13:33:45.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:33:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:46.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:33:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:46.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:48 np0005466031 nova_compute[235803]: 2025-10-02 13:33:48.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:48.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:33:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:48.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:33:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:50.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:50 np0005466031 nova_compute[235803]: 2025-10-02 13:33:50.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:50.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:51 np0005466031 nova_compute[235803]: 2025-10-02 13:33:51.559 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:51 np0005466031 nova_compute[235803]: 2025-10-02 13:33:51.560 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:33:51 np0005466031 nova_compute[235803]: 2025-10-02 13:33:51.560 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:33:51 np0005466031 nova_compute[235803]: 2025-10-02 13:33:51.583 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:33:51 np0005466031 nova_compute[235803]: 2025-10-02 13:33:51.583 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:51 np0005466031 nova_compute[235803]: 2025-10-02 13:33:51.585 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:51 np0005466031 nova_compute[235803]: 2025-10-02 13:33:51.585 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:33:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:52.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:33:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:52.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:33:53 np0005466031 nova_compute[235803]: 2025-10-02 13:33:53.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:54.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:54.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:55 np0005466031 nova_compute[235803]: 2025-10-02 13:33:55.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:56.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:56.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:57 np0005466031 nova_compute[235803]: 2025-10-02 13:33:57.638 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:58 np0005466031 nova_compute[235803]: 2025-10-02 13:33:58.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:58.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.334715) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038334779, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1987, "num_deletes": 252, "total_data_size": 4783196, "memory_usage": 4836896, "flush_reason": "Manual Compaction"}
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038344833, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 1865443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87321, "largest_seqno": 89303, "table_properties": {"data_size": 1859405, "index_size": 3048, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16082, "raw_average_key_size": 21, "raw_value_size": 1845919, "raw_average_value_size": 2422, "num_data_blocks": 136, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411862, "oldest_key_time": 1759411862, "file_creation_time": 1759412038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 10166 microseconds, and 5185 cpu microseconds.
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.344890) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 1865443 bytes OK
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.344909) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.346512) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.346523) EVENT_LOG_v1 {"time_micros": 1759412038346519, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.346559) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 4774303, prev total WAL file size 4774303, number of live WAL files 2.
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.347942) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303038' seq:72057594037927935, type:22 .. '6D6772737461740033323630' seq:0, type:0; will stop at (end)
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(1821KB)], [180(12MB)]
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038347981, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 15061013, "oldest_snapshot_seqno": -1}
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 10882 keys, 12449559 bytes, temperature: kUnknown
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038434667, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 12449559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12381740, "index_size": 39482, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27269, "raw_key_size": 286733, "raw_average_key_size": 26, "raw_value_size": 12194148, "raw_average_value_size": 1120, "num_data_blocks": 1496, "num_entries": 10882, "num_filter_entries": 10882, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759412038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.434965) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 12449559 bytes
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.436073) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.6 rd, 143.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.6 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(14.7) write-amplify(6.7) OK, records in: 11323, records dropped: 441 output_compression: NoCompression
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.436089) EVENT_LOG_v1 {"time_micros": 1759412038436081, "job": 116, "event": "compaction_finished", "compaction_time_micros": 86758, "compaction_time_cpu_micros": 31865, "output_level": 6, "num_output_files": 1, "total_output_size": 12449559, "num_input_records": 11323, "num_output_records": 10882, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038436585, "job": 116, "event": "table_file_deletion", "file_number": 182}
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038439188, "job": 116, "event": "table_file_deletion", "file_number": 180}
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.347863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.439251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.439256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.439258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.439260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:58 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:33:58.439262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:33:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:58.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:59 np0005466031 podman[343414]: 2025-10-02 13:33:59.617370109 +0000 UTC m=+0.050898188 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:33:59 np0005466031 podman[343415]: 2025-10-02 13:33:59.643968905 +0000 UTC m=+0.074354083 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 09:34:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:00.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:00 np0005466031 nova_compute[235803]: 2025-10-02 13:34:00.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:00.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:02.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:34:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:02.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:34:03 np0005466031 nova_compute[235803]: 2025-10-02 13:34:03.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:03 np0005466031 systemd-logind[786]: New session 71 of user zuul.
Oct  2 09:34:03 np0005466031 systemd[1]: Started Session 71 of User zuul.
Oct  2 09:34:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:04.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:34:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:04.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:34:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:34:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4257618237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:34:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:34:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4257618237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:34:05 np0005466031 nova_compute[235803]: 2025-10-02 13:34:05.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:06.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:06 np0005466031 podman[343686]: 2025-10-02 13:34:06.651878778 +0000 UTC m=+0.073788407 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:34:06 np0005466031 podman[343687]: 2025-10-02 13:34:06.65404128 +0000 UTC m=+0.074930230 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:34:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:06.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  2 09:34:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/630458637' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  2 09:34:08 np0005466031 nova_compute[235803]: 2025-10-02 13:34:08.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:08.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:34:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:08.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:34:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:10.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:10 np0005466031 nova_compute[235803]: 2025-10-02 13:34:10.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:10.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:11 np0005466031 ovs-vsctl[343837]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  2 09:34:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:12.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:12 np0005466031 virtqemud[235323]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  2 09:34:12 np0005466031 virtqemud[235323]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  2 09:34:12 np0005466031 virtqemud[235323]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  2 09:34:12 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: cache status {prefix=cache status} (starting...)
Oct  2 09:34:12 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: client ls {prefix=client ls} (starting...)
Oct  2 09:34:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:12.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:13 np0005466031 lvm[344193]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 09:34:13 np0005466031 lvm[344193]: VG ceph_vg0 finished
Oct  2 09:34:13 np0005466031 nova_compute[235803]: 2025-10-02 13:34:13.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:13 np0005466031 kernel: block sr0: the capability attribute has been deprecated.
Oct  2 09:34:13 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: damage ls {prefix=damage ls} (starting...)
Oct  2 09:34:13 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: dump loads {prefix=dump loads} (starting...)
Oct  2 09:34:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct  2 09:34:13 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/25412291' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  2 09:34:13 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  2 09:34:13 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  2 09:34:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:14.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:14 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  2 09:34:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 09:34:14 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2033142661' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 09:34:14 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  2 09:34:14 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  2 09:34:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  2 09:34:14 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1531075006' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  2 09:34:14 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  2 09:34:14 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: ops {prefix=ops} (starting...)
Oct  2 09:34:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:34:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:14.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:34:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct  2 09:34:15 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/133173445' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  2 09:34:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct  2 09:34:15 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/278414120' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  2 09:34:15 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: session ls {prefix=session ls} (starting...)
Oct  2 09:34:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 09:34:15 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4036943251' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 09:34:15 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: status {prefix=status} (starting...)
Oct  2 09:34:15 np0005466031 nova_compute[235803]: 2025-10-02 13:34:15.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:34:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:16.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:34:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 09:34:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3869755304' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 09:34:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct  2 09:34:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2540074245' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  2 09:34:16 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:34:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 09:34:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/977154715' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 09:34:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct  2 09:34:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3242153421' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  2 09:34:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 09:34:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2460489288' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 09:34:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:17.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  2 09:34:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2199224408' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  2 09:34:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct  2 09:34:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3429833707' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  2 09:34:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 09:34:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/113161508' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 09:34:18 np0005466031 nova_compute[235803]: 2025-10-02 13:34:18.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:34:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:18.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct  2 09:34:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2595115548' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  2 09:34:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 09:34:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2588293621' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 09:34:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:19.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 09:34:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1278800428' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 486760448 unmapped: 69779456 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 heartbeat osd_stat(store_statfs(0x19db4d000/0x0/0x1bfc00000, data 0x2e0ab85/0x2fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e321a7000 session 0x559e345625a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e3296b400 session 0x559e363d4d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 69894144 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4736086 data_alloc: 234881024 data_used: 14077952
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e31e32000 session 0x559e3732cd20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480796672 unmapped: 75743232 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480796672 unmapped: 75743232 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480796672 unmapped: 75743232 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480772096 unmapped: 75767808 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480772096 unmapped: 75767808 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 heartbeat osd_stat(store_statfs(0x19e4d8000/0x0/0x1bfc00000, data 0x248aae0/0x2666000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4734926 data_alloc: 234881024 data_used: 14073856
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480772096 unmapped: 75767808 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.582529068s of 11.423514366s, submitted: 150
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e32973400 session 0x559e3487ba40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480772096 unmapped: 75767808 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480788480 unmapped: 75751424 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e320d5400 session 0x559e31f8ef00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 474284032 unmapped: 82255872 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 474284032 unmapped: 82255872 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4566974 data_alloc: 218103808 data_used: 5292032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 heartbeat osd_stat(store_statfs(0x19f2de000/0x0/0x1bfc00000, data 0x1684abd/0x185f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 474284032 unmapped: 82255872 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 474284032 unmapped: 82255872 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 heartbeat osd_stat(store_statfs(0x19f2de000/0x0/0x1bfc00000, data 0x1684abd/0x185f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 474284032 unmapped: 82255872 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 474284032 unmapped: 82255872 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 474284032 unmapped: 82255872 heap: 556539904 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e321a7000 session 0x559e3732d2c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e33dd2400 session 0x559e32d4cf00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e36875800 session 0x559e32d83e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e33dd2400 session 0x559e321c81e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4567134 data_alloc: 218103808 data_used: 5296128
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e31e32000 session 0x559e380b5c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e320d5400 session 0x559e31c0ed20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e321a7000 session 0x559e346ca3c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e321a7000 session 0x559e344201e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 80420864 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 heartbeat osd_stat(store_statfs(0x19d559000/0x0/0x1bfc00000, data 0x226aabd/0x2445000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e31e32000 session 0x559e31c0f0e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 80420864 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e320d5400 session 0x559e380b4960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 80420864 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e33dd2400 session 0x559e321fba40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.354463577s of 11.609065056s, submitted: 44
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e36875800 session 0x559e321f0780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 479756288 unmapped: 87056384 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e320d5400 session 0x559e321350e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e321a7000 session 0x559e346450e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e33dd2400 session 0x559e346b2960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e32973400 session 0x559e32ee54a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 heartbeat osd_stat(store_statfs(0x19d558000/0x0/0x1bfc00000, data 0x226aae0/0x2446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [0,0,0,0,0,2,6])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e36876c00 session 0x559e346b3c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e320d5400 session 0x559e34562780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477208576 unmapped: 89604096 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e321a7000 session 0x559e34645860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e32973400 session 0x559e328eb0e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 ms_handle_reset con 0x559e33dd2400 session 0x559e346b2960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4782520 data_alloc: 218103808 data_used: 11575296
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 89595904 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 89595904 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 89595904 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 89595904 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 89595904 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 heartbeat osd_stat(store_statfs(0x19caa4000/0x0/0x1bfc00000, data 0x2d1cb52/0x2efa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4782520 data_alloc: 218103808 data_used: 11575296
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 89595904 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477224960 unmapped: 89587712 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 358 handle_osd_map epochs [358,358], i have 358, src has [1,358]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477175808 unmapped: 89636864 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 358 heartbeat osd_stat(store_statfs(0x19caa5000/0x0/0x1bfc00000, data 0x2d1caf0/0x2ef9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.739198685s of 10.098731995s, submitted: 74
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 358 ms_handle_reset con 0x559e34c1dc00 session 0x559e321f0780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477175808 unmapped: 89636864 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 358 heartbeat osd_stat(store_statfs(0x19d09e000/0x0/0x1bfc00000, data 0x272378d/0x2900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477175808 unmapped: 89636864 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4736513 data_alloc: 218103808 data_used: 7593984
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477175808 unmapped: 89636864 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 358 heartbeat osd_stat(store_statfs(0x19d09e000/0x0/0x1bfc00000, data 0x272378d/0x2900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 358 heartbeat osd_stat(store_statfs(0x19d09e000/0x0/0x1bfc00000, data 0x272378d/0x2900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477143040 unmapped: 89669632 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 358 ms_handle_reset con 0x559e320d5400 session 0x559e346ca3c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 358 ms_handle_reset con 0x559e321a7000 session 0x559e380b5c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 358 ms_handle_reset con 0x559e32973400 session 0x559e321c81e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 358 ms_handle_reset con 0x559e33dd2400 session 0x559e32d83e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 358 ms_handle_reset con 0x559e34c1dc00 session 0x559e32d4cf00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 358 ms_handle_reset con 0x559e320d5400 session 0x559e34644780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 477216768 unmapped: 89595904 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478265344 unmapped: 88547328 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e34420f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19c211000/0x0/0x1bfc00000, data 0x35ad32e/0x378c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478101504 unmapped: 88711168 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849814 data_alloc: 218103808 data_used: 7606272
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478101504 unmapped: 88711168 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19c212000/0x0/0x1bfc00000, data 0x35ad32e/0x378c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478101504 unmapped: 88711168 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478068736 unmapped: 88743936 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478068736 unmapped: 88743936 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478068736 unmapped: 88743936 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4951302 data_alloc: 234881024 data_used: 20369408
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19c212000/0x0/0x1bfc00000, data 0x35ad32e/0x378c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478068736 unmapped: 88743936 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478068736 unmapped: 88743936 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19c212000/0x0/0x1bfc00000, data 0x35ad32e/0x378c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478068736 unmapped: 88743936 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478068736 unmapped: 88743936 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e33dd2400 session 0x559e348ea780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e36875800 session 0x559e31c110e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.550229073s of 16.384210587s, submitted: 55
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478068736 unmapped: 88743936 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e31e32000 session 0x559e31c10780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e320d5400 session 0x559e380b41e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e32973400 session 0x559e348ebe00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4950830 data_alloc: 234881024 data_used: 20377600
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19c212000/0x0/0x1bfc00000, data 0x35ad32e/0x378c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478068736 unmapped: 88743936 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e380b43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e33dd2400 session 0x559e346cbc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478068736 unmapped: 88743936 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 478068736 unmapped: 88743936 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 479870976 unmapped: 86941696 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19c211000/0x0/0x1bfc00000, data 0x35ad33e/0x378d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 483516416 unmapped: 83296256 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5060348 data_alloc: 251658240 data_used: 35524608
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 483516416 unmapped: 83296256 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 483516416 unmapped: 83296256 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 483516416 unmapped: 83296256 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19c211000/0x0/0x1bfc00000, data 0x35ad33e/0x378d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 483516416 unmapped: 83296256 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.630571365s of 10.123828888s, submitted: 6
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 483516416 unmapped: 83296256 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5060436 data_alloc: 251658240 data_used: 35524608
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 483516416 unmapped: 83296256 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.1 total, 600.0 interval
Cumulative writes: 64K writes, 263K keys, 64K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.06 MB/s
Cumulative WAL: 64K writes, 23K syncs, 2.75 writes per sync, written: 0.27 GB, 0.06 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 52.86 MB, 0.09 MB/s
Interval WAL: 10K writes, 3712 syncs, 2.77 writes per sync, written: 0.05 GB, 0.09 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19c211000/0x0/0x1bfc00000, data 0x35ad33e/0x378d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 483516416 unmapped: 83296256 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 483516416 unmapped: 83296256 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 483516416 unmapped: 83296256 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 486047744 unmapped: 80764928 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19bf0a000/0x0/0x1bfc00000, data 0x38b433e/0x3a94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,11,22])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5137956 data_alloc: 251658240 data_used: 35528704
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #57. Immutable memtables: 13.
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 493502464 unmapped: 73310208 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19a7f6000/0x0/0x1bfc00000, data 0x3e2833e/0x4008000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1,0,4])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19a73b000/0x0/0x1bfc00000, data 0x3edb33e/0x40bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2,2,2])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 491012096 unmapped: 75800576 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 491012096 unmapped: 75800576 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 492208128 unmapped: 74604544 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.353443623s of 10.068296432s, submitted: 76
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3689e000 session 0x559e347ff4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 490708992 unmapped: 76103680 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5144388 data_alloc: 251658240 data_used: 35512320
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 490717184 unmapped: 76095488 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 490807296 unmapped: 76005376 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19a4f9000/0x0/0x1bfc00000, data 0x411f33e/0x42ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 490840064 unmapped: 75972608 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 490840064 unmapped: 75972608 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 490840064 unmapped: 75972608 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5165102 data_alloc: 251658240 data_used: 35954688
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 490840064 unmapped: 75972608 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 490848256 unmapped: 75964416 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19a4e3000/0x0/0x1bfc00000, data 0x413b33e/0x431b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 490848256 unmapped: 75964416 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500146176 unmapped: 66666496 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.384799957s of 10.006175041s, submitted: 204
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 501309440 unmapped: 65503232 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5325958 data_alloc: 251658240 data_used: 37175296
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 501317632 unmapped: 65495040 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x199127000/0x0/0x1bfc00000, data 0x54f733e/0x56d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 501317632 unmapped: 65495040 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 501317632 unmapped: 65495040 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3689e000 session 0x559e31c0e000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500383744 unmapped: 66428928 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e320d5400 session 0x559e321f1680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500383744 unmapped: 66428928 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5008304 data_alloc: 234881024 data_used: 25026560
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19af91000/0x0/0x1bfc00000, data 0x368d2cc/0x386b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e31f8f680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500400128 unmapped: 66412544 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500400128 unmapped: 66412544 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e32973400 session 0x559e321f3860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e33dd2400 session 0x559e355a4d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500400128 unmapped: 66412544 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e33dd2400 session 0x559e32e45a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500400128 unmapped: 66412544 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e320d5400 session 0x559e355a5e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500400128 unmapped: 66412544 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5039442 data_alloc: 234881024 data_used: 25026560
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3689ec00 session 0x559e3732cb40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.862977982s of 11.071019173s, submitted: 74
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e379c3400 session 0x559e321dab40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19ac54000/0x0/0x1bfc00000, data 0x39cc2cc/0x3baa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500400128 unmapped: 66412544 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494272512 unmapped: 72540160 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e31c101e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494288896 unmapped: 72523776 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e320d5400 session 0x559e3732d860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b83e000/0x0/0x1bfc00000, data 0x2de22a9/0x2fbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494288896 unmapped: 72523776 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e344214a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494321664 unmapped: 72491008 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4908380 data_alloc: 234881024 data_used: 20881408
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494313472 unmapped: 72499200 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e33dd2400 session 0x559e32e67a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3689ec00 session 0x559e346b2d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b83e000/0x0/0x1bfc00000, data 0x2de22b9/0x2fc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4910290 data_alloc: 234881024 data_used: 20889600
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b83e000/0x0/0x1bfc00000, data 0x2de22b9/0x2fc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.792360306s of 11.808106422s, submitted: 283
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b83c000/0x0/0x1bfc00000, data 0x2de32b9/0x2fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4925638 data_alloc: 234881024 data_used: 23060480
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b83c000/0x0/0x1bfc00000, data 0x2de32b9/0x2fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4925638 data_alloc: 234881024 data_used: 23060480
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b83c000/0x0/0x1bfc00000, data 0x2de32b9/0x2fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 72466432 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 496730112 unmapped: 70082560 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.473401070s of 10.073335648s, submitted: 45
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b147000/0x0/0x1bfc00000, data 0x34d02b9/0x36ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [0,0,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 496074752 unmapped: 70737920 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 496074752 unmapped: 70737920 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 496074752 unmapped: 70737920 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5000894 data_alloc: 234881024 data_used: 23818240
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 496074752 unmapped: 70737920 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 496074752 unmapped: 70737920 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b126000/0x0/0x1bfc00000, data 0x34f92b9/0x36d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 497139712 unmapped: 69672960 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 497139712 unmapped: 69672960 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 497147904 unmapped: 69664768 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e379c3400 session 0x559e355a5c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e32973400 session 0x559e380b5e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4999278 data_alloc: 234881024 data_used: 23805952
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e320d5400 session 0x559e32e45680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 497115136 unmapped: 69697536 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b89b000/0x0/0x1bfc00000, data 0x2aa42a9/0x2c81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 497115136 unmapped: 69697536 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 497115136 unmapped: 69697536 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b89b000/0x0/0x1bfc00000, data 0x2aa42a9/0x2c81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 497115136 unmapped: 69697536 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b89b000/0x0/0x1bfc00000, data 0x2aa42a9/0x2c81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 497115136 unmapped: 69697536 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4887639 data_alloc: 234881024 data_used: 20869120
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b89b000/0x0/0x1bfc00000, data 0x2aa42a9/0x2c81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 497115136 unmapped: 69697536 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.976449966s of 14.115625381s, submitted: 47
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b89b000/0x0/0x1bfc00000, data 0x2aa42a9/0x2c81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e34c1dc00 session 0x559e32d82960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e36875800 session 0x559e346972c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 497115136 unmapped: 69697536 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 485384192 unmapped: 81428480 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b89b000/0x0/0x1bfc00000, data 0x2aa42a9/0x2c81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e363f4b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480501760 unmapped: 86310912 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480501760 unmapped: 86310912 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4581858 data_alloc: 218103808 data_used: 5312512
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480501760 unmapped: 86310912 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480501760 unmapped: 86310912 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480501760 unmapped: 86310912 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19d593000/0x0/0x1bfc00000, data 0x108d237/0x1268000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480501760 unmapped: 86310912 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480501760 unmapped: 86310912 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4581858 data_alloc: 218103808 data_used: 5312512
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19d593000/0x0/0x1bfc00000, data 0x108d237/0x1268000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480501760 unmapped: 86310912 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e346cba40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480501760 unmapped: 86310912 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e320d5400 session 0x559e34562960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480501760 unmapped: 86310912 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480501760 unmapped: 86310912 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19d593000/0x0/0x1bfc00000, data 0x108d237/0x1268000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.396873474s of 13.006913185s, submitted: 28
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e32973400 session 0x559e363f45a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e34c1dc00 session 0x559e348eaf00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480501760 unmapped: 86310912 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4583686 data_alloc: 218103808 data_used: 5312512
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19d592000/0x0/0x1bfc00000, data 0x108d247/0x1269000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480509952 unmapped: 86302720 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480509952 unmapped: 86302720 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e36875800 session 0x559e321c85a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e36875800 session 0x559e348ebe00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e320d5400 session 0x559e34421860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e34563680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480460800 unmapped: 86351872 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e32973400 session 0x559e321d5860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e34c1dc00 session 0x559e3487a000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e320d5400 session 0x559e321345a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e35ba92c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19d18b000/0x0/0x1bfc00000, data 0x1495280/0x1673000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e36875800 session 0x559e34420780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e32973400 session 0x559e32135680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19d18b000/0x0/0x1bfc00000, data 0x1495280/0x1673000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [0,3,0,0,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481591296 unmapped: 85221376 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e33dd2400 session 0x559e347ffe00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481796096 unmapped: 85016576 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e320d5400 session 0x559e3732d0e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4678130 data_alloc: 218103808 data_used: 5312512
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e321f21e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e32973400 session 0x559e346454a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481607680 unmapped: 85204992 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19cc4a000/0x0/0x1bfc00000, data 0x19d52e2/0x1bb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481607680 unmapped: 85204992 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481607680 unmapped: 85204992 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e36875800 session 0x559e321c8780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481615872 unmapped: 85196800 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3689ec00 session 0x559e363f4000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481615872 unmapped: 85196800 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19cc4a000/0x0/0x1bfc00000, data 0x19d52b9/0x1bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e320d5400 session 0x559e321d43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.919178963s of 10.886935234s, submitted: 153
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e321d94a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4679426 data_alloc: 218103808 data_used: 5316608
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481574912 unmapped: 85237760 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481574912 unmapped: 85237760 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481378304 unmapped: 85434368 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19cc4a000/0x0/0x1bfc00000, data 0x19d52c8/0x1bb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481378304 unmapped: 85434368 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481378304 unmapped: 85434368 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4710278 data_alloc: 218103808 data_used: 9547776
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481378304 unmapped: 85434368 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481378304 unmapped: 85434368 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481378304 unmapped: 85434368 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19cc4a000/0x0/0x1bfc00000, data 0x19d52c8/0x1bb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481533952 unmapped: 85278720 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3689e000 session 0x559e321c94a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3296a800 session 0x559e3487ab40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e34ebd000 session 0x559e380b4f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e320d5400 session 0x559e363d50e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e32e67a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481566720 unmapped: 85245952 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19cc26000/0x0/0x1bfc00000, data 0x19f92c8/0x1bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3296a800 session 0x559e344214a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3689e000 session 0x559e321db860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3566bc00 session 0x559e34697e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.626361847s of 10.112470627s, submitted: 31
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3566bc00 session 0x559e3732de00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4743375 data_alloc: 218103808 data_used: 9551872
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e320d5400 session 0x559e346cb4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481509376 unmapped: 85303296 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481509376 unmapped: 85303296 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481509376 unmapped: 85303296 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 479649792 unmapped: 87162880 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480124928 unmapped: 86687744 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4843165 data_alloc: 218103808 data_used: 13611008
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19bf2e000/0x0/0x1bfc00000, data 0x26e933a/0x28ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 480124928 unmapped: 86687744 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19bf2e000/0x0/0x1bfc00000, data 0x26e933a/0x28ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481009664 unmapped: 85803008 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19bf0a000/0x0/0x1bfc00000, data 0x271133a/0x28f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481099776 unmapped: 85712896 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19bef0000/0x0/0x1bfc00000, data 0x272533a/0x2906000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e31fafa40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481107968 unmapped: 85704704 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481107968 unmapped: 85704704 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19bef0000/0x0/0x1bfc00000, data 0x272533a/0x2906000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3296a800 session 0x559e32dda1e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4863853 data_alloc: 234881024 data_used: 14508032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 481107968 unmapped: 85704704 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3689e000 session 0x559e321d52c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.347574234s of 11.403801918s, submitted: 106
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 486400000 unmapped: 80412672 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 486023168 unmapped: 80789504 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 486146048 unmapped: 80666624 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3689e000 session 0x559e321c9c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19b21c000/0x0/0x1bfc00000, data 0x340035d/0x35e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 79503360 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4964782 data_alloc: 234881024 data_used: 16080896
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e34421860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3296a800 session 0x559e328eb0e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3566bc00 session 0x559e344205a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 487251968 unmapped: 79560704 heap: 566812672 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e36517000 session 0x559e3487b4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 487489536 unmapped: 86671360 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e363f4780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3296a800 session 0x559e32d83e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3566bc00 session 0x559e34644d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e3689e000 session 0x559e345632c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321b2c00 session 0x559e3732d4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 487997440 unmapped: 86163456 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e32973400 session 0x559e3487b2c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 86056960 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e36875800 session 0x559e3487a1e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 heartbeat osd_stat(store_statfs(0x19a503000/0x0/0x1bfc00000, data 0x41173cf/0x42fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 488366080 unmapped: 85794816 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 ms_handle_reset con 0x559e321a7000 session 0x559e363d4d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5113303 data_alloc: 234881024 data_used: 20332544
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 488366080 unmapped: 85794816 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 360 ms_handle_reset con 0x559e3566bc00 session 0x559e321c9860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 360 ms_handle_reset con 0x559e3296a800 session 0x559e31c10b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.156619072s of 10.096358299s, submitted: 218
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 488382464 unmapped: 85778432 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 360 ms_handle_reset con 0x559e321a7000 session 0x559e32e66d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 360 ms_handle_reset con 0x559e32973400 session 0x559e3487b860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 360 ms_handle_reset con 0x559e3296a800 session 0x559e363f41e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 360 heartbeat osd_stat(store_statfs(0x19a4b2000/0x0/0x1bfc00000, data 0x41630fc/0x434b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 360 ms_handle_reset con 0x559e36875800 session 0x559e321f2960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 498540544 unmapped: 75620352 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 498565120 unmapped: 75595776 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 360 ms_handle_reset con 0x559e3566bc00 session 0x559e363d4000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 492740608 unmapped: 81420288 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5203678 data_alloc: 234881024 data_used: 29523968
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 360 ms_handle_reset con 0x559e3689e000 session 0x559e31c101e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 492740608 unmapped: 81420288 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 361 ms_handle_reset con 0x559e321a7000 session 0x559e32ee5860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 492806144 unmapped: 81354752 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 361 heartbeat osd_stat(store_statfs(0x199e83000/0x0/0x1bfc00000, data 0x47930aa/0x497b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 494845952 unmapped: 79314944 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500105216 unmapped: 74055680 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 362 ms_handle_reset con 0x559e36875800 session 0x559e363d5680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500105216 unmapped: 74055680 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5362280 data_alloc: 251658240 data_used: 41517056
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 362 ms_handle_reset con 0x559e3a858000 session 0x559e321daf00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 362 heartbeat osd_stat(store_statfs(0x19955f000/0x0/0x1bfc00000, data 0x50b49cc/0x529f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500219904 unmapped: 73940992 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 362 ms_handle_reset con 0x559e36010c00 session 0x559e32a01860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 362 ms_handle_reset con 0x559e36010c00 session 0x559e32ee5860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500293632 unmapped: 73867264 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.039307594s of 11.103222847s, submitted: 114
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500301824 unmapped: 73859072 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 362 handle_osd_map epochs [362,363], i have 362, src has [1,363]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500310016 unmapped: 73850880 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 500310016 unmapped: 73850880 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5378386 data_alloc: 251658240 data_used: 42221568
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 heartbeat osd_stat(store_statfs(0x199558000/0x0/0x1bfc00000, data 0x50b950b/0x52a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x213ff9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 501358592 unmapped: 72802304 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 heartbeat osd_stat(store_statfs(0x199b36000/0x0/0x1bfc00000, data 0x43304fb/0x451b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2180f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e36875800 session 0x559e328eb0e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 497434624 unmapped: 76726272 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e3689e000 session 0x559e32d82960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e321a7000 session 0x559e3487b2c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 497426432 unmapped: 76734464 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 501956608 unmapped: 72204288 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 heartbeat osd_stat(store_statfs(0x198bb7000/0x0/0x1bfc00000, data 0x564d4ec/0x5837000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2180f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 501989376 unmapped: 72171520 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5382223 data_alloc: 234881024 data_used: 28753920
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 502030336 unmapped: 72130560 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 502259712 unmapped: 71901184 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e320d5400 session 0x559e34420f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e3dcf7000 session 0x559e363d5860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.209811211s of 10.031907082s, submitted: 213
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e3a858000 session 0x559e321345a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 502259712 unmapped: 71901184 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 heartbeat osd_stat(store_statfs(0x198b11000/0x0/0x1bfc00000, data 0x56f34ec/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2180f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 heartbeat osd_stat(store_statfs(0x198b11000/0x0/0x1bfc00000, data 0x56f34ec/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2180f9c6), peers [0,1] op hist [0,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 502292480 unmapped: 71868416 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e321a7000 session 0x559e348eb680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e36875800 session 0x559e355a5680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e3689e000 session 0x559e346b23c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e3689e000 session 0x559e380b52c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e36875800 session 0x559e31fca3c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e3dcf7000 session 0x559e32135c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e3a858000 session 0x559e32ddb4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 502554624 unmapped: 71606272 heap: 574160896 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e31fe7800 session 0x559e34644d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e36875800 session 0x559e3487a1e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e3689e000 session 0x559e321f21e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e3a858000 session 0x559e31c101e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5299584 data_alloc: 234881024 data_used: 31076352
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 heartbeat osd_stat(store_statfs(0x1997ee000/0x0/0x1bfc00000, data 0x4a154c9/0x4bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2180f9c6), peers [0,1] op hist [0,0,2])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e3dcf7000 session 0x559e380b5680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 509935616 unmapped: 71712768 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e36434800 session 0x559e348ea780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e36434800 session 0x559e34421a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 509935616 unmapped: 71712768 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 509943808 unmapped: 71704576 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e36875800 session 0x559e363f5c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 ms_handle_reset con 0x559e3689e000 session 0x559e31f12d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 509952000 unmapped: 71696384 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 509992960 unmapped: 71655424 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5498801 data_alloc: 251658240 data_used: 35454976
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 509992960 unmapped: 71655424 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 heartbeat osd_stat(store_statfs(0x197c89000/0x0/0x1bfc00000, data 0x615b4d9/0x6345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21c2f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 506142720 unmapped: 75505664 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.541443825s of 10.034086227s, submitted: 121
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 513875968 unmapped: 67772416 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 364 ms_handle_reset con 0x559e3c70a000 session 0x559e32dda3c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 513875968 unmapped: 67772416 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 364 ms_handle_reset con 0x559e33dd1400 session 0x559e34421e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 364 ms_handle_reset con 0x559e36434800 session 0x559e31f13a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 513875968 unmapped: 67772416 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5525589 data_alloc: 251658240 data_used: 42741760
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 518275072 unmapped: 63373312 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 364 heartbeat osd_stat(store_statfs(0x198696000/0x0/0x1bfc00000, data 0x6ba7114/0x6d90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 518365184 unmapped: 63283200 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 517251072 unmapped: 64397312 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 518299648 unmapped: 63348736 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 518299648 unmapped: 63348736 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5655716 data_alloc: 251658240 data_used: 43765760
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 518299648 unmapped: 63348736 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x19866e000/0x0/0x1bfc00000, data 0x6bd4c53/0x6dbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36875800 session 0x559e31fcb680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 518299648 unmapped: 63348736 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x19866e000/0x0/0x1bfc00000, data 0x6bd4c53/0x6dbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519421952 unmapped: 62226432 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x19866e000/0x0/0x1bfc00000, data 0x6bd4c53/0x6dbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3689e000 session 0x559e363f45a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.187506676s of 10.907769203s, submitted: 178
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519479296 unmapped: 62169088 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519479296 unmapped: 62169088 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5681652 data_alloc: 251658240 data_used: 45346816
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519479296 unmapped: 62169088 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519479296 unmapped: 62169088 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x19866c000/0x0/0x1bfc00000, data 0x6bd7c53/0x6dc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519479296 unmapped: 62169088 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519495680 unmapped: 62152704 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3c70a000 session 0x559e31c10b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36876800 session 0x559e345621e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36434800 session 0x559e355a4780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36875800 session 0x559e32d832c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519512064 unmapped: 62136320 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x19866c000/0x0/0x1bfc00000, data 0x6bd7c53/0x6dc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3689e000 session 0x559e35ba9680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3c70a000 session 0x559e34562000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3bdaa000 session 0x559e344210e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5764709 data_alloc: 251658240 data_used: 45346816
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36434800 session 0x559e380b45a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36875800 session 0x559e346ca000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3689e000 session 0x559e34645860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3c70a000 session 0x559e31c10b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e34ebd400 session 0x559e31f13a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e34ebd400 session 0x559e31f12d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36434800 session 0x559e363f5c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 520282112 unmapped: 61366272 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36875800 session 0x559e34644d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3689e000 session 0x559e32ddb4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3c70a000 session 0x559e31fca3c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e34ebd400 session 0x559e346b23c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 520339456 unmapped: 61308928 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3a858000 session 0x559e321c9e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3dcf7000 session 0x559e34644960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36434800 session 0x559e355a5680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x196f61000/0x0/0x1bfc00000, data 0x82ded37/0x84cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 516800512 unmapped: 64847872 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 516800512 unmapped: 64847872 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 516800512 unmapped: 64847872 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5571265 data_alloc: 234881024 data_used: 30314496
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36875800 session 0x559e31c0ed20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x197900000/0x0/0x1bfc00000, data 0x6bbccb5/0x6da8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 516800512 unmapped: 64847872 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e34ebd400 session 0x559e321d52c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.725527763s of 13.054359436s, submitted: 116
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 516800512 unmapped: 64847872 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36434800 session 0x559e32ddbe00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3a858000 session 0x559e32e44d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3dcf7000 session 0x559e321dab40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 516808704 unmapped: 64839680 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x197900000/0x0/0x1bfc00000, data 0x6bbccb5/0x6da8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x198684000/0x0/0x1bfc00000, data 0x6bbcce7/0x6daa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 516808704 unmapped: 64839680 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 516808704 unmapped: 64839680 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5674418 data_alloc: 251658240 data_used: 41496576
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 520871936 unmapped: 60776448 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e34a04800 session 0x559e34644f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e34ebd400 session 0x559e348eab40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 520871936 unmapped: 60776448 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 520871936 unmapped: 60776448 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36434800 session 0x559e34420960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 520871936 unmapped: 60776448 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x198684000/0x0/0x1bfc00000, data 0x6bbcce7/0x6daa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 520871936 unmapped: 60776448 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5725458 data_alloc: 251658240 data_used: 47484928
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3a858000 session 0x559e32135a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3dcf7000 session 0x559e346b34a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 520871936 unmapped: 60776448 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36877000 session 0x559e37f710e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x198683000/0x0/0x1bfc00000, data 0x6bbcd10/0x6dab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.965042114s of 10.320645332s, submitted: 40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 521158656 unmapped: 60489728 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e34ebd400 session 0x559e32e44f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36434800 session 0x559e363f5a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3a858000 session 0x559e363f50e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3dcf7000 session 0x559e32dda3c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3c70b800 session 0x559e37f701e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 521273344 unmapped: 60375040 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e34ebd400 session 0x559e31fcb680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36434800 session 0x559e321f21e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3a858000 session 0x559e35ba92c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3dcf7000 session 0x559e321db2c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36434c00 session 0x559e363d4960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 521281536 unmapped: 60366848 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x197146000/0x0/0x1bfc00000, data 0x80f8d59/0x82e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x197144000/0x0/0x1bfc00000, data 0x80f9d59/0x82e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 521281536 unmapped: 60366848 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5898608 data_alloc: 251658240 data_used: 47505408
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 523272192 unmapped: 58376192 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524075008 unmapped: 57573376 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e34ebd400 session 0x559e321c9c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524353536 unmapped: 57294848 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525795328 unmapped: 55853056 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529580032 unmapped: 52068352 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6152805 data_alloc: 268435456 data_used: 56471552
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3a858000 session 0x559e32d4cf00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x195a62000/0x0/0x1bfc00000, data 0x97ccd7c/0x99bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3dcf7000 session 0x559e3732d4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532152320 unmapped: 49496064 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3bdab800 session 0x559e363d5a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3e6c5000 session 0x559e321fba40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532185088 unmapped: 49463296 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532185088 unmapped: 49463296 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.349049568s of 11.603716850s, submitted: 206
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3bdab800 session 0x559e355a5680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3dcf7000 session 0x559e34644960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 541065216 unmapped: 40583168 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542195712 unmapped: 39452672 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6233137 data_alloc: 285212672 data_used: 68235264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542195712 unmapped: 39452672 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x195a6f000/0x0/0x1bfc00000, data 0x97ccdaf/0x99bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542228480 unmapped: 39419904 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542228480 unmapped: 39419904 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542244864 unmapped: 39403520 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x195a6f000/0x0/0x1bfc00000, data 0x97ccdaf/0x99bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542326784 unmapped: 39321600 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x195a6f000/0x0/0x1bfc00000, data 0x97ccdaf/0x99bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6243579 data_alloc: 285212672 data_used: 68296704
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544415744 unmapped: 37232640 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544276480 unmapped: 37371904 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e34a04400 session 0x559e348eab40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544710656 unmapped: 36937728 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3689e400 session 0x559e348eb4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e31fe6000 session 0x559e34696f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e34a04400 session 0x559e363f5c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3689e400 session 0x559e31c101e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.094532013s of 10.553043365s, submitted: 106
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 36864000 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x192e36000/0x0/0x1bfc00000, data 0xb265daf/0xb458000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2196f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 34529280 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6449782 data_alloc: 285212672 data_used: 69476352
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547569664 unmapped: 34078720 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547594240 unmapped: 34054144 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547594240 unmapped: 34054144 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3bdab800 session 0x559e321db860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e3c70a000 session 0x559e346b3860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36516400 session 0x559e34644000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36875c00 session 0x559e346b23c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547610624 unmapped: 34037760 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547610624 unmapped: 34037760 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 heartbeat osd_stat(store_statfs(0x192d8d000/0x0/0x1bfc00000, data 0xb30ddd2/0xb501000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2196f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6461753 data_alloc: 285212672 data_used: 69599232
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547610624 unmapped: 34037760 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 548044800 unmapped: 33603584 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 554401792 unmapped: 27246592 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 554508288 unmapped: 27140096 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 ms_handle_reset con 0x559e36516400 session 0x559e346ca000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.793738365s of 10.213954926s, submitted: 128
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 366 ms_handle_reset con 0x559e3689e400 session 0x559e34421860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 366 ms_handle_reset con 0x559e3bdab800 session 0x559e32d832c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 366 ms_handle_reset con 0x559e3c70a000 session 0x559e31fca3c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 554631168 unmapped: 27017216 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6536455 data_alloc: 285212672 data_used: 76734464
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 366 ms_handle_reset con 0x559e36516400 session 0x559e321345a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 554631168 unmapped: 27017216 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 366 ms_handle_reset con 0x559e34ebd400 session 0x559e32e66780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 366 ms_handle_reset con 0x559e36875c00 session 0x559e34421680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 366 ms_handle_reset con 0x559e3a858000 session 0x559e321f0780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 366 heartbeat osd_stat(store_statfs(0x192d41000/0x0/0x1bfc00000, data 0xb357a2b/0xb54c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2196f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 554639360 unmapped: 27009024 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 554713088 unmapped: 26935296 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 554827776 unmapped: 26820608 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 554827776 unmapped: 26820608 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6536287 data_alloc: 285212672 data_used: 76816384
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 366 heartbeat osd_stat(store_statfs(0x192d41000/0x0/0x1bfc00000, data 0xb357a2b/0xb54c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2196f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 554827776 unmapped: 26820608 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 554827776 unmapped: 26820608 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 366 heartbeat osd_stat(store_statfs(0x192d3f000/0x0/0x1bfc00000, data 0xb35aa2b/0xb54f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2196f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 554827776 unmapped: 26820608 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 557744128 unmapped: 23904256 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.639342308s of 10.106882095s, submitted: 38
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 366 ms_handle_reset con 0x559e34c1cc00 session 0x559e32d832c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 558546944 unmapped: 23101440 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6605412 data_alloc: 285212672 data_used: 76816384
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 558383104 unmapped: 23265280 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 367 ms_handle_reset con 0x559e34ebd400 session 0x559e34421860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 367 ms_handle_reset con 0x559e3493e400 session 0x559e346965a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 558424064 unmapped: 23224320 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 367 heartbeat osd_stat(store_statfs(0x19234c000/0x0/0x1bfc00000, data 0xbd4e6a5/0xbf42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2196f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #58. Immutable memtables: 14.
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 560799744 unmapped: 20848640 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 20717568 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 561012736 unmapped: 20635648 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6635130 data_alloc: 285212672 data_used: 78225408
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 561012736 unmapped: 20635648 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 367 heartbeat osd_stat(store_statfs(0x191116000/0x0/0x1bfc00000, data 0xbdda6a5/0xbfce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 561045504 unmapped: 20602880 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 367 ms_handle_reset con 0x559e34a04400 session 0x559e31c0fc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 561070080 unmapped: 20578304 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 561127424 unmapped: 20520960 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 heartbeat osd_stat(store_statfs(0x191fcc000/0x0/0x1bfc00000, data 0xaa021e4/0xabf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.964483261s of 10.102113724s, submitted: 136
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 heartbeat osd_stat(store_statfs(0x191fcc000/0x0/0x1bfc00000, data 0xaa021e4/0xabf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 ms_handle_reset con 0x559e3bdab000 session 0x559e348eaf00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 561135616 unmapped: 20512768 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6418890 data_alloc: 268435456 data_used: 67563520
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 ms_handle_reset con 0x559e36517400 session 0x559e31f125a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 561160192 unmapped: 20488192 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 ms_handle_reset con 0x559e3493e400 session 0x559e363d45a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 ms_handle_reset con 0x559e34a04400 session 0x559e321f03c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 ms_handle_reset con 0x559e36516400 session 0x559e321d5680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 561160192 unmapped: 20488192 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 561160192 unmapped: 20488192 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 heartbeat osd_stat(store_statfs(0x1924f8000/0x0/0x1bfc00000, data 0xaa001d1/0xabf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 561201152 unmapped: 20447232 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 561201152 unmapped: 20447232 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 ms_handle_reset con 0x559e3689e000 session 0x559e31f8f680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6417740 data_alloc: 268435456 data_used: 67624960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 heartbeat osd_stat(store_statfs(0x1924f8000/0x0/0x1bfc00000, data 0xaa001d1/0xabf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555507712 unmapped: 26140672 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 ms_handle_reset con 0x559e3bdab000 session 0x559e32d823c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555507712 unmapped: 26140672 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555507712 unmapped: 26140672 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 heartbeat osd_stat(store_statfs(0x193dc1000/0x0/0x1bfc00000, data 0x913915f/0x932c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555507712 unmapped: 26140672 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.504439354s of 10.049857140s, submitted: 61
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555524096 unmapped: 26124288 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6160446 data_alloc: 268435456 data_used: 56479744
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555524096 unmapped: 26124288 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 ms_handle_reset con 0x559e36434800 session 0x559e3732d680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 ms_handle_reset con 0x559e36434c00 session 0x559e32ddbe00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 ms_handle_reset con 0x559e36012000 session 0x559e346ca960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555524096 unmapped: 26124288 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 heartbeat osd_stat(store_statfs(0x193db4000/0x0/0x1bfc00000, data 0x914715f/0x933a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555524096 unmapped: 26124288 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555458560 unmapped: 26189824 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555474944 unmapped: 26173440 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 heartbeat osd_stat(store_statfs(0x193dab000/0x0/0x1bfc00000, data 0x914d15f/0x9340000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6173468 data_alloc: 268435456 data_used: 57487360
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 heartbeat osd_stat(store_statfs(0x193dab000/0x0/0x1bfc00000, data 0x914d15f/0x9340000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555474944 unmapped: 26173440 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555474944 unmapped: 26173440 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555474944 unmapped: 26173440 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555474944 unmapped: 26173440 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555474944 unmapped: 26173440 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6177468 data_alloc: 268435456 data_used: 57847808
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 ms_handle_reset con 0x559e3493e400 session 0x559e321c8780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.750239372s of 11.802731514s, submitted: 16
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 369 heartbeat osd_stat(store_statfs(0x193dab000/0x0/0x1bfc00000, data 0x914d15f/0x9340000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 369 ms_handle_reset con 0x559e34a04400 session 0x559e321f2b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 369 ms_handle_reset con 0x559e3493e400 session 0x559e363d5a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 369 ms_handle_reset con 0x559e36012000 session 0x559e32a00f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 369 heartbeat osd_stat(store_statfs(0x193daa000/0x0/0x1bfc00000, data 0x914edb8/0x9343000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6180482 data_alloc: 268435456 data_used: 57860096
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 369 heartbeat osd_stat(store_statfs(0x193daa000/0x0/0x1bfc00000, data 0x914edb8/0x9343000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 369 heartbeat osd_stat(store_statfs(0x193dab000/0x0/0x1bfc00000, data 0x914edb8/0x9343000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 369 heartbeat osd_stat(store_statfs(0x193dab000/0x0/0x1bfc00000, data 0x914edb8/0x9343000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6181602 data_alloc: 268435456 data_used: 57929728
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 369 heartbeat osd_stat(store_statfs(0x193dab000/0x0/0x1bfc00000, data 0x914edb8/0x9343000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 369 ms_handle_reset con 0x559e3689e000 session 0x559e380b5860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.067761421s of 13.115862846s, submitted: 11
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 369 heartbeat osd_stat(store_statfs(0x193dab000/0x0/0x1bfc00000, data 0x914edb8/0x9343000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,1,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6182178 data_alloc: 268435456 data_used: 57929728
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555483136 unmapped: 26165248 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 370 ms_handle_reset con 0x559e36875c00 session 0x559e355a43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555491328 unmapped: 26157056 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 370 heartbeat osd_stat(store_statfs(0x193da7000/0x0/0x1bfc00000, data 0x9150a65/0x9346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555491328 unmapped: 26157056 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555491328 unmapped: 26157056 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 370 ms_handle_reset con 0x559e34ebd400 session 0x559e346cba40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 370 ms_handle_reset con 0x559e36517400 session 0x559e34697680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555491328 unmapped: 26157056 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6202128 data_alloc: 268435456 data_used: 59547648
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555491328 unmapped: 26157056 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 370 heartbeat osd_stat(store_statfs(0x193da8000/0x0/0x1bfc00000, data 0x9150a65/0x9346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555507712 unmapped: 26140672 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 370 ms_handle_reset con 0x559e36012000 session 0x559e34696d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555515904 unmapped: 26132480 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555573248 unmapped: 26075136 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x1938d1000/0x0/0x1bfc00000, data 0x96255f6/0x981c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.991728783s of 10.027108192s, submitted: 100
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3493e400 session 0x559e31c10d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e36875c00 session 0x559e321f21e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555802624 unmapped: 25845760 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6253892 data_alloc: 268435456 data_used: 59555840
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555802624 unmapped: 25845760 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555802624 unmapped: 25845760 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555819008 unmapped: 25829376 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3bdab800 session 0x559e34421680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x1938d1000/0x0/0x1bfc00000, data 0x96255f6/0x981c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3493e400 session 0x559e32ddab40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 555819008 unmapped: 25829376 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e36012000 session 0x559e34644960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x1938d1000/0x0/0x1bfc00000, data 0x96255f6/0x981c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e36517400 session 0x559e32d4dc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543047680 unmapped: 38600704 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e36434c00 session 0x559e346cb680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5969142 data_alloc: 251658240 data_used: 44797952
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x19504c000/0x0/0x1bfc00000, data 0x7ea9594/0x809f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e36875c00 session 0x559e32e45a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x19504c000/0x0/0x1bfc00000, data 0x7ea9594/0x809f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543047680 unmapped: 38600704 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543047680 unmapped: 38600704 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3493e400 session 0x559e32ddb4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543047680 unmapped: 38600704 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e36012000 session 0x559e355a50e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543137792 unmapped: 38510592 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.412221909s of 10.707092285s, submitted: 36
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543137792 unmapped: 38510592 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5974821 data_alloc: 251658240 data_used: 44797952
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x19502a000/0x0/0x1bfc00000, data 0x7ecd5a4/0x80c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3689e000 session 0x559e32ddb4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 38502400 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3a858000 session 0x559e32e45a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 38502400 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 38502400 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 38502400 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x19502a000/0x0/0x1bfc00000, data 0x7ecd5a4/0x80c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 38502400 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e36875400 session 0x559e32d4dc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6010405 data_alloc: 251658240 data_used: 49643520
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3493e400 session 0x559e34644960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 38502400 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 38502400 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 38502400 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 38502400 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e36012000 session 0x559e32ddab40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x19502a000/0x0/0x1bfc00000, data 0x7ecd5a4/0x80c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22b0f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 38502400 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6010405 data_alloc: 251658240 data_used: 49643520
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.543809891s of 10.713222504s, submitted: 6
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543162368 unmapped: 38486016 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545669120 unmapped: 35979264 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 546209792 unmapped: 35438592 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547037184 unmapped: 34611200 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x193f34000/0x0/0x1bfc00000, data 0x8bb35a4/0x8daa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3689e000 session 0x559e34421680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547241984 unmapped: 34406400 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6111023 data_alloc: 251658240 data_used: 49762304
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3a858000 session 0x559e31f8f680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3daf7800 session 0x559e363f4b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3493e400 session 0x559e31c110e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547569664 unmapped: 34078720 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x1939ea000/0x0/0x1bfc00000, data 0x90f75a4/0x92ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 33890304 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547823616 unmapped: 33824768 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x19395c000/0x0/0x1bfc00000, data 0x91855a4/0x937c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547823616 unmapped: 33824768 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547823616 unmapped: 33824768 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6166981 data_alloc: 268435456 data_used: 50216960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x19395c000/0x0/0x1bfc00000, data 0x91855a4/0x937c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547823616 unmapped: 33824768 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547823616 unmapped: 33824768 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.606687546s of 12.285678864s, submitted: 223
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 33890304 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 33890304 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x19393e000/0x0/0x1bfc00000, data 0x91a95a4/0x93a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 33890304 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6166057 data_alloc: 268435456 data_used: 50221056
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 33890304 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 33890304 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e36012000 session 0x559e363f5a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 33890304 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3689e000 session 0x559e321db2c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 33882112 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3a858000 session 0x559e31faf2c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 ms_handle_reset con 0x559e3daf7800 session 0x559e3487b860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547733504 unmapped: 33914880 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6168996 data_alloc: 268435456 data_used: 50221056
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x19393d000/0x0/0x1bfc00000, data 0x91a95b4/0x93a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547733504 unmapped: 33914880 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 heartbeat osd_stat(store_statfs(0x19393d000/0x0/0x1bfc00000, data 0x91a95b4/0x93a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 33906688 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 33906688 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.756382942s of 11.207899094s, submitted: 12
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 548790272 unmapped: 32858112 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 372 ms_handle_reset con 0x559e3689e000 session 0x559e355a54a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 548798464 unmapped: 32849920 heap: 581648384 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 372 ms_handle_reset con 0x559e3daf7800 session 0x559e35ba8780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 372 ms_handle_reset con 0x559e3493f800 session 0x559e37f710e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 372 ms_handle_reset con 0x559e31fe4400 session 0x559e321c8960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 372 ms_handle_reset con 0x559e34947c00 session 0x559e34696960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6204922 data_alloc: 268435456 data_used: 54685696
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e31fe4400 session 0x559e347fe3c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e3493f800 session 0x559e348ea000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e3689e000 session 0x559e345623c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e3a858000 session 0x559e3487ba40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 548904960 unmapped: 36421632 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e3daf7800 session 0x559e363f5c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e31fe4400 session 0x559e348ea780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 heartbeat osd_stat(store_statfs(0x19319d000/0x0/0x1bfc00000, data 0x9944e76/0x9b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 548904960 unmapped: 36421632 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 heartbeat osd_stat(store_statfs(0x19319d000/0x0/0x1bfc00000, data 0x9944e76/0x9b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e3493e400 session 0x559e380b4000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e36012000 session 0x559e346ca3c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 548904960 unmapped: 36421632 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 548904960 unmapped: 36421632 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 heartbeat osd_stat(store_statfs(0x19319f000/0x0/0x1bfc00000, data 0x9944e66/0x9b3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e3493f800 session 0x559e32ee54a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 heartbeat osd_stat(store_statfs(0x19319f000/0x0/0x1bfc00000, data 0x9944e66/0x9b3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 36413440 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6265619 data_alloc: 268435456 data_used: 54693888
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e3689e000 session 0x559e31c101e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 36413440 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e31fe4400 session 0x559e34421860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 36413440 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e3493e400 session 0x559e363f52c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e36012000 session 0x559e31c10000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e3493f800 session 0x559e346b2d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 548913152 unmapped: 36413440 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 548921344 unmapped: 36405248 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.482668877s of 10.328689575s, submitted: 48
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 373 ms_handle_reset con 0x559e3a858000 session 0x559e34645a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 374 ms_handle_reset con 0x559e31fe4400 session 0x559e31f123c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 374 ms_handle_reset con 0x559e3493e400 session 0x559e321c8780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549093376 unmapped: 36233216 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6209083 data_alloc: 268435456 data_used: 50253824
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 374 heartbeat osd_stat(store_statfs(0x1935f1000/0x0/0x1bfc00000, data 0x94eeb46/0x96ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549093376 unmapped: 36233216 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 374 handle_osd_map epochs [374,375], i have 374, src has [1,375]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 375 ms_handle_reset con 0x559e3493f800 session 0x559e34421860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549109760 unmapped: 36216832 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 375 ms_handle_reset con 0x559e36012000 session 0x559e32ee54a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549134336 unmapped: 36192256 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549134336 unmapped: 36192256 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 375 heartbeat osd_stat(store_statfs(0x1935ef000/0x0/0x1bfc00000, data 0x94f080f/0x96ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549134336 unmapped: 36192256 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6211837 data_alloc: 268435456 data_used: 50253824
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549142528 unmapped: 36184064 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551215104 unmapped: 34111488 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551215104 unmapped: 34111488 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551223296 unmapped: 34103296 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.881896019s of 10.017099380s, submitted: 160
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e36874c00 session 0x559e355a54a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x1935ef000/0x0/0x1bfc00000, data 0x94f080f/0x96ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551256064 unmapped: 34070528 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6270843 data_alloc: 268435456 data_used: 57999360
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x1935eb000/0x0/0x1bfc00000, data 0x94f2386/0x96f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,1,0,3])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551264256 unmapped: 34062336 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x1935eb000/0x0/0x1bfc00000, data 0x94f2386/0x96f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551288832 unmapped: 34037760 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e31fe4400 session 0x559e363f4b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e3493e400 session 0x559e31f8f680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e3493f800 session 0x559e34421680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e36012000 session 0x559e32ddab40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551288832 unmapped: 34037760 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x1935ec000/0x0/0x1bfc00000, data 0x94f2386/0x96f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551288832 unmapped: 34037760 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551313408 unmapped: 34013184 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6270455 data_alloc: 268435456 data_used: 57999360
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551313408 unmapped: 34013184 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x1935e5000/0x0/0x1bfc00000, data 0x94f9386/0x96f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e379e4400 session 0x559e348ea780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551313408 unmapped: 34013184 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551313408 unmapped: 34013184 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551321600 unmapped: 34004992 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.679452896s of 10.789185524s, submitted: 63
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e36434800 session 0x559e32134000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e36516400 session 0x559e31fca960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551321600 unmapped: 34004992 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6302541 data_alloc: 268435456 data_used: 57999360
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 552476672 unmapped: 32849920 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x192a39000/0x0/0x1bfc00000, data 0xa0a5386/0xa2a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e3e6d7400 session 0x559e34644960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 552591360 unmapped: 32735232 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x192a39000/0x0/0x1bfc00000, data 0xa0a5386/0xa2a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549978112 unmapped: 35348480 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e3a5dd000 session 0x559e363d4f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x19399e000/0x0/0x1bfc00000, data 0x9141376/0x9340000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e31fe5800 session 0x559e32d834a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549986304 unmapped: 35340288 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e31fe4400 session 0x559e32e150e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e36434800 session 0x559e348ebc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551264256 unmapped: 34062336 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6189151 data_alloc: 268435456 data_used: 51843072
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e36516400 session 0x559e321fa1e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551264256 unmapped: 34062336 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551264256 unmapped: 34062336 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551264256 unmapped: 34062336 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e3689e400 session 0x559e3487bc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e3dcf7000 session 0x559e31f13a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 551280640 unmapped: 34045952 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x193921000/0x0/0x1bfc00000, data 0x91be353/0x93bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.328345299s of 10.073065758s, submitted: 127
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549380096 unmapped: 35946496 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5968495 data_alloc: 251658240 data_used: 42414080
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549380096 unmapped: 35946496 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e35293000 session 0x559e321fb0e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e379e4800 session 0x559e3732cd20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549388288 unmapped: 35938304 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x194def000/0x0/0x1bfc00000, data 0x7cf22f1/0x7eef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 35872768 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e3dcf7000 session 0x559e346cba40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 35864576 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538443776 unmapped: 46882816 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5776689 data_alloc: 234881024 data_used: 31526912
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538402816 unmapped: 46923776 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538419200 unmapped: 46907392 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x195e3f000/0x0/0x1bfc00000, data 0x6ca52af/0x6e9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538427392 unmapped: 46899200 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e31fe4400 session 0x559e363d41e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e36434800 session 0x559e321f34a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539820032 unmapped: 45506560 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540123136 unmapped: 45203456 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5825858 data_alloc: 234881024 data_used: 31522816
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.844871998s of 10.712604523s, submitted: 107
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x195729000/0x0/0x1bfc00000, data 0x73bc27c/0x75b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540098560 unmapped: 45228032 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x195729000/0x0/0x1bfc00000, data 0x73bc27c/0x75b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540123136 unmapped: 45203456 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e31fe4400 session 0x559e380b52c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x19551f000/0x0/0x1bfc00000, data 0x75c727c/0x77bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540155904 unmapped: 45170688 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e321a7000 session 0x559e363f5680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e36010c00 session 0x559e32e15a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e36517400 session 0x559e346972c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e36434c00 session 0x559e3732de00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540164096 unmapped: 45162496 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540172288 unmapped: 45154304 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5844932 data_alloc: 234881024 data_used: 31600640
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x195516000/0x0/0x1bfc00000, data 0x75cf27c/0x77c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540172288 unmapped: 45154304 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532709376 unmapped: 52617216 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527327232 unmapped: 57999360 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e31fe4400 session 0x559e380b5860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e321a7000 session 0x559e35ba8960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x1976dd000/0x0/0x1bfc00000, data 0x4b3b1a8/0x4d30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527327232 unmapped: 57999360 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x197701000/0x0/0x1bfc00000, data 0x4b3b1a8/0x4d30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527327232 unmapped: 57999360 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428751 data_alloc: 234881024 data_used: 21417984
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527327232 unmapped: 57999360 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527327232 unmapped: 57999360 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527327232 unmapped: 57999360 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.378453255s of 12.993147850s, submitted: 164
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x197701000/0x0/0x1bfc00000, data 0x4b3b1a8/0x4d30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e3296a800 session 0x559e3732d2c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527327232 unmapped: 57999360 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e32973400 session 0x559e346cb0e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527343616 unmapped: 57982976 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5153599 data_alloc: 218103808 data_used: 13783040
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522133504 unmapped: 63193088 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522133504 unmapped: 63193088 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e36010c00 session 0x559e32d834a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x1999ef000/0x0/0x1bfc00000, data 0x30f7126/0x32e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522117120 unmapped: 63209472 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522117120 unmapped: 63209472 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522117120 unmapped: 63209472 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5144200 data_alloc: 218103808 data_used: 13643776
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x1999ef000/0x0/0x1bfc00000, data 0x30f7126/0x32e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522117120 unmapped: 63209472 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522117120 unmapped: 63209472 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522117120 unmapped: 63209472 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e33dd3c00 session 0x559e346443c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e34a1ac00 session 0x559e34421c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x1999ef000/0x0/0x1bfc00000, data 0x30f7126/0x32e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522117120 unmapped: 63209472 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.448621750s of 10.793024063s, submitted: 28
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e36010c00 session 0x559e363f4b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522117120 unmapped: 63209472 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5137708 data_alloc: 218103808 data_used: 13533184
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522117120 unmapped: 63209472 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x199a19000/0x0/0x1bfc00000, data 0x30d3126/0x32c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522117120 unmapped: 63209472 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e321a7000 session 0x559e380b5e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519831552 unmapped: 65495040 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 heartbeat osd_stat(store_statfs(0x199a19000/0x0/0x1bfc00000, data 0x30d3116/0x32c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [0,0,0,0,0,2,5,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e3296a800 session 0x559e31f13e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519847936 unmapped: 65478656 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 ms_handle_reset con 0x559e31fe4400 session 0x559e355a54a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519847936 unmapped: 65478656 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954668 data_alloc: 218103808 data_used: 9248768
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 376 handle_osd_map epochs [376,377], i have 376, src has [1,377]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 377 ms_handle_reset con 0x559e321a7000 session 0x559e321c8780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519847936 unmapped: 65478656 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519847936 unmapped: 65478656 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 377 heartbeat osd_stat(store_statfs(0x19acc8000/0x0/0x1bfc00000, data 0x1e21d6f/0x2014000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519847936 unmapped: 65478656 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 377 heartbeat osd_stat(store_statfs(0x19acc8000/0x0/0x1bfc00000, data 0x1e21d6f/0x2014000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519847936 unmapped: 65478656 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.996538162s of 10.175495148s, submitted: 70
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 378 ms_handle_reset con 0x559e34a1ac00 session 0x559e346b3680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 378 ms_handle_reset con 0x559e33dd3c00 session 0x559e31f123c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 519856128 unmapped: 65470464 heap: 585326592 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4989572 data_alloc: 218103808 data_used: 12681216
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 378 ms_handle_reset con 0x559e36010c00 session 0x559e34645a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532471808 unmapped: 65478656 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 379 ms_handle_reset con 0x559e36010c00 session 0x559e363d5c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532480000 unmapped: 65470464 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532496384 unmapped: 65454080 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e31fe4400 session 0x559e321c9a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532504576 unmapped: 65445888 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e321a7000 session 0x559e321d94a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e33dd3c00 session 0x559e321f2960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e34a1ac00 session 0x559e321c8960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 heartbeat osd_stat(store_statfs(0x199e99000/0x0/0x1bfc00000, data 0x2c4c34c/0x2e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532504576 unmapped: 65445888 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5130544 data_alloc: 234881024 data_used: 18800640
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532504576 unmapped: 65445888 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e31fe4400 session 0x559e321f21e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e34a1ac00 session 0x559e346b3860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e321a7000 session 0x559e32d823c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532504576 unmapped: 65445888 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 heartbeat osd_stat(store_statfs(0x199e98000/0x0/0x1bfc00000, data 0x2c4c35b/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532504576 unmapped: 65445888 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 heartbeat osd_stat(store_statfs(0x199e98000/0x0/0x1bfc00000, data 0x2c4c35b/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532504576 unmapped: 65445888 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e33dd3c00 session 0x559e32e45680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e36010c00 session 0x559e346b3c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e36010c00 session 0x559e355a4f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532504576 unmapped: 65445888 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e31fe4400 session 0x559e31f8fe00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.720003128s of 10.998088837s, submitted: 33
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5114500 data_alloc: 234881024 data_used: 18800640
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e33dd3c00 session 0x559e3732d4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e321a7000 session 0x559e347ff4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e34a1ac00 session 0x559e380b43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e34a1ac00 session 0x559e321d52c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e31fe4400 session 0x559e32a00f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 526229504 unmapped: 71720960 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 526229504 unmapped: 71720960 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e321a7000 session 0x559e329d50e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 ms_handle_reset con 0x559e33dd3c00 session 0x559e32e672c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 heartbeat osd_stat(store_statfs(0x199e98000/0x0/0x1bfc00000, data 0x2c4c3cd/0x2e46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 526237696 unmapped: 71712768 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 526237696 unmapped: 71712768 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e36010c00 session 0x559e32dda1e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 526237696 unmapped: 71712768 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5125720 data_alloc: 234881024 data_used: 18812928
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e31fe4400 session 0x559e3487a000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 523657216 unmapped: 74293248 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e34a1ac00 session 0x559e3732c000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 523624448 unmapped: 74326016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 heartbeat osd_stat(store_statfs(0x199e93000/0x0/0x1bfc00000, data 0x2c4df91/0x2e4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e32973400 session 0x559e32a01860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e35293000 session 0x559e346ca960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 523542528 unmapped: 74407936 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e379e4800 session 0x559e355a5c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e31fe4400 session 0x559e380b5a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e32973400 session 0x559e346970e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e36517400 session 0x559e32e44d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 heartbeat osd_stat(store_statfs(0x199e93000/0x0/0x1bfc00000, data 0x2c4df58/0x2e4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529014784 unmapped: 68935680 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e35293000 session 0x559e321c9a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e34a1ac00 session 0x559e31fafc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e31fe4400 session 0x559e321fab40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e32973400 session 0x559e321f2b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e34a1ac00 session 0x559e31f8e5a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 ms_handle_reset con 0x559e35293000 session 0x559e348ea5a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525090816 unmapped: 72859648 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5236028 data_alloc: 234881024 data_used: 22736896
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525090816 unmapped: 72859648 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525090816 unmapped: 72859648 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.497164726s of 12.590846062s, submitted: 110
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525107200 unmapped: 72843264 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 382 ms_handle_reset con 0x559e379e4800 session 0x559e355a5e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525107200 unmapped: 72843264 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 383 heartbeat osd_stat(store_statfs(0x19957e000/0x0/0x1bfc00000, data 0x355fbfa/0x375f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 383 ms_handle_reset con 0x559e31fe4400 session 0x559e321f3e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525131776 unmapped: 72818688 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5270539 data_alloc: 234881024 data_used: 22745088
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 384 ms_handle_reset con 0x559e34a1ac00 session 0x559e34421e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525295616 unmapped: 72654848 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 385 ms_handle_reset con 0x559e35293000 session 0x559e321f1e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 385 ms_handle_reset con 0x559e379e4800 session 0x559e363d52c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 385 ms_handle_reset con 0x559e36517400 session 0x559e321f0780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 385 heartbeat osd_stat(store_statfs(0x198e8a000/0x0/0x1bfc00000, data 0x3c5157e/0x3e54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525377536 unmapped: 72572928 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525402112 unmapped: 72548352 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 385 ms_handle_reset con 0x559e36517400 session 0x559e321d5680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525426688 unmapped: 72523776 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 385 ms_handle_reset con 0x559e34a1ac00 session 0x559e32d83c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529694720 unmapped: 68255744 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5431952 data_alloc: 234881024 data_used: 29618176
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 385 ms_handle_reset con 0x559e35293000 session 0x559e345623c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 386 ms_handle_reset con 0x559e379e4800 session 0x559e328eb0e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 386 ms_handle_reset con 0x559e3dcf7000 session 0x559e347ff4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529997824 unmapped: 67952640 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 386 ms_handle_reset con 0x559e31fe4400 session 0x559e32a005a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530071552 unmapped: 67878912 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.292991638s of 10.000778198s, submitted: 98
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 heartbeat osd_stat(store_statfs(0x198438000/0x0/0x1bfc00000, data 0x469e510/0x48a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 526778368 unmapped: 71172096 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 526778368 unmapped: 71172096 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 526778368 unmapped: 71172096 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 heartbeat osd_stat(store_statfs(0x198435000/0x0/0x1bfc00000, data 0x46a004f/0x48a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5461012 data_alloc: 234881024 data_used: 30130176
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527237120 unmapped: 70713344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530309120 unmapped: 67641344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530309120 unmapped: 67641344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529752064 unmapped: 68198400 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 heartbeat osd_stat(store_statfs(0x197852000/0x0/0x1bfc00000, data 0x528404f/0x548c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 heartbeat osd_stat(store_statfs(0x1977bb000/0x0/0x1bfc00000, data 0x531b04f/0x5523000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529825792 unmapped: 68124672 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5561792 data_alloc: 234881024 data_used: 30900224
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 heartbeat osd_stat(store_statfs(0x1977b6000/0x0/0x1bfc00000, data 0x532004f/0x5528000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529825792 unmapped: 68124672 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529825792 unmapped: 68124672 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.722735405s of 10.039246559s, submitted: 81
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530046976 unmapped: 67903488 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530046976 unmapped: 67903488 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 heartbeat osd_stat(store_statfs(0x197735000/0x0/0x1bfc00000, data 0x53a104f/0x55a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531554304 unmapped: 66396160 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5578762 data_alloc: 251658240 data_used: 32792576
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531554304 unmapped: 66396160 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531554304 unmapped: 66396160 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531554304 unmapped: 66396160 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531554304 unmapped: 66396160 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 66256896 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 heartbeat osd_stat(store_statfs(0x197733000/0x0/0x1bfc00000, data 0x53a204f/0x55aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5577650 data_alloc: 251658240 data_used: 32796672
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e34a1ac00 session 0x559e321f21e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 heartbeat osd_stat(store_statfs(0x197733000/0x0/0x1bfc00000, data 0x53a204f/0x55aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531816448 unmapped: 66134016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531816448 unmapped: 66134016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532193280 unmapped: 65757184 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532193280 unmapped: 65757184 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532193280 unmapped: 65757184 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5634904 data_alloc: 251658240 data_used: 39817216
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532193280 unmapped: 65757184 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 heartbeat osd_stat(store_statfs(0x1976ed000/0x0/0x1bfc00000, data 0x53e8072/0x55f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532193280 unmapped: 65757184 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532455424 unmapped: 65495040 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.377390862s of 15.623140335s, submitted: 29
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533504000 unmapped: 64446464 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e379e4800 session 0x559e321c8d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533504000 unmapped: 64446464 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5657802 data_alloc: 251658240 data_used: 40771584
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e36516400 session 0x559e3732cd20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 heartbeat osd_stat(store_statfs(0x1974e7000/0x0/0x1bfc00000, data 0x55ed272/0x57f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533504000 unmapped: 64446464 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e321a7000 session 0x559e363d54a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e33dd3c00 session 0x559e355a4b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533504000 unmapped: 64446464 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533504000 unmapped: 64446464 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533504000 unmapped: 64446464 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533495808 unmapped: 64454656 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5711354 data_alloc: 251658240 data_used: 40787968
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 heartbeat osd_stat(store_statfs(0x1971b3000/0x0/0x1bfc00000, data 0x5c4f262/0x5b2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534093824 unmapped: 63856640 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 63750144 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535838720 unmapped: 62111744 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535887872 unmapped: 62062592 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.514727592s of 10.920069695s, submitted: 81
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535896064 unmapped: 62054400 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5711572 data_alloc: 251658240 data_used: 41148416
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535904256 unmapped: 62046208 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e321a7000 session 0x559e346452c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 heartbeat osd_stat(store_statfs(0x19718c000/0x0/0x1bfc00000, data 0x5c761dd/0x5b51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535904256 unmapped: 62046208 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 heartbeat osd_stat(store_statfs(0x19710a000/0x0/0x1bfc00000, data 0x5cf81dd/0x5bd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536379392 unmapped: 61571072 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e34a1ac00 session 0x559e329d41e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e36516400 session 0x559e321db860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e379e4800 session 0x559e348ea5a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e379e4800 session 0x559e346b23c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536412160 unmapped: 61538304 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e321a7000 session 0x559e3732cd20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e33dd3c00 session 0x559e31c0ed20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e34a1ac00 session 0x559e32e150e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e36516400 session 0x559e321c9680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e36516400 session 0x559e3487ad20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e321a7000 session 0x559e321f10e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536420352 unmapped: 61530112 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5793007 data_alloc: 251658240 data_used: 41218048
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 ms_handle_reset con 0x559e33dd3c00 session 0x559e321f30e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536420352 unmapped: 61530112 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536436736 unmapped: 61513728 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 388 ms_handle_reset con 0x559e379e4800 session 0x559e321d43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536436736 unmapped: 61513728 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 388 ms_handle_reset con 0x559e34a1ac00 session 0x559e380b45a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 388 ms_handle_reset con 0x559e34a1ac00 session 0x559e380b5e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 388 heartbeat osd_stat(store_statfs(0x19689b000/0x0/0x1bfc00000, data 0x6567c29/0x6442000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 388 ms_handle_reset con 0x559e321a7000 session 0x559e363d5c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536444928 unmapped: 61505536 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 388 ms_handle_reset con 0x559e35293000 session 0x559e355a4000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 388 ms_handle_reset con 0x559e36516400 session 0x559e32d83c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.588279724s of 10.034754753s, submitted: 92
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 388 ms_handle_reset con 0x559e36517400 session 0x559e355a50e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 388 heartbeat osd_stat(store_statfs(0x19769e000/0x0/0x1bfc00000, data 0x5764c29/0x563f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 388 ms_handle_reset con 0x559e379e4800 session 0x559e321f3860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 388 ms_handle_reset con 0x559e321a7000 session 0x559e321c8d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537493504 unmapped: 60456960 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5621490 data_alloc: 234881024 data_used: 31567872
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 389 ms_handle_reset con 0x559e33dd3c00 session 0x559e321fa1e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537493504 unmapped: 60456960 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537501696 unmapped: 60448768 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 389 heartbeat osd_stat(store_statfs(0x1976be000/0x0/0x1bfc00000, data 0x57428f2/0x561e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538828800 unmapped: 59121664 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 390 heartbeat osd_stat(store_statfs(0x1976be000/0x0/0x1bfc00000, data 0x57428f2/0x561e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x22f1f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538828800 unmapped: 59121664 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 390 ms_handle_reset con 0x559e3e6d7400 session 0x559e346caf00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 390 ms_handle_reset con 0x559e3a5dd000 session 0x559e355a4960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538836992 unmapped: 59113472 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5706174 data_alloc: 251658240 data_used: 42401792
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538836992 unmapped: 59113472 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535126016 unmapped: 62824448 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 392 ms_handle_reset con 0x559e33dd3c00 session 0x559e363f4960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 393 ms_handle_reset con 0x559e3e6d7400 session 0x559e363d52c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 393 ms_handle_reset con 0x559e379e4800 session 0x559e363d4b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 393 ms_handle_reset con 0x559e36517400 session 0x559e346b30e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535134208 unmapped: 62816256 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 393 ms_handle_reset con 0x559e36516400 session 0x559e34696960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 393 ms_handle_reset con 0x559e35293000 session 0x559e321c9860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 393 ms_handle_reset con 0x559e321a7000 session 0x559e32d823c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534683648 unmapped: 63266816 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 393 heartbeat osd_stat(store_statfs(0x199558000/0x0/0x1bfc00000, data 0x2c811da/0x2e89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 63258624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5303289 data_alloc: 234881024 data_used: 23293952
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 63258624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 63258624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 393 ms_handle_reset con 0x559e33dd3c00 session 0x559e346b34a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.688837051s of 13.421642303s, submitted: 158
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 394 ms_handle_reset con 0x559e36517400 session 0x559e31c0fa40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535912448 unmapped: 62038016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 394 heartbeat osd_stat(store_statfs(0x199a41000/0x0/0x1bfc00000, data 0x2c82d35/0x2e8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 394 heartbeat osd_stat(store_statfs(0x199a41000/0x0/0x1bfc00000, data 0x2c82d35/0x2e8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538697728 unmapped: 59252736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539295744 unmapped: 58654720 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5393331 data_alloc: 234881024 data_used: 23105536
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539295744 unmapped: 58654720 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539295744 unmapped: 58654720 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x198d96000/0x0/0x1bfc00000, data 0x3926d35/0x3b30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539295744 unmapped: 58654720 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539295744 unmapped: 58654720 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539295744 unmapped: 58654720 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5392273 data_alloc: 234881024 data_used: 23121920
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e34a1ac00 session 0x559e380b5a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539295744 unmapped: 58654720 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539295744 unmapped: 58654720 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.741394997s of 10.047256470s, submitted: 149
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525803520 unmapped: 72146944 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a4a7000/0x0/0x1bfc00000, data 0x221d864/0x2427000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525803520 unmapped: 72146944 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525803520 unmapped: 72146944 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5132381 data_alloc: 218103808 data_used: 10317824
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e321c9c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a4a7000/0x0/0x1bfc00000, data 0x221d841/0x2426000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525803520 unmapped: 72146944 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525803520 unmapped: 72146944 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a4a7000/0x0/0x1bfc00000, data 0x221d841/0x2426000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525803520 unmapped: 72146944 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525803520 unmapped: 72146944 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e31f123c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525803520 unmapped: 72146944 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5132249 data_alloc: 218103808 data_used: 10317824
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e32973400 session 0x559e346443c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525811712 unmapped: 72138752 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525819904 unmapped: 72130560 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.989560127s of 10.074114799s, submitted: 26
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a4a9000/0x0/0x1bfc00000, data 0x221d831/0x2425000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525819904 unmapped: 72130560 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4963969 data_alloc: 218103808 data_used: 4780032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e33dd3c00 session 0x559e321f0b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4963969 data_alloc: 218103808 data_used: 4780032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4963969 data_alloc: 218103808 data_used: 4780032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.883484840s of 14.927380562s, submitted: 17
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e34a1ac00 session 0x559e321c9860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5030487 data_alloc: 218103808 data_used: 4780032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525844480 unmapped: 72105984 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525844480 unmapped: 72105984 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e363d4b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e363d52c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525844480 unmapped: 72105984 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e32973400 session 0x559e363f4960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525713408 unmapped: 72237056 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e33dd3c00 session 0x559e355a4960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525713408 unmapped: 72237056 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5031236 data_alloc: 218103808 data_used: 4780032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5092036 data_alloc: 218103808 data_used: 13369344
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5092036 data_alloc: 218103808 data_used: 13369344
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.852407455s of 18.952495575s, submitted: 18
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [0,0,0,0,1,16])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529211392 unmapped: 68739072 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529440768 unmapped: 68509696 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529440768 unmapped: 68509696 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529440768 unmapped: 68509696 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5204464 data_alloc: 234881024 data_used: 14237696
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529440768 unmapped: 68509696 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a195000/0x0/0x1bfc00000, data 0x2531831/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529440768 unmapped: 68509696 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529440768 unmapped: 68509696 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e379e4800 session 0x559e31c0fc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5198752 data_alloc: 234881024 data_used: 14245888
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a175000/0x0/0x1bfc00000, data 0x2551831/0x2759000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.727799416s of 13.010139465s, submitted: 88
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a175000/0x0/0x1bfc00000, data 0x2551831/0x2759000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e321daf00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5199008 data_alloc: 234881024 data_used: 14245888
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e36517400 session 0x559e355a4000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e35293000 session 0x559e355a50e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524328960 unmapped: 73621504 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e345630e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4975703 data_alloc: 218103808 data_used: 4780032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4975703 data_alloc: 218103808 data_used: 4780032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4975703 data_alloc: 218103808 data_used: 4780032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e32973400 session 0x559e363f5a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e32973400 session 0x559e3732c1e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e321d5680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e321c9a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.897541046s of 17.429567337s, submitted: 30
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e35293000 session 0x559e31c11e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e36517400 session 0x559e3487a000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e36517400 session 0x559e34644000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e32d4d2c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e363d45a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19aff9000/0x0/0x1bfc00000, data 0x16cc841/0x18d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5020167 data_alloc: 218103808 data_used: 4780032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e32973400 session 0x559e34420b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e35293000 session 0x559e34696000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e35293000 session 0x559e3487a5a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e321fa000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19aff9000/0x0/0x1bfc00000, data 0x16cc841/0x18d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 72K writes, 294K keys, 72K commit groups, 1.0 writes per commit group, ingest: 0.30 GB, 0.06 MB/s#012Cumulative WAL: 72K writes, 26K syncs, 2.73 writes per sync, written: 0.30 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7949 writes, 30K keys, 7949 commit groups, 1.0 writes per commit group, ingest: 32.65 MB, 0.05 MB/s#012Interval WAL: 7949 writes, 3033 syncs, 2.62 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5035458 data_alloc: 218103808 data_used: 6471680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19aff8000/0x0/0x1bfc00000, data 0x16cc851/0x18d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19aff8000/0x0/0x1bfc00000, data 0x16cc851/0x18d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5067938 data_alloc: 218103808 data_used: 11075584
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: mgrc ms_handle_reset ms_handle_reset con 0x559e3493cc00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: mgrc handle_mgr_configure stats_period=5
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19aff8000/0x0/0x1bfc00000, data 0x16cc851/0x18d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19aff8000/0x0/0x1bfc00000, data 0x16cc851/0x18d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5067938 data_alloc: 218103808 data_used: 11075584
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.283088684s of 19.358438492s, submitted: 14
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 526761984 unmapped: 71188480 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a6b1000/0x0/0x1bfc00000, data 0x2013851/0x221d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [0,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527450112 unmapped: 70500352 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527450112 unmapped: 70500352 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a689000/0x0/0x1bfc00000, data 0x203b851/0x2245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527450112 unmapped: 70500352 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527450112 unmapped: 70500352 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5145622 data_alloc: 218103808 data_used: 11075584
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527450112 unmapped: 70500352 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527450112 unmapped: 70500352 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a689000/0x0/0x1bfc00000, data 0x203b851/0x2245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5143902 data_alloc: 218103808 data_used: 11075584
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e36517400 session 0x559e31fcb680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a686000/0x0/0x1bfc00000, data 0x203e851/0x2248000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a686000/0x0/0x1bfc00000, data 0x203e851/0x2248000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527417344 unmapped: 70533120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5143902 data_alloc: 218103808 data_used: 11075584
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e33dd3c00 session 0x559e380b4d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527417344 unmapped: 70533120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e363d5a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.799705505s of 15.938135147s, submitted: 58
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e32973400 session 0x559e3487ab40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527417344 unmapped: 70533120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522027008 unmapped: 75923456 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e355a5680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522027008 unmapped: 75923456 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522027008 unmapped: 75923456 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4986411 data_alloc: 218103808 data_used: 4780032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522027008 unmapped: 75923456 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522027008 unmapped: 75923456 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522027008 unmapped: 75923456 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e345630e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e33dd3c00 session 0x559e355a50e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522043392 unmapped: 75907072 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [0,0,0,0,0,1,1,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524189696 unmapped: 73760768 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e35293000 session 0x559e380b5a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4986555 data_alloc: 218103808 data_used: 4780032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e31c0fa40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524214272 unmapped: 73736192 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.477128983s of 10.101735115s, submitted: 272
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524222464 unmapped: 73728000 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e31c0fc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e32973400 session 0x559e321f2960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4985842 data_alloc: 218103808 data_used: 4780032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4985842 data_alloc: 218103808 data_used: 4780032
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e33dd3c00 session 0x559e3732d860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5f9000/0x0/0x1bfc00000, data 0x10cc841/0x12d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.119676590s of 11.755991936s, submitted: 17
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 396 ms_handle_reset con 0x559e36517400 session 0x559e31faf0e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524124160 unmapped: 73826304 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 396 ms_handle_reset con 0x559e36517400 session 0x559e346b2d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524132352 unmapped: 73818112 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4991828 data_alloc: 218103808 data_used: 4792320
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 397 ms_handle_reset con 0x559e31fe4400 session 0x559e3732cf00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 397 heartbeat osd_stat(store_statfs(0x19b5f2000/0x0/0x1bfc00000, data 0x10d0147/0x12db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 397 ms_handle_reset con 0x559e321a7000 session 0x559e32e44d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 397 ms_handle_reset con 0x559e32973400 session 0x559e345632c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4994082 data_alloc: 218103808 data_used: 4792320
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 397 heartbeat osd_stat(store_statfs(0x19b5f3000/0x0/0x1bfc00000, data 0x10d0137/0x12da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 397 heartbeat osd_stat(store_statfs(0x19b5f3000/0x0/0x1bfc00000, data 0x10d0137/0x12da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4997056 data_alloc: 218103808 data_used: 4792320
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 398 heartbeat osd_stat(store_statfs(0x19b5f0000/0x0/0x1bfc00000, data 0x10d1c76/0x12dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 398 heartbeat osd_stat(store_statfs(0x19b5f0000/0x0/0x1bfc00000, data 0x10d1c76/0x12dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4997056 data_alloc: 218103808 data_used: 4792320
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 398 heartbeat osd_stat(store_statfs(0x19b5f0000/0x0/0x1bfc00000, data 0x10d1c76/0x12dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4997056 data_alloc: 218103808 data_used: 4792320
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 398 heartbeat osd_stat(store_statfs(0x19b5f0000/0x0/0x1bfc00000, data 0x10d1c76/0x12dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524181504 unmapped: 73768960 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524181504 unmapped: 73768960 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4997056 data_alloc: 218103808 data_used: 4792320
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 398 heartbeat osd_stat(store_statfs(0x19b5f0000/0x0/0x1bfc00000, data 0x10d1c76/0x12dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.285722733s of 27.344110489s, submitted: 24
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524181504 unmapped: 73768960 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e33dd3c00 session 0x559e321db860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524189696 unmapped: 82157568 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524189696 unmapped: 82157568 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524189696 unmapped: 82157568 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19adec000/0x0/0x1bfc00000, data 0x18d38df/0x1ae1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524189696 unmapped: 82157568 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 09:34:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/585650995' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5057012 data_alloc: 218103808 data_used: 4800512
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e33dd3c00 session 0x559e31c0ed20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e31fe4400 session 0x559e32e150e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e321a7000 session 0x559e35ba9680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524189696 unmapped: 82157568 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e32973400 session 0x559e321d5860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 76775424 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e36517400 session 0x559e32d823c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 76775424 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19adec000/0x0/0x1bfc00000, data 0x18d38df/0x1ae1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e36517400 session 0x559e31c0e000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 76775424 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e31fe4400 session 0x559e348ea960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e321a7000 session 0x559e355a4960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 75587584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5165625 data_alloc: 218103808 data_used: 11612160
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 75587584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 75587584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a36e000/0x0/0x1bfc00000, data 0x2351941/0x2560000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 75587584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.993896484s of 13.169622421s, submitted: 39
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e32973400 session 0x559e355a5e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 75587584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a36e000/0x0/0x1bfc00000, data 0x2351941/0x2560000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 75579392 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a36e000/0x0/0x1bfc00000, data 0x2351941/0x2560000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5165625 data_alloc: 218103808 data_used: 11612160
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a36e000/0x0/0x1bfc00000, data 0x2351941/0x2560000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530857984 unmapped: 75489280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a36e000/0x0/0x1bfc00000, data 0x2351941/0x2560000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5241145 data_alloc: 234881024 data_used: 22200320
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e3e6d7400 session 0x559e3732d0e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a36d000/0x0/0x1bfc00000, data 0x2351951/0x2561000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5241277 data_alloc: 234881024 data_used: 22200320
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.166830063s of 12.190934181s, submitted: 2
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 400 ms_handle_reset con 0x559e31fe4400 session 0x559e363d41e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536436736 unmapped: 69910528 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 400 heartbeat osd_stat(store_statfs(0x1992ef000/0x0/0x1bfc00000, data 0x33ce5aa/0x35df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5376871 data_alloc: 234881024 data_used: 23175168
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 400 heartbeat osd_stat(store_statfs(0x1992ce000/0x0/0x1bfc00000, data 0x33ef5aa/0x3600000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5376655 data_alloc: 234881024 data_used: 23179264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 400 heartbeat osd_stat(store_statfs(0x1992ce000/0x0/0x1bfc00000, data 0x33ef5aa/0x3600000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 400 heartbeat osd_stat(store_statfs(0x1992ce000/0x0/0x1bfc00000, data 0x33ef5aa/0x3600000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.866960526s of 13.409070015s, submitted: 159
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537550848 unmapped: 68796416 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5376603 data_alloc: 234881024 data_used: 23179264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537550848 unmapped: 68796416 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 400 heartbeat osd_stat(store_statfs(0x1992c3000/0x0/0x1bfc00000, data 0x33fa5aa/0x360b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537550848 unmapped: 68796416 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537550848 unmapped: 68796416 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 400 ms_handle_reset con 0x559e321a7000 session 0x559e346970e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537567232 unmapped: 68780032 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 401 ms_handle_reset con 0x559e32973400 session 0x559e346b2f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5385877 data_alloc: 234881024 data_used: 23183360
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 401 heartbeat osd_stat(store_statfs(0x1992ab000/0x0/0x1bfc00000, data 0x340f257/0x3621000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.273681641s of 10.380677223s, submitted: 41
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 401 ms_handle_reset con 0x559e33dd3c00 session 0x559e363f50e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 401 heartbeat osd_stat(store_statfs(0x1992a9000/0x0/0x1bfc00000, data 0x3413257/0x3625000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 401 ms_handle_reset con 0x559e3e6d7400 session 0x559e31f13e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5381037 data_alloc: 234881024 data_used: 23183360
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 401 heartbeat osd_stat(store_statfs(0x1992a9000/0x0/0x1bfc00000, data 0x3413257/0x3625000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 401 handle_osd_map epochs [402,402], i have 402, src has [1,402]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 401 handle_osd_map epochs [402,402], i have 402, src has [1,402]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a5000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5387419 data_alloc: 234881024 data_used: 23449600
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a5000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5387419 data_alloc: 234881024 data_used: 23449600
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a5000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537591808 unmapped: 68755456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.241802216s of 14.524361610s, submitted: 12
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5408483 data_alloc: 234881024 data_used: 24846336
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a4000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5406307 data_alloc: 234881024 data_used: 24854528
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a6000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a6000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5417507 data_alloc: 234881024 data_used: 27004928
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a6000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.198542595s of 13.242837906s, submitted: 25
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a6000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5416979 data_alloc: 234881024 data_used: 27004928
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a6000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a6000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 ms_handle_reset con 0x559e3e6d7400 session 0x559e355a5c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 ms_handle_reset con 0x559e321a7000 session 0x559e346b2000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 ms_handle_reset con 0x559e32973400 session 0x559e37f710e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5416099 data_alloc: 234881024 data_used: 27000832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 ms_handle_reset con 0x559e31fe4400 session 0x559e34420780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 ms_handle_reset con 0x559e33dd3c00 session 0x559e34562f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a7000/0x0/0x1bfc00000, data 0x3414d86/0x3627000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5102671 data_alloc: 218103808 data_used: 11649024
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x19aa39000/0x0/0x1bfc00000, data 0x18d8d24/0x1aea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x19aa39000/0x0/0x1bfc00000, data 0x18d8d24/0x1aea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.055377960s of 15.151789665s, submitted: 31
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 403 ms_handle_reset con 0x559e33dd3c00 session 0x559e31f8fe00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 403 heartbeat osd_stat(store_statfs(0x19b5e1000/0x0/0x1bfc00000, data 0x10da9c1/0x12ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 404 ms_handle_reset con 0x559e31fe4400 session 0x559e31c110e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5091919 data_alloc: 218103808 data_used: 4849664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 404 heartbeat osd_stat(store_statfs(0x19addd000/0x0/0x1bfc00000, data 0x18dc659/0x1af0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 404 heartbeat osd_stat(store_statfs(0x19addd000/0x0/0x1bfc00000, data 0x18dc659/0x1af0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5092079 data_alloc: 218103808 data_used: 4853760
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533725184 unmapped: 72622080 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e321a7000 session 0x559e321c9860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e32973400 session 0x559e380b43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3e6d7400 session 0x559e34697c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533725184 unmapped: 72622080 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19adda000/0x0/0x1bfc00000, data 0x18de198/0x1af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 72613888 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3e6d7400 session 0x559e321f0b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 72613888 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5069037 data_alloc: 218103808 data_used: 10473472
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e31fe4400 session 0x559e355a4f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536092672 unmapped: 70254592 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536092672 unmapped: 70254592 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19adda000/0x0/0x1bfc00000, data 0x18de198/0x1af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19adda000/0x0/0x1bfc00000, data 0x18de198/0x1af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536092672 unmapped: 70254592 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3689e400 session 0x559e34563e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.949297905s of 16.216978073s, submitted: 36
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536092672 unmapped: 70254592 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493e400 session 0x559e380b45a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19acc6000/0x0/0x1bfc00000, data 0x19f21a8/0x1c08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493f800 session 0x559e34562b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493f800 session 0x559e34696000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e31fe4400 session 0x559e37f701e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5185125 data_alloc: 218103808 data_used: 10477568
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493e400 session 0x559e34562d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3689e400 session 0x559e3732da40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3e6d7400 session 0x559e321f3680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199f81000/0x0/0x1bfc00000, data 0x27371a8/0x294d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e31fe4400 session 0x559e32ee4000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493e400 session 0x559e345621e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537706496 unmapped: 68640768 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199f5b000/0x0/0x1bfc00000, data 0x275b1db/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537706496 unmapped: 68640768 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537714688 unmapped: 68632576 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5221372 data_alloc: 234881024 data_used: 14692352
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199f5b000/0x0/0x1bfc00000, data 0x275b1db/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.183856964s of 11.327140808s, submitted: 31
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e36012000 session 0x559e344205a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5290644 data_alloc: 234881024 data_used: 24027136
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199f5a000/0x0/0x1bfc00000, data 0x275b23d/0x2974000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 541229056 unmapped: 65118208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 541229056 unmapped: 65118208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5290776 data_alloc: 234881024 data_used: 24027136
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199f5a000/0x0/0x1bfc00000, data 0x275b23d/0x2974000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 541229056 unmapped: 65118208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543694848 unmapped: 62652416 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199799000/0x0/0x1bfc00000, data 0x2f1423d/0x312d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,6])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542760960 unmapped: 63586304 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x1993f7000/0x0/0x1bfc00000, data 0x32be23d/0x34d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542760960 unmapped: 63586304 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543932416 unmapped: 62414848 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5387926 data_alloc: 234881024 data_used: 24592384
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544071680 unmapped: 62275584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.682239532s of 11.880360603s, submitted: 96
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544071680 unmapped: 62275584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544071680 unmapped: 62275584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x1980fc000/0x0/0x1bfc00000, data 0x341823d/0x3631000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544071680 unmapped: 62275584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x1980fa000/0x0/0x1bfc00000, data 0x341b23d/0x3634000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544292864 unmapped: 62054400 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5404446 data_alloc: 234881024 data_used: 25628672
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544292864 unmapped: 62054400 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5413246 data_alloc: 234881024 data_used: 25882624
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x1980be000/0x0/0x1bfc00000, data 0x345723d/0x3670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.788679123s of 10.885962486s, submitted: 27
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x1980a3000/0x0/0x1bfc00000, data 0x347223d/0x368b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5413850 data_alloc: 234881024 data_used: 25890816
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x198092000/0x0/0x1bfc00000, data 0x348223d/0x369b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5415726 data_alloc: 234881024 data_used: 25903104
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x198092000/0x0/0x1bfc00000, data 0x348223d/0x369b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5415726 data_alloc: 234881024 data_used: 25903104
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x198092000/0x0/0x1bfc00000, data 0x348223d/0x369b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.196617126s of 16.837764740s, submitted: 12
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5416902 data_alloc: 234881024 data_used: 25935872
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807b000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493f800 session 0x559e3732d680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3689e400 session 0x559e31c10780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3691d000 session 0x559e35ba9680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807b000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5417846 data_alloc: 234881024 data_used: 25980928
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807b000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5408086 data_alloc: 234881024 data_used: 26005504
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807b000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807b000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5408086 data_alloc: 234881024 data_used: 26005504
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.805000305s of 16.312940598s, submitted: 7
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544391168 unmapped: 61956096 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544391168 unmapped: 61956096 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544391168 unmapped: 61956096 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544391168 unmapped: 61956096 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807b000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5421270 data_alloc: 234881024 data_used: 26669056
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807d000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5421398 data_alloc: 234881024 data_used: 26673152
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.173282623s of 11.213999748s, submitted: 11
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807d000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5420870 data_alloc: 234881024 data_used: 26673152
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e31fe4400 session 0x559e34421a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807d000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493f800 session 0x559e321d81e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x198075000/0x0/0x1bfc00000, data 0x349d23d/0x36b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [0,0,0,2])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e36012000 session 0x559e363f50e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 61939712 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3689e400 session 0x559e329d41e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493e400 session 0x559e34697680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 61939712 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3689e400 session 0x559e321db680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535543808 unmapped: 70803456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5086099 data_alloc: 218103808 data_used: 10477568
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535543808 unmapped: 70803456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199c3a000/0x0/0x1bfc00000, data 0x18de198/0x1af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535543808 unmapped: 70803456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535543808 unmapped: 70803456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535543808 unmapped: 70803456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.649071693s of 12.372345924s, submitted: 56
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199c3a000/0x0/0x1bfc00000, data 0x18de198/0x1af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 406 handle_osd_map epochs [406,406], i have 406, src has [1,406]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535543808 unmapped: 70803456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5090113 data_alloc: 218103808 data_used: 10420224
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 406 ms_handle_reset con 0x559e31fe4400 session 0x559e363f4d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 406 heartbeat osd_stat(store_statfs(0x19a437000/0x0/0x1bfc00000, data 0x10dfe22/0x12f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5015361 data_alloc: 218103808 data_used: 3670016
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 406 heartbeat osd_stat(store_statfs(0x19a437000/0x0/0x1bfc00000, data 0x10dfe22/0x12f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528818176 unmapped: 77529088 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528818176 unmapped: 77529088 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528818176 unmapped: 77529088 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528826368 unmapped: 77520896 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528826368 unmapped: 77520896 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528826368 unmapped: 77520896 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528826368 unmapped: 77520896 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528826368 unmapped: 77520896 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528834560 unmapped: 77512704 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528834560 unmapped: 77512704 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528834560 unmapped: 77512704 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528834560 unmapped: 77512704 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528842752 unmapped: 77504512 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528842752 unmapped: 77504512 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528850944 unmapped: 77496320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528850944 unmapped: 77496320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493f800 session 0x559e321f23c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e36012000 session 0x559e355a41e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e31fe4400 session 0x559e32d823c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493e400 session 0x559e31c0f0e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 50.496902466s of 51.781383514s, submitted: 33
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528850944 unmapped: 77496320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493f800 session 0x559e32e67c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3689e400 session 0x559e348ea3c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3691d000 session 0x559e329d41e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3691d000 session 0x559e321d81e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e31fe4400 session 0x559e345621e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529965056 unmapped: 76382208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a434000/0x0/0x1bfc00000, data 0x10e199a/0x12fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529965056 unmapped: 76382208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199d11000/0x0/0x1bfc00000, data 0x18049d3/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529965056 unmapped: 76382208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529965056 unmapped: 76382208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5085455 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199d11000/0x0/0x1bfc00000, data 0x18049d3/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529965056 unmapped: 76382208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493e400 session 0x559e321f3680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493f800 session 0x559e3732da40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529965056 unmapped: 76382208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199d11000/0x0/0x1bfc00000, data 0x18049d3/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3689e400 session 0x559e34562d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530006016 unmapped: 76341248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3689e400 session 0x559e37f701e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199d11000/0x0/0x1bfc00000, data 0x18049d3/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530006016 unmapped: 76341248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530006016 unmapped: 76341248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5122818 data_alloc: 218103808 data_used: 8413184
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199ce6000/0x0/0x1bfc00000, data 0x182e9f6/0x1a48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5134178 data_alloc: 218103808 data_used: 10027008
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199ce6000/0x0/0x1bfc00000, data 0x182e9f6/0x1a48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199ce6000/0x0/0x1bfc00000, data 0x182e9f6/0x1a48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530210816 unmapped: 76136448 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.313760757s of 19.464603424s, submitted: 52
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5168926 data_alloc: 218103808 data_used: 10100736
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532914176 unmapped: 73433088 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533962752 unmapped: 72384512 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533962752 unmapped: 72384512 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533962752 unmapped: 72384512 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533962752 unmapped: 72384512 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5252884 data_alloc: 218103808 data_used: 11071488
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x198fa2000/0x0/0x1bfc00000, data 0x25719f6/0x278b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x198fa2000/0x0/0x1bfc00000, data 0x25719f6/0x278b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5252884 data_alloc: 218103808 data_used: 11071488
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x198fa2000/0x0/0x1bfc00000, data 0x25719f6/0x278b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.516111374s of 14.813647270s, submitted: 97
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5252720 data_alloc: 218103808 data_used: 11071488
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e31fe4400 session 0x559e380b45a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493e400 session 0x559e321f0b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x198fa3000/0x0/0x1bfc00000, data 0x25719f6/0x278b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [0,0,0,0,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530997248 unmapped: 75350016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493f800 session 0x559e380b52c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530997248 unmapped: 75350016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530997248 unmapped: 75350016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530997248 unmapped: 75350016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530997248 unmapped: 75350016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531005440 unmapped: 75341824 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531005440 unmapped: 75341824 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531005440 unmapped: 75341824 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531005440 unmapped: 75341824 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531005440 unmapped: 75341824 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531021824 unmapped: 75325440 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531021824 unmapped: 75325440 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531021824 unmapped: 75325440 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 75309056 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 75309056 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 75309056 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 75309056 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 75309056 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 75309056 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531054592 unmapped: 75292672 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531054592 unmapped: 75292672 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531054592 unmapped: 75292672 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531070976 unmapped: 75276288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531070976 unmapped: 75276288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531070976 unmapped: 75276288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531079168 unmapped: 75268096 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531079168 unmapped: 75268096 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 75259904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 75259904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.737773895s of 59.510211945s, submitted: 52
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 75259904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3691d000 session 0x559e355a4f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531283968 unmapped: 82935808 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5141627 data_alloc: 218103808 data_used: 3674112
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3691d000 session 0x559e3732cd20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199730000/0x0/0x1bfc00000, data 0x1de698a/0x1ffe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531283968 unmapped: 82935808 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e31fe4400 session 0x559e321c9680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531283968 unmapped: 82935808 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493e400 session 0x559e321c8d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531283968 unmapped: 82935808 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493f800 session 0x559e31fcb680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3689e400 session 0x559e346452c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532488192 unmapped: 81731584 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19970b000/0x0/0x1bfc00000, data 0x1e0a9d3/0x2023000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532496384 unmapped: 81723392 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5144493 data_alloc: 218103808 data_used: 3678208
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532340736 unmapped: 81879040 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534683648 unmapped: 79536128 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534683648 unmapped: 79536128 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19970b000/0x0/0x1bfc00000, data 0x1e0a9d3/0x2023000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5232653 data_alloc: 234881024 data_used: 14741504
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19970b000/0x0/0x1bfc00000, data 0x1e0a9d3/0x2023000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19970b000/0x0/0x1bfc00000, data 0x1e0a9d3/0x2023000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5232653 data_alloc: 234881024 data_used: 14741504
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.774681091s of 17.730520248s, submitted: 41
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 79314944 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19945e000/0x0/0x1bfc00000, data 0x20b79d3/0x22d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [0,0,0,0,0,3,2,2])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535044096 unmapped: 79175680 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x1992aa000/0x0/0x1bfc00000, data 0x225d9d3/0x2476000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535052288 unmapped: 79167488 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 79839232 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5276397 data_alloc: 234881024 data_used: 15114240
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 79839232 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x197cc7000/0x0/0x1bfc00000, data 0x22849d3/0x249d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291905 data_alloc: 234881024 data_used: 15024128
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.657398224s of 10.153116226s, submitted: 88
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x197cdf000/0x0/0x1bfc00000, data 0x22859d3/0x249e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 407 handle_osd_map epochs [408,408], i have 408, src has [1,408]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 77709312 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5289251 data_alloc: 234881024 data_used: 15032320
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 408 ms_handle_reset con 0x559e3691d000 session 0x559e3487af00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 408 ms_handle_reset con 0x559e3493f800 session 0x559e321c9a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536969216 unmapped: 77250560 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544342016 unmapped: 69877760 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 408 ms_handle_reset con 0x559e32232400 session 0x559e32d834a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544366592 unmapped: 69853184 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 409 ms_handle_reset con 0x559e3dcf6400 session 0x559e344201e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 69836800 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 409 heartbeat osd_stat(store_statfs(0x1976c8000/0x0/0x1bfc00000, data 0x289a2d9/0x2ab5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 409 handle_osd_map epochs [410,410], i have 410, src has [1,410]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 410 ms_handle_reset con 0x559e3691c400 session 0x559e3732d2c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 69820416 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5382451 data_alloc: 234881024 data_used: 21319680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 410 ms_handle_reset con 0x559e32232400 session 0x559e363f4f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 410 ms_handle_reset con 0x559e3493f800 session 0x559e347ff4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 410 ms_handle_reset con 0x559e3691d000 session 0x559e355a50e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 69812224 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 69812224 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 69812224 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 410 heartbeat osd_stat(store_statfs(0x1976c4000/0x0/0x1bfc00000, data 0x289bf4e/0x2ab8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 69812224 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 69812224 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5382451 data_alloc: 234881024 data_used: 21319680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.358633995s of 14.245987892s, submitted: 41
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 76177408 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3dcf6400 session 0x559e32e66d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3e6c5400 session 0x559e34697860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e32232400 session 0x559e34421680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3493f800 session 0x559e346cb4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3dcf6400 session 0x559e346b3860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3691d000 session 0x559e346b30e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3296b800 session 0x559e321354a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 76177408 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e32232400 session 0x559e363d43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3493f800 session 0x559e346b2f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 76177408 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1976c2000/0x0/0x1bfc00000, data 0x289da8d/0x2abb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538050560 unmapped: 76169216 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538050560 unmapped: 76169216 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5367523 data_alloc: 234881024 data_used: 21327872
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538050560 unmapped: 76169216 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538050560 unmapped: 76169216 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538050560 unmapped: 76169216 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3691d000 session 0x559e321d52c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538058752 unmapped: 76161024 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1976c3000/0x0/0x1bfc00000, data 0x289da8d/0x2abb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538066944 unmapped: 76152832 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373524 data_alloc: 234881024 data_used: 21962752
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1976c3000/0x0/0x1bfc00000, data 0x289da8d/0x2abb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1976c3000/0x0/0x1bfc00000, data 0x289da8d/0x2abb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373524 data_alloc: 234881024 data_used: 21962752
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.111186981s of 16.436338425s, submitted: 30
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5374036 data_alloc: 234881024 data_used: 21954560
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1976c3000/0x0/0x1bfc00000, data 0x289da8d/0x2abb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539541504 unmapped: 74678272 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539541504 unmapped: 74678272 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539541504 unmapped: 74678272 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539541504 unmapped: 74678272 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5414037 data_alloc: 234881024 data_used: 22958080
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1973b3000/0x0/0x1bfc00000, data 0x2bada8d/0x2dcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539541504 unmapped: 74678272 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1973b3000/0x0/0x1bfc00000, data 0x2bada8d/0x2dcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539541504 unmapped: 74678272 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1973b3000/0x0/0x1bfc00000, data 0x2bada8d/0x2dcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.186029434s of 10.228796005s, submitted: 5
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1973b3000/0x0/0x1bfc00000, data 0x2bada8d/0x2dcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x197345000/0x0/0x1bfc00000, data 0x2c1ba8d/0x2e39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5414683 data_alloc: 234881024 data_used: 22958080
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x197345000/0x0/0x1bfc00000, data 0x2c1ba8d/0x2e39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5414683 data_alloc: 234881024 data_used: 22958080
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x197345000/0x0/0x1bfc00000, data 0x2c1ba8d/0x2e39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5444555 data_alloc: 234881024 data_used: 24694784
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x197110000/0x0/0x1bfc00000, data 0x2e4da8d/0x306b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542179328 unmapped: 72040448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5446475 data_alloc: 234881024 data_used: 24965120
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542179328 unmapped: 72040448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e321a6800 session 0x559e363d4f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.819005966s of 18.885263443s, submitted: 14
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x197110000/0x0/0x1bfc00000, data 0x2e4da8d/0x306b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x197113000/0x0/0x1bfc00000, data 0x2e4da8d/0x306b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539893760 unmapped: 74326016 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 412 handle_osd_map epochs [412,412], i have 412, src has [1,412]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 412 ms_handle_reset con 0x559e36435400 session 0x559e32135680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 412 ms_handle_reset con 0x559e32232400 session 0x559e346963c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 412 ms_handle_reset con 0x559e321a6800 session 0x559e34696960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 412 ms_handle_reset con 0x559e3493f800 session 0x559e321c81e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 550871040 unmapped: 67551232 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 412 handle_osd_map epochs [413,413], i have 413, src has [1,413]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 413 ms_handle_reset con 0x559e3691d000 session 0x559e363f43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545439744 unmapped: 72982528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 414 ms_handle_reset con 0x559e3689f000 session 0x559e321d43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545447936 unmapped: 72974336 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5641245 data_alloc: 251658240 data_used: 32403456
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 414 ms_handle_reset con 0x559e3689f000 session 0x559e329d43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 414 ms_handle_reset con 0x559e321a6800 session 0x559e380b43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 414 ms_handle_reset con 0x559e32232400 session 0x559e321fa1e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 414 heartbeat osd_stat(store_statfs(0x195acb000/0x0/0x1bfc00000, data 0x4490008/0x46b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545456128 unmapped: 72966144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545456128 unmapped: 72966144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545472512 unmapped: 72949760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 415 heartbeat osd_stat(store_statfs(0x195ac9000/0x0/0x1bfc00000, data 0x4491cd1/0x46b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 415 ms_handle_reset con 0x559e3493f800 session 0x559e321db0e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545472512 unmapped: 72949760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 415 ms_handle_reset con 0x559e3dcf6400 session 0x559e31f8f680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 415 ms_handle_reset con 0x559e32f09400 session 0x559e35ba8780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 415 heartbeat osd_stat(store_statfs(0x197107000/0x0/0x1bfc00000, data 0x2e54cd1/0x3077000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545480704 unmapped: 72941568 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5485319 data_alloc: 251658240 data_used: 32403456
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545488896 unmapped: 72933376 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.492953300s of 10.028572083s, submitted: 121
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 416 ms_handle_reset con 0x559e321a6800 session 0x559e31fca3c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 72925184 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 72925184 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 72925184 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 72925184 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5423905 data_alloc: 234881024 data_used: 26447872
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 417 heartbeat osd_stat(store_statfs(0x1976b3000/0x0/0x1bfc00000, data 0x28a682c/0x2aca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 417 handle_osd_map epochs [418,418], i have 418, src has [1,418]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545505280 unmapped: 72916992 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 418 ms_handle_reset con 0x559e32232400 session 0x559e32135c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545521664 unmapped: 72900608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 418 ms_handle_reset con 0x559e31fe4400 session 0x559e363d50e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 418 ms_handle_reset con 0x559e3493e400 session 0x559e348eb2c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545529856 unmapped: 72892416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534167552 unmapped: 84254720 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 418 ms_handle_reset con 0x559e3493e400 session 0x559e346b23c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534167552 unmapped: 84254720 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5107630 data_alloc: 218103808 data_used: 3715072
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534167552 unmapped: 84254720 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 418 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f4fc2/0x1319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 418 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f4fc2/0x1319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 418 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f4fc2/0x1319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5107630 data_alloc: 218103808 data_used: 3715072
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.582468033s of 14.561615944s, submitted: 68
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534183936 unmapped: 84238336 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e31fe4400 session 0x559e32ddb4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534208512 unmapped: 84213760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e321a6800 session 0x559e321f3e00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534208512 unmapped: 84213760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e32232400 session 0x559e355a5860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534208512 unmapped: 84213760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e32f09400 session 0x559e321c8000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 48.362262726s of 48.373008728s, submitted: 11
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534241280 unmapped: 84180992 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e31fe4400 session 0x559e363f4780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 84172800 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5115500 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 84172800 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 84172800 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e3d000/0x0/0x1bfc00000, data 0x111ab10/0x1341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 84172800 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534257664 unmapped: 84164608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e321a6800 session 0x559e363f4780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e32232400 session 0x559e355a5860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e3493e400 session 0x559e32ddb4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534257664 unmapped: 84164608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111667 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534257664 unmapped: 84164608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534257664 unmapped: 84164608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e3493f800 session 0x559e363d50e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534257664 unmapped: 84164608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 84156416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 84156416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111667 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 84156416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e3493f800 session 0x559e32135c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 84156416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 84156416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e31fe4400 session 0x559e35ba8780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e321a6800 session 0x559e321db0e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111667 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.037994385s of 17.063156128s, submitted: 6
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e32232400 session 0x559e321fa1e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 74K writes, 304K keys, 74K commit groups, 1.0 writes per commit group, ingest: 0.31 GB, 0.05 MB/s
Cumulative WAL: 74K writes, 27K syncs, 2.73 writes per sync, written: 0.31 GB, 0.05 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2624 writes, 10K keys, 2624 commit groups, 1.0 writes per commit group, ingest: 10.09 MB, 0.02 MB/s
Interval WAL: 2624 writes, 1007 syncs, 2.61 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559e305a74b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559e305a74b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b11/0x131d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b11/0x131d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e3493e400 session 0x559e329d43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5112727 data_alloc: 218103808 data_used: 3723264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e3493e400 session 0x559e363f43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534282240 unmapped: 84140032 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e31fe4400 session 0x559e34696960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534601728 unmapped: 83820544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e321a6800 session 0x559e346b30e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534601728 unmapped: 83820544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198ab1000/0x0/0x1bfc00000, data 0x14a6b11/0x16cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534609920 unmapped: 83812352 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e32232400 session 0x559e34697860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534568960 unmapped: 83853312 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5153874 data_alloc: 218103808 data_used: 3735552
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534568960 unmapped: 83853312 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534568960 unmapped: 83853312 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 heartbeat osd_stat(store_statfs(0x198aae000/0x0/0x1bfc00000, data 0x14a876a/0x16d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 83845120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 83845120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 83845120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5153874 data_alloc: 218103808 data_used: 3735552
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 83845120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 heartbeat osd_stat(store_statfs(0x198aae000/0x0/0x1bfc00000, data 0x14a876a/0x16d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.734244347s of 15.983036041s, submitted: 33
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 83845120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e3689f000 session 0x559e32ddbe00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e3493f800 session 0x559e321f3860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534585344 unmapped: 83836928 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534585344 unmapped: 83836928 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534593536 unmapped: 83828736 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5157176 data_alloc: 218103808 data_used: 3735552
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 heartbeat osd_stat(store_statfs(0x198aad000/0x0/0x1bfc00000, data 0x14a877a/0x16d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534593536 unmapped: 83828736 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e3689f000 session 0x559e321fbe00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 heartbeat osd_stat(store_statfs(0x198aad000/0x0/0x1bfc00000, data 0x14a877a/0x16d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534593536 unmapped: 83828736 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e31fe4400 session 0x559e31c0fc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e321a6800 session 0x559e363f5680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534593536 unmapped: 83828736 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e32232400 session 0x559e31f134a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534593536 unmapped: 83828736 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534601728 unmapped: 83820544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5184599 data_alloc: 218103808 data_used: 7045120
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 heartbeat osd_stat(store_statfs(0x198aac000/0x0/0x1bfc00000, data 0x14a878a/0x16d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534601728 unmapped: 83820544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e32232400 session 0x559e346961e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e31fe4400 session 0x559e346cbc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534601728 unmapped: 83820544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.108129501s of 10.257454872s, submitted: 18
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e321a6800 session 0x559e34420960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 heartbeat osd_stat(store_statfs(0x198aad000/0x0/0x1bfc00000, data 0x14a877a/0x16d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534609920 unmapped: 83812352 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e3493f800 session 0x559e32d832c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e3689f000 session 0x559e346b2000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535658496 unmapped: 82763776 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e31fe4400 session 0x559e34562f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535666688 unmapped: 82755584 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5183696 data_alloc: 218103808 data_used: 7307264
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 421 ms_handle_reset con 0x559e321a6800 session 0x559e321d43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535674880 unmapped: 82747392 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 421 ms_handle_reset con 0x559e32232400 session 0x559e363d4780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535691264 unmapped: 82731008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 421 ms_handle_reset con 0x559e3493f800 session 0x559e321f10e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 421 heartbeat osd_stat(store_statfs(0x198e5b000/0x0/0x1bfc00000, data 0x10fa407/0x1322000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535691264 unmapped: 82731008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535691264 unmapped: 82731008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535642112 unmapped: 82780160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5128019 data_alloc: 218103808 data_used: 3743744
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535707648 unmapped: 82714624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535724032 unmapped: 82698240 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.888588905s of 10.895521164s, submitted: 223
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 421 ms_handle_reset con 0x559e3493e400 session 0x559e31c10000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535724032 unmapped: 82698240 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 421 heartbeat osd_stat(store_statfs(0x198e5c000/0x0/0x1bfc00000, data 0x10fa407/0x1322000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [0,1])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 421 ms_handle_reset con 0x559e31fe4400 session 0x559e31f13a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535773184 unmapped: 82649088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535773184 unmapped: 82649088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5127947 data_alloc: 218103808 data_used: 3743744
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198e58000/0x0/0x1bfc00000, data 0x10fbf46/0x1325000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e321a6800 session 0x559e321c9c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198e58000/0x0/0x1bfc00000, data 0x10fbf46/0x1325000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e32232400 session 0x559e3138f680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5132121 data_alloc: 218103808 data_used: 3751936
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198e58000/0x0/0x1bfc00000, data 0x10fbf46/0x1325000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e3493f800 session 0x559e3732c3c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.618832588s of 10.136927605s, submitted: 109
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e3691d000 session 0x559e3487ba40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535732224 unmapped: 82690048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e3691d000 session 0x559e31f8fe00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535732224 unmapped: 82690048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5220801 data_alloc: 218103808 data_used: 3751936
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198327000/0x0/0x1bfc00000, data 0x1c2df46/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198327000/0x0/0x1bfc00000, data 0x1c2df46/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5220801 data_alloc: 218103808 data_used: 3751936
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198327000/0x0/0x1bfc00000, data 0x1c2df46/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5220801 data_alloc: 218103808 data_used: 3751936
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198327000/0x0/0x1bfc00000, data 0x1c2df46/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5220801 data_alloc: 218103808 data_used: 3751936
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e31fe4400 session 0x559e32d82b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198327000/0x0/0x1bfc00000, data 0x1c2df46/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e321a6800 session 0x559e346445a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e32232400 session 0x559e32134000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.890636444s of 19.963485718s, submitted: 15
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e3493f800 session 0x559e31c10780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536092672 unmapped: 82329600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536092672 unmapped: 82329600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536109056 unmapped: 82313216 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5265161 data_alloc: 218103808 data_used: 9170944
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x1982fc000/0x0/0x1bfc00000, data 0x1c57f56/0x1e82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5307721 data_alloc: 234881024 data_used: 15204352
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x1982fc000/0x0/0x1bfc00000, data 0x1c57f56/0x1e82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x1982fc000/0x0/0x1bfc00000, data 0x1c57f56/0x1e82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5307721 data_alloc: 234881024 data_used: 15204352
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.728299141s of 12.746973038s, submitted: 3
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540131328 unmapped: 78290944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539566080 unmapped: 78856192 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539566080 unmapped: 78856192 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x197b97000/0x0/0x1bfc00000, data 0x23bcf56/0x25e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5377333 data_alloc: 234881024 data_used: 16367616
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x197b97000/0x0/0x1bfc00000, data 0x23bcf56/0x25e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x197b97000/0x0/0x1bfc00000, data 0x23bcf56/0x25e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5378173 data_alloc: 234881024 data_used: 16367616
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.133055687s of 11.368740082s, submitted: 81
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540008448 unmapped: 78413824 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x197b75000/0x0/0x1bfc00000, data 0x23def56/0x2609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e32232400 session 0x559e34697680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e321a6800 session 0x559e346b3a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b71000/0x0/0x1bfc00000, data 0x23e0d01/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5383844 data_alloc: 234881024 data_used: 16375808
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b6d000/0x0/0x1bfc00000, data 0x259ad01/0x2611000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b6d000/0x0/0x1bfc00000, data 0x259ad01/0x2611000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5396572 data_alloc: 234881024 data_used: 16375808
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b6d000/0x0/0x1bfc00000, data 0x259ad01/0x2611000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.175535202s of 11.225886345s, submitted: 12
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b6a000/0x0/0x1bfc00000, data 0x259dd01/0x2614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3ac5a400 session 0x559e3487ad20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3691d000 session 0x559e31c101e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b69000/0x0/0x1bfc00000, data 0x259dd63/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540024832 unmapped: 78397440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5399792 data_alloc: 234881024 data_used: 16375808
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540024832 unmapped: 78397440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540024832 unmapped: 78397440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e32f09c00 session 0x559e34696b40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b69000/0x0/0x1bfc00000, data 0x259dd63/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540033024 unmapped: 78389248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e321a6800 session 0x559e355a4f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540033024 unmapped: 78389248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e32232400 session 0x559e355a4780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3691d000 session 0x559e355a5a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540041216 unmapped: 78381056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5401113 data_alloc: 234881024 data_used: 16375808
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540041216 unmapped: 78381056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b69000/0x0/0x1bfc00000, data 0x259dd63/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540041216 unmapped: 78381056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540049408 unmapped: 78372864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540049408 unmapped: 78372864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.440385818s of 11.490693092s, submitted: 14
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b69000/0x0/0x1bfc00000, data 0x259dd63/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5406917 data_alloc: 234881024 data_used: 16429056
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5406917 data_alloc: 234881024 data_used: 16429056
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5416031 data_alloc: 234881024 data_used: 18796544
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.040010452s of 15.073230743s, submitted: 10
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5415935 data_alloc: 234881024 data_used: 18796544
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5415935 data_alloc: 234881024 data_used: 18796544
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5415935 data_alloc: 234881024 data_used: 18796544
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 78356480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 78356480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 78356480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5415935 data_alloc: 234881024 data_used: 18796544
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 78356480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 78356480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.186006546s of 18.193222046s, submitted: 2
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3ac5a400 session 0x559e32e67c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3a5dd000 session 0x559e380b43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 78356480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540073984 unmapped: 78348288 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e321a6800 session 0x559e32e45680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540090368 unmapped: 78331904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5414851 data_alloc: 234881024 data_used: 18800640
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540090368 unmapped: 78331904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540090368 unmapped: 78331904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540090368 unmapped: 78331904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e32232400 session 0x559e348ebc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540098560 unmapped: 78323712 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3691d000 session 0x559e31fcb680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540114944 unmapped: 78307328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413563 data_alloc: 234881024 data_used: 18796544
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d01/0x2618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540114944 unmapped: 78307328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d01/0x2618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540114944 unmapped: 78307328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3a5dd000 session 0x559e345630e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540114944 unmapped: 78307328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540114944 unmapped: 78307328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.828241348s of 11.966302872s, submitted: 44
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540123136 unmapped: 78299136 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5409117 data_alloc: 234881024 data_used: 18804736
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 424 ms_handle_reset con 0x559e3ac5a400 session 0x559e346b30e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540123136 unmapped: 78299136 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 424 ms_handle_reset con 0x559e3493f800 session 0x559e321d4f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 424 ms_handle_reset con 0x559e31fe4400 session 0x559e3138f680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540123136 unmapped: 78299136 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 424 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x23eb9ae/0x2618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540131328 unmapped: 78290944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 424 heartbeat osd_stat(store_statfs(0x197b67000/0x0/0x1bfc00000, data 0x23eb99e/0x2617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540139520 unmapped: 78282752 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 424 ms_handle_reset con 0x559e321a6800 session 0x559e346b25a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540139520 unmapped: 78282752 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5399798 data_alloc: 234881024 data_used: 18669568
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 424 heartbeat osd_stat(store_statfs(0x197b91000/0x0/0x1bfc00000, data 0x23c199e/0x25ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540155904 unmapped: 78266368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540155904 unmapped: 78266368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540155904 unmapped: 78266368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 425 ms_handle_reset con 0x559e32232400 session 0x559e31c10000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540155904 unmapped: 78266368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540155904 unmapped: 78266368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5403972 data_alloc: 234881024 data_used: 18677760
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 425 heartbeat osd_stat(store_statfs(0x197b8d000/0x0/0x1bfc00000, data 0x23c34dd/0x25f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540155904 unmapped: 78266368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.849381447s of 11.910986900s, submitted: 28
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 425 heartbeat osd_stat(store_statfs(0x198e50000/0x0/0x1bfc00000, data 0x11014dd/0x132e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 425 ms_handle_reset con 0x559e3691d000 session 0x559e321c9c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 425 heartbeat osd_stat(store_statfs(0x198e50000/0x0/0x1bfc00000, data 0x11014dd/0x132e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5159680 data_alloc: 218103808 data_used: 3776512
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 425 heartbeat osd_stat(store_statfs(0x198e50000/0x0/0x1bfc00000, data 0x11014dd/0x132e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 426 heartbeat osd_stat(store_statfs(0x198e4c000/0x0/0x1bfc00000, data 0x110318a/0x1331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5163854 data_alloc: 218103808 data_used: 3784704
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 426 ms_handle_reset con 0x559e31fe4400 session 0x559e380b54a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 426 heartbeat osd_stat(store_statfs(0x198e4d000/0x0/0x1bfc00000, data 0x110318a/0x1331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5162542 data_alloc: 218103808 data_used: 3788800
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 426 heartbeat osd_stat(store_statfs(0x198e4d000/0x0/0x1bfc00000, data 0x110318a/0x1331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.663026810s of 14.793736458s, submitted: 42
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198e49000/0x0/0x1bfc00000, data 0x1104cc9/0x1334000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5166716 data_alloc: 218103808 data_used: 3796992
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198e49000/0x0/0x1bfc00000, data 0x1104cc9/0x1334000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e321a6800 session 0x559e3732c960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e32232400 session 0x559e321c8d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3493f800 session 0x559e32ee5860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5169092 data_alloc: 218103808 data_used: 3796992
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198e49000/0x0/0x1bfc00000, data 0x1104d2b/0x1335000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534405120 unmapped: 84017152 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198e49000/0x0/0x1bfc00000, data 0x1104d2b/0x1335000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.876494408s of 10.890866280s, submitted: 12
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3a5dd000 session 0x559e3732da40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534413312 unmapped: 84008960 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198e49000/0x0/0x1bfc00000, data 0x1104d2b/0x1335000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e31fe4400 session 0x559e34562f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534421504 unmapped: 84000768 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e321a6800 session 0x559e31c10000
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534503424 unmapped: 83918848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e32232400 session 0x559e346b30e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5199550 data_alloc: 218103808 data_used: 3796992
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534503424 unmapped: 83918848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534503424 unmapped: 83918848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534503424 unmapped: 83918848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534503424 unmapped: 83918848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b5f000/0x0/0x1bfc00000, data 0x13efcc9/0x161f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5199550 data_alloc: 218103808 data_used: 3796992
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b5f000/0x0/0x1bfc00000, data 0x13efcc9/0x161f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b5f000/0x0/0x1bfc00000, data 0x13efcc9/0x161f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5199550 data_alloc: 218103808 data_used: 3796992
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b5f000/0x0/0x1bfc00000, data 0x13efcc9/0x161f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5199550 data_alloc: 218103808 data_used: 3796992
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b5f000/0x0/0x1bfc00000, data 0x13efcc9/0x161f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3493f800 session 0x559e31fcb680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3dcf6000 session 0x559e348ebc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3dcf6000 session 0x559e380b43c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.670154572s of 21.787570953s, submitted: 36
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e31fe4400 session 0x559e32e67c20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5204921 data_alloc: 218103808 data_used: 3796992
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534528000 unmapped: 83894272 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b3a000/0x0/0x1bfc00000, data 0x1413cd9/0x1644000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5219613 data_alloc: 218103808 data_used: 5652480
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b3a000/0x0/0x1bfc00000, data 0x1413cd9/0x1644000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b3a000/0x0/0x1bfc00000, data 0x1413cd9/0x1644000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5219613 data_alloc: 218103808 data_used: 5652480
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b3a000/0x0/0x1bfc00000, data 0x1413cd9/0x1644000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.476041794s of 14.511431694s, submitted: 9
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537731072 unmapped: 80691200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5290733 data_alloc: 218103808 data_used: 6332416
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x1982a1000/0x0/0x1bfc00000, data 0x1cabcd9/0x1edc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5294823 data_alloc: 218103808 data_used: 6479872
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829f000/0x0/0x1bfc00000, data 0x1caecd9/0x1edf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e321a6800 session 0x559e31c101e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e32232400 session 0x559e34697680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537763840 unmapped: 80658432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.910843849s of 10.110754967s, submitted: 90
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3493f800 session 0x559e329d45a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5283446 data_alloc: 218103808 data_used: 6369280
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x1982c3000/0x0/0x1bfc00000, data 0x1c8bcc9/0x1ebb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5283446 data_alloc: 218103808 data_used: 6369280
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x1982c3000/0x0/0x1bfc00000, data 0x1c8bcc9/0x1ebb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5283446 data_alloc: 218103808 data_used: 6369280
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e31fe4400 session 0x559e348eb4a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x1982c3000/0x0/0x1bfc00000, data 0x1c8bcc9/0x1ebb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e321a6800 session 0x559e348eb680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e32232400 session 0x559e32e15a40
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.753578186s of 15.813142776s, submitted: 17
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 80478208 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3dcf6000 session 0x559e321c85a0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5288358 data_alloc: 218103808 data_used: 6373376
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 80478208 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536797184 unmapped: 81625088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536797184 unmapped: 81625088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536797184 unmapped: 81625088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536797184 unmapped: 81625088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5289318 data_alloc: 218103808 data_used: 6451200
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5289318 data_alloc: 218103808 data_used: 6451200
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.299868584s of 12.453051567s, submitted: 3
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5306294 data_alloc: 218103808 data_used: 8187904
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5306294 data_alloc: 218103808 data_used: 8187904
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5309654 data_alloc: 218103808 data_used: 8790016
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.314167023s of 14.153972626s, submitted: 2
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 handle_osd_map epochs [428,428], i have 428, src has [1,428]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e34c1c400 session 0x559e3487ad20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536797184 unmapped: 81625088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536797184 unmapped: 81625088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e321a6800 session 0x559e346b21e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e31fe4400 session 0x559e32134960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5356888 data_alloc: 218103808 data_used: 8798208
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198294000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536813568 unmapped: 81608704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536813568 unmapped: 81608704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536821760 unmapped: 81600512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198294000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536821760 unmapped: 81600512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5356888 data_alloc: 218103808 data_used: 8798208
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e32232400 session 0x559e32a00f00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e3dcf6000 session 0x559e32e66780
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198294000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e35292800 session 0x559e321c8960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.009097099s of 11.039314270s, submitted: 5
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e31fe4400 session 0x559e32e672c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198294000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198295000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5357093 data_alloc: 218103808 data_used: 8802304
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5357093 data_alloc: 218103808 data_used: 8802304
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536838144 unmapped: 81584128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198295000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536846336 unmapped: 81575936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536846336 unmapped: 81575936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536846336 unmapped: 81575936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198295000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536854528 unmapped: 81567744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.068496704s of 13.099369049s, submitted: 7
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5387847 data_alloc: 218103808 data_used: 9965568
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801c000/0x0/0x1bfc00000, data 0x2461993/0x2162000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801c000/0x0/0x1bfc00000, data 0x2461993/0x2162000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801c000/0x0/0x1bfc00000, data 0x2461993/0x2162000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5388487 data_alloc: 218103808 data_used: 9981952
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801c000/0x0/0x1bfc00000, data 0x2461993/0x2162000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534913024 unmapped: 83509248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.394811630s of 10.438727379s, submitted: 7
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5389671 data_alloc: 218103808 data_used: 9981952
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534913024 unmapped: 83509248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534913024 unmapped: 83509248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534913024 unmapped: 83509248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534921216 unmapped: 83501056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801b000/0x0/0x1bfc00000, data 0x2461993/0x2163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534921216 unmapped: 83501056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5398279 data_alloc: 218103808 data_used: 9981952
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535044096 unmapped: 83378176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535044096 unmapped: 83378176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535044096 unmapped: 83378176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535101440 unmapped: 83320832 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x197fad000/0x0/0x1bfc00000, data 0x24cf993/0x21d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535101440 unmapped: 83320832 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x197fad000/0x0/0x1bfc00000, data 0x24cf993/0x21d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5396103 data_alloc: 218103808 data_used: 9981952
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534642688 unmapped: 83779584 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534642688 unmapped: 83779584 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534650880 unmapped: 83771392 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x197fad000/0x0/0x1bfc00000, data 0x24cf993/0x21d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534650880 unmapped: 83771392 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534650880 unmapped: 83771392 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5397863 data_alloc: 218103808 data_used: 10248192
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534650880 unmapped: 83771392 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.697258949s of 15.726952553s, submitted: 6
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e321a6800 session 0x559e363d4960
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e32232400 session 0x559e34562d20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e3dcf6000 session 0x559e3732cd20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534650880 unmapped: 83771392 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534659072 unmapped: 83763200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534659072 unmapped: 83763200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x197fad000/0x0/0x1bfc00000, data 0x24cf993/0x21d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534659072 unmapped: 83763200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e379c5000 session 0x559e3487b680
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5396787 data_alloc: 218103808 data_used: 10248192
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e31fe4400 session 0x559e380b52c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534659072 unmapped: 83763200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e321a6800 session 0x559e346b3860
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534659072 unmapped: 83763200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 429 ms_handle_reset con 0x559e32232400 session 0x559e363f41e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534667264 unmapped: 83755008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534667264 unmapped: 83755008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 429 heartbeat osd_stat(store_statfs(0x198298000/0x0/0x1bfc00000, data 0x1cb35de/0x1ee6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534667264 unmapped: 83755008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5346385 data_alloc: 218103808 data_used: 9920512
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534667264 unmapped: 83755008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 429 ms_handle_reset con 0x559e36898400 session 0x559e321c92c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 429 ms_handle_reset con 0x559e3ac5a000 session 0x559e346452c0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534667264 unmapped: 83755008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.201784134s of 11.308134079s, submitted: 33
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534675456 unmapped: 83746816 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 429 heartbeat osd_stat(store_statfs(0x1982bd000/0x0/0x1bfc00000, data 0x1c8f5cf/0x1ec1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 429 ms_handle_reset con 0x559e3ac5a000 session 0x559e344210e0
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534675456 unmapped: 83746816 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534675456 unmapped: 83746816 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5339505 data_alloc: 218103808 data_used: 9814016
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534675456 unmapped: 83746816 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x1982b9000/0x0/0x1bfc00000, data 0x1c9110e/0x1ec4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 83730432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 ms_handle_reset con 0x559e31fe4400 session 0x559e3732dc20
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 83730432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 ms_handle_reset con 0x559e321a6800 session 0x559e3732de00
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531636224 unmapped: 86786048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531636224 unmapped: 86786048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531652608 unmapped: 86769664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531726336 unmapped: 86695936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531726336 unmapped: 86695936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531726336 unmapped: 86695936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531726336 unmapped: 86695936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531791872 unmapped: 86630400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: do_command 'config diff' '{prefix=config diff}'
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: do_command 'config show' '{prefix=config show}'
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: do_command 'counter dump' '{prefix=counter dump}'
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: do_command 'counter schema' '{prefix=counter schema}'
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531070976 unmapped: 87351296 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530735104 unmapped: 87687168 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:34:19 np0005466031 ceph-osd[79023]: do_command 'log dump' '{prefix=log dump}'
Oct  2 09:34:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 09:34:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/847947838' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 09:34:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:20.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 09:34:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2831220437' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 09:34:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct  2 09:34:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1738238686' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  2 09:34:20 np0005466031 nova_compute[235803]: 2025-10-02 13:34:20.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:21.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct  2 09:34:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3386889593' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  2 09:34:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:22.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct  2 09:34:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1179389264' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  2 09:34:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct  2 09:34:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/209078430' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  2 09:34:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct  2 09:34:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/564963111' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  2 09:34:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:34:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:23.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:34:23 np0005466031 nova_compute[235803]: 2025-10-02 13:34:23.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct  2 09:34:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/990446423' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  2 09:34:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct  2 09:34:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2317346084' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  2 09:34:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct  2 09:34:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3780658384' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  2 09:34:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct  2 09:34:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1810802137' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  2 09:34:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct  2 09:34:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2877372303' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  2 09:34:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:24.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct  2 09:34:24 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3258619475' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  2 09:34:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct  2 09:34:24 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/11611765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct  2 09:34:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct  2 09:34:24 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3605085164' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct  2 09:34:24 np0005466031 systemd[1]: Starting Hostname Service...
Oct  2 09:34:24 np0005466031 systemd[1]: Started Hostname Service.
Oct  2 09:34:24 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct  2 09:34:24 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1683149492' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct  2 09:34:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:25.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct  2 09:34:25 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1263180927' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct  2 09:34:25 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct  2 09:34:25 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3341206882' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct  2 09:34:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:34:25.901 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:34:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:34:25.903 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:34:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:34:25.903 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:34:25 np0005466031 nova_compute[235803]: 2025-10-02 13:34:25.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:26.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:34:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:34:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:34:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:34:26 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct  2 09:34:26 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1392752194' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct  2 09:34:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:34:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:27.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:34:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct  2 09:34:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2709166255' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct  2 09:34:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct  2 09:34:27 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2576054269' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  2 09:34:28 np0005466031 nova_compute[235803]: 2025-10-02 13:34:28.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:28.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct  2 09:34:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1619898129' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct  2 09:34:28 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct  2 09:34:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1544442175' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct  2 09:34:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:34:28 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:29.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1577056816' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.484187) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069484226, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 855, "num_deletes": 251, "total_data_size": 1252814, "memory_usage": 1268672, "flush_reason": "Manual Compaction"}
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069490154, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 825939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89308, "largest_seqno": 90158, "table_properties": {"data_size": 821354, "index_size": 1980, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13647, "raw_average_key_size": 22, "raw_value_size": 811086, "raw_average_value_size": 1320, "num_data_blocks": 84, "num_entries": 614, "num_filter_entries": 614, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412039, "oldest_key_time": 1759412039, "file_creation_time": 1759412069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 6002 microseconds, and 3238 cpu microseconds.
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.490189) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 825939 bytes OK
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.490209) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.492039) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.492060) EVENT_LOG_v1 {"time_micros": 1759412069492053, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.492077) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 1247742, prev total WAL file size 1247742, number of live WAL files 2.
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.492736) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(806KB)], [183(11MB)]
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069492775, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 13275498, "oldest_snapshot_seqno": -1}
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 10980 keys, 11368750 bytes, temperature: kUnknown
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069565861, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 11368750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11301544, "index_size": 38677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27461, "raw_key_size": 290834, "raw_average_key_size": 26, "raw_value_size": 11113428, "raw_average_value_size": 1012, "num_data_blocks": 1453, "num_entries": 10980, "num_filter_entries": 10980, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759412069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.566155) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11368750 bytes
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.567756) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.4 rd, 155.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 11.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(29.8) write-amplify(13.8) OK, records in: 11496, records dropped: 516 output_compression: NoCompression
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.567775) EVENT_LOG_v1 {"time_micros": 1759412069567766, "job": 118, "event": "compaction_finished", "compaction_time_micros": 73180, "compaction_time_cpu_micros": 28991, "output_level": 6, "num_output_files": 1, "total_output_size": 11368750, "num_input_records": 11496, "num_output_records": 10980, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069568194, "job": 118, "event": "table_file_deletion", "file_number": 185}
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069570508, "job": 118, "event": "table_file_deletion", "file_number": 183}
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.492661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.570639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.570644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.570646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.570647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:34:29.570649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct  2 09:34:29 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4115939123' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct  2 09:34:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:34:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:34:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:34:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:30.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:34:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Oct  2 09:34:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3804244376' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct  2 09:34:30 np0005466031 podman[346817]: 2025-10-02 13:34:30.51358241 +0000 UTC m=+0.068373541 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:34:30 np0005466031 podman[346819]: 2025-10-02 13:34:30.539758774 +0000 UTC m=+0.095378059 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:34:30 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct  2 09:34:30 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2364329330' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct  2 09:34:30 np0005466031 nova_compute[235803]: 2025-10-02 13:34:30.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:34:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:31.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:34:31 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct  2 09:34:31 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1326738878' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct  2 09:34:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:32.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct  2 09:34:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1448114110' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct  2 09:34:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct  2 09:34:32 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3814933560' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct  2 09:34:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:33.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:33 np0005466031 nova_compute[235803]: 2025-10-02 13:34:33.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:33 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct  2 09:34:33 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3043116930' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct  2 09:34:33 np0005466031 nova_compute[235803]: 2025-10-02 13:34:33.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:34.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:35.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct  2 09:34:35 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3885141461' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct  2 09:34:35 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct  2 09:34:35 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2454552929' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct  2 09:34:35 np0005466031 nova_compute[235803]: 2025-10-02 13:34:35.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:36.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:36 np0005466031 podman[347983]: 2025-10-02 13:34:36.942993873 +0000 UTC m=+0.062950915 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:34:36 np0005466031 podman[347988]: 2025-10-02 13:34:36.943070115 +0000 UTC m=+0.059035602 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:34:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:37.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct  2 09:34:37 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/534626633' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct  2 09:34:37 np0005466031 ovs-appctl[348532]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  2 09:34:37 np0005466031 ovs-appctl[348537]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  2 09:34:37 np0005466031 ovs-appctl[348549]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  2 09:34:38 np0005466031 nova_compute[235803]: 2025-10-02 13:34:38.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:38.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:38 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  2 09:34:38 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1102273128' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  2 09:34:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:34:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:39.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:34:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct  2 09:34:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2047886250' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct  2 09:34:39 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct  2 09:34:39 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1969284233' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct  2 09:34:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:40.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:41 np0005466031 nova_compute[235803]: 2025-10-02 13:34:41.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:41.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct  2 09:34:41 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/369431865' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  2 09:34:41 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct  2 09:34:41 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4127268521' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct  2 09:34:41 np0005466031 nova_compute[235803]: 2025-10-02 13:34:41.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct  2 09:34:42 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/594778116' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct  2 09:34:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:42.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct  2 09:34:42 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1645206274' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct  2 09:34:43 np0005466031 nova_compute[235803]: 2025-10-02 13:34:43.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:43.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct  2 09:34:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2475723777' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct  2 09:34:43 np0005466031 nova_compute[235803]: 2025-10-02 13:34:43.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:43 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct  2 09:34:43 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4235838842' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct  2 09:34:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:44.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:44 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct  2 09:34:44 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2540540668' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct  2 09:34:44 np0005466031 nova_compute[235803]: 2025-10-02 13:34:44.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:44 np0005466031 nova_compute[235803]: 2025-10-02 13:34:44.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:44 np0005466031 nova_compute[235803]: 2025-10-02 13:34:44.844 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:34:44 np0005466031 nova_compute[235803]: 2025-10-02 13:34:44.845 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:34:44 np0005466031 nova_compute[235803]: 2025-10-02 13:34:44.845 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:34:44 np0005466031 nova_compute[235803]: 2025-10-02 13:34:44.845 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:34:44 np0005466031 nova_compute[235803]: 2025-10-02 13:34:44.845 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:34:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:45.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:34:45 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3215334327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:34:45 np0005466031 nova_compute[235803]: 2025-10-02 13:34:45.305 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:34:45 np0005466031 nova_compute[235803]: 2025-10-02 13:34:45.469 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:34:45 np0005466031 nova_compute[235803]: 2025-10-02 13:34:45.470 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3964MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:34:45 np0005466031 nova_compute[235803]: 2025-10-02 13:34:45.470 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:34:45 np0005466031 nova_compute[235803]: 2025-10-02 13:34:45.471 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:34:45 np0005466031 nova_compute[235803]: 2025-10-02 13:34:45.602 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:34:45 np0005466031 nova_compute[235803]: 2025-10-02 13:34:45.603 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:34:45 np0005466031 nova_compute[235803]: 2025-10-02 13:34:45.629 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:34:45 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Oct  2 09:34:45 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2736487162' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct  2 09:34:46 np0005466031 nova_compute[235803]: 2025-10-02 13:34:46.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:46.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:34:46 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4129652381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:34:46 np0005466031 nova_compute[235803]: 2025-10-02 13:34:46.120 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:34:46 np0005466031 nova_compute[235803]: 2025-10-02 13:34:46.126 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:34:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Oct  2 09:34:46 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2347324164' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct  2 09:34:46 np0005466031 nova_compute[235803]: 2025-10-02 13:34:46.167 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:34:46 np0005466031 nova_compute[235803]: 2025-10-02 13:34:46.169 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:34:46 np0005466031 nova_compute[235803]: 2025-10-02 13:34:46.169 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:34:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:47.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct  2 09:34:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1946319729' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  2 09:34:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Oct  2 09:34:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2707107688' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct  2 09:34:48 np0005466031 nova_compute[235803]: 2025-10-02 13:34:48.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:34:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:48.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:34:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  2 09:34:48 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/229797035' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  2 09:34:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:49.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:49 np0005466031 virtqemud[235323]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  2 09:34:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:50.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:50 np0005466031 nova_compute[235803]: 2025-10-02 13:34:50.170 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:50 np0005466031 nova_compute[235803]: 2025-10-02 13:34:50.170 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:34:50 np0005466031 nova_compute[235803]: 2025-10-02 13:34:50.170 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:34:50 np0005466031 systemd[1]: Starting Time & Date Service...
Oct  2 09:34:50 np0005466031 nova_compute[235803]: 2025-10-02 13:34:50.210 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:34:50 np0005466031 systemd[1]: Started Time & Date Service.
Oct  2 09:34:50 np0005466031 nova_compute[235803]: 2025-10-02 13:34:50.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:50 np0005466031 nova_compute[235803]: 2025-10-02 13:34:50.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:50 np0005466031 nova_compute[235803]: 2025-10-02 13:34:50.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:34:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:34:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:51.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:34:51 np0005466031 nova_compute[235803]: 2025-10-02 13:34:51.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:52.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:53 np0005466031 nova_compute[235803]: 2025-10-02 13:34:53.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:34:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:53.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:34:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:54.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:55.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:56 np0005466031 nova_compute[235803]: 2025-10-02 13:34:56.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:56.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:57.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:58 np0005466031 nova_compute[235803]: 2025-10-02 13:34:58.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:58.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:34:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:34:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:59.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:34:59 np0005466031 nova_compute[235803]: 2025-10-02 13:34:59.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:00.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:00 np0005466031 podman[350586]: 2025-10-02 13:35:00.631993988 +0000 UTC m=+0.057548309 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct  2 09:35:00 np0005466031 podman[350587]: 2025-10-02 13:35:00.663916808 +0000 UTC m=+0.088772879 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:35:01 np0005466031 nova_compute[235803]: 2025-10-02 13:35:01.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:01.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:02.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:03 np0005466031 nova_compute[235803]: 2025-10-02 13:35:03.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:03.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:04.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:05.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:35:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4065200134' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:35:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:35:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4065200134' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:35:06 np0005466031 nova_compute[235803]: 2025-10-02 13:35:06.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:06.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:07.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:07 np0005466031 podman[350685]: 2025-10-02 13:35:07.217319643 +0000 UTC m=+0.061033830 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:35:07 np0005466031 podman[350684]: 2025-10-02 13:35:07.217512249 +0000 UTC m=+0.061133023 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:35:08 np0005466031 nova_compute[235803]: 2025-10-02 13:35:08.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:08.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:09.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:10.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:11 np0005466031 nova_compute[235803]: 2025-10-02 13:35:11.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:11.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:12.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:12 np0005466031 nova_compute[235803]: 2025-10-02 13:35:12.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:13 np0005466031 nova_compute[235803]: 2025-10-02 13:35:13.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:13.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:14.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:15.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:16 np0005466031 nova_compute[235803]: 2025-10-02 13:35:16.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:16.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:17.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:18 np0005466031 nova_compute[235803]: 2025-10-02 13:35:18.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:18.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:19.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:35:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:20.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:35:20 np0005466031 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 09:35:20 np0005466031 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 09:35:21 np0005466031 nova_compute[235803]: 2025-10-02 13:35:21.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:21.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:22.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:23 np0005466031 nova_compute[235803]: 2025-10-02 13:35:23.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:23.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:24.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:25.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:35:25.902 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:35:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:35:25.903 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:35:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:35:25.903 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:35:26 np0005466031 nova_compute[235803]: 2025-10-02 13:35:26.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:35:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:26.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:35:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:27.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:28 np0005466031 nova_compute[235803]: 2025-10-02 13:35:28.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:28.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:29.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:30.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:31 np0005466031 nova_compute[235803]: 2025-10-02 13:35:31.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:31.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:31 np0005466031 podman[350789]: 2025-10-02 13:35:31.653042685 +0000 UTC m=+0.063747318 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:35:31 np0005466031 podman[350790]: 2025-10-02 13:35:31.695456717 +0000 UTC m=+0.109655121 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:35:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:32.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:33 np0005466031 nova_compute[235803]: 2025-10-02 13:35:33.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:33.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:33 np0005466031 nova_compute[235803]: 2025-10-02 13:35:33.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:34.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:34 np0005466031 systemd[1]: session-71.scope: Deactivated successfully.
Oct  2 09:35:34 np0005466031 systemd[1]: session-71.scope: Consumed 2min 49.305s CPU time, 957.6M memory peak, read 405.6M from disk, written 304.2M to disk.
Oct  2 09:35:34 np0005466031 systemd-logind[786]: Session 71 logged out. Waiting for processes to exit.
Oct  2 09:35:34 np0005466031 systemd-logind[786]: Removed session 71.
Oct  2 09:35:34 np0005466031 systemd-logind[786]: New session 72 of user zuul.
Oct  2 09:35:34 np0005466031 systemd[1]: Started Session 72 of User zuul.
Oct  2 09:35:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:35.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:35 np0005466031 systemd[1]: session-72.scope: Deactivated successfully.
Oct  2 09:35:35 np0005466031 systemd-logind[786]: Session 72 logged out. Waiting for processes to exit.
Oct  2 09:35:35 np0005466031 systemd-logind[786]: Removed session 72.
Oct  2 09:35:35 np0005466031 systemd-logind[786]: New session 73 of user zuul.
Oct  2 09:35:35 np0005466031 systemd[1]: Started Session 73 of User zuul.
Oct  2 09:35:35 np0005466031 systemd[1]: session-73.scope: Deactivated successfully.
Oct  2 09:35:35 np0005466031 systemd-logind[786]: Session 73 logged out. Waiting for processes to exit.
Oct  2 09:35:35 np0005466031 systemd-logind[786]: Removed session 73.
Oct  2 09:35:36 np0005466031 nova_compute[235803]: 2025-10-02 13:35:36.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:35:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:36.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:35:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:37.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:37 np0005466031 podman[350897]: 2025-10-02 13:35:37.666647105 +0000 UTC m=+0.070239875 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct  2 09:35:37 np0005466031 podman[350898]: 2025-10-02 13:35:37.67964732 +0000 UTC m=+0.082784917 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct  2 09:35:38 np0005466031 nova_compute[235803]: 2025-10-02 13:35:38.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:38.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:39.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:40.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:41 np0005466031 nova_compute[235803]: 2025-10-02 13:35:41.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:41.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:42.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:43 np0005466031 nova_compute[235803]: 2025-10-02 13:35:43.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:35:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:35:43 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:35:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:43.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:43 np0005466031 nova_compute[235803]: 2025-10-02 13:35:43.639 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:44.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:44 np0005466031 nova_compute[235803]: 2025-10-02 13:35:44.632 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:45.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:45 np0005466031 nova_compute[235803]: 2025-10-02 13:35:45.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:45 np0005466031 nova_compute[235803]: 2025-10-02 13:35:45.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:45 np0005466031 nova_compute[235803]: 2025-10-02 13:35:45.684 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:35:45 np0005466031 nova_compute[235803]: 2025-10-02 13:35:45.684 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:35:45 np0005466031 nova_compute[235803]: 2025-10-02 13:35:45.684 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:35:45 np0005466031 nova_compute[235803]: 2025-10-02 13:35:45.685 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:35:45 np0005466031 nova_compute[235803]: 2025-10-02 13:35:45.686 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:35:46 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:35:46 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2360798426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:35:46 np0005466031 nova_compute[235803]: 2025-10-02 13:35:46.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:46.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:46 np0005466031 nova_compute[235803]: 2025-10-02 13:35:46.146 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:35:46 np0005466031 nova_compute[235803]: 2025-10-02 13:35:46.296 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:35:46 np0005466031 nova_compute[235803]: 2025-10-02 13:35:46.297 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4082MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:35:46 np0005466031 nova_compute[235803]: 2025-10-02 13:35:46.297 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:35:46 np0005466031 nova_compute[235803]: 2025-10-02 13:35:46.298 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:35:46 np0005466031 nova_compute[235803]: 2025-10-02 13:35:46.748 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:35:46 np0005466031 nova_compute[235803]: 2025-10-02 13:35:46.748 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:35:46 np0005466031 nova_compute[235803]: 2025-10-02 13:35:46.774 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:35:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:47.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:35:47 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/970104711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:35:47 np0005466031 nova_compute[235803]: 2025-10-02 13:35:47.205 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:35:47 np0005466031 nova_compute[235803]: 2025-10-02 13:35:47.213 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:35:47 np0005466031 nova_compute[235803]: 2025-10-02 13:35:47.295 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:35:47 np0005466031 nova_compute[235803]: 2025-10-02 13:35:47.297 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:35:47 np0005466031 nova_compute[235803]: 2025-10-02 13:35:47.297 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:35:48 np0005466031 nova_compute[235803]: 2025-10-02 13:35:48.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:48.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:49.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:50.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:50 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:35:50 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:35:50 np0005466031 nova_compute[235803]: 2025-10-02 13:35:50.297 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:50 np0005466031 nova_compute[235803]: 2025-10-02 13:35:50.298 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:35:50 np0005466031 nova_compute[235803]: 2025-10-02 13:35:50.298 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:35:50 np0005466031 nova_compute[235803]: 2025-10-02 13:35:50.330 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:35:51 np0005466031 nova_compute[235803]: 2025-10-02 13:35:51.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:51.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:51 np0005466031 nova_compute[235803]: 2025-10-02 13:35:51.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:51 np0005466031 nova_compute[235803]: 2025-10-02 13:35:51.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:51 np0005466031 nova_compute[235803]: 2025-10-02 13:35:51.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:35:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:52.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:53 np0005466031 nova_compute[235803]: 2025-10-02 13:35:53.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:53.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:54.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:35:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:55.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:35:56 np0005466031 nova_compute[235803]: 2025-10-02 13:35:56.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:56.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:57.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:58 np0005466031 nova_compute[235803]: 2025-10-02 13:35:58.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:58.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:35:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:59.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:59 np0005466031 nova_compute[235803]: 2025-10-02 13:35:59.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:00.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:01 np0005466031 nova_compute[235803]: 2025-10-02 13:36:01.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:01.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:02.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:02 np0005466031 podman[351224]: 2025-10-02 13:36:02.641510195 +0000 UTC m=+0.071782760 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:36:02 np0005466031 podman[351225]: 2025-10-02 13:36:02.665047893 +0000 UTC m=+0.088221923 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:36:03 np0005466031 nova_compute[235803]: 2025-10-02 13:36:03.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:36:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:03.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:36:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:04.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:05.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:06 np0005466031 nova_compute[235803]: 2025-10-02 13:36:06.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:06.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:36:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:07.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:36:08 np0005466031 nova_compute[235803]: 2025-10-02 13:36:08.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:08.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:36:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 18K writes, 91K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1547 writes, 7725 keys, 1547 commit groups, 1.0 writes per commit group, ingest: 16.14 MB, 0.03 MB/s#012Interval WAL: 1547 writes, 1547 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     62.7      1.78              0.32        59    0.030       0      0       0.0       0.0#012  L6      1/0   10.84 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4    111.5     95.7      6.37              1.67        58    0.110    457K    31K       0.0       0.0#012 Sum      1/0   10.84 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4     87.1     88.5      8.16              1.99       117    0.070    457K    31K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4    117.7    117.6      0.58              0.18        10    0.058     55K   2527       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    111.5     95.7      6.37              1.67        58    0.110    457K    31K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     62.8      1.78              0.32        58    0.031       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.109, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.70 GB write, 0.11 MB/s write, 0.69 GB read, 0.11 MB/s read, 8.2 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5570d4fad1f0#2 capacity: 304.00 MB usage: 78.59 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000556 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4901,75.33 MB,24.779%) FilterBlock(117,1.23 MB,0.404895%) IndexBlock(117,2.03 MB,0.66694%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 09:36:08 np0005466031 podman[351322]: 2025-10-02 13:36:08.625108171 +0000 UTC m=+0.051265448 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:36:08 np0005466031 podman[351321]: 2025-10-02 13:36:08.628282182 +0000 UTC m=+0.059000021 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:36:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:09.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:10.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:11 np0005466031 nova_compute[235803]: 2025-10-02 13:36:11.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:11.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:12.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:13 np0005466031 nova_compute[235803]: 2025-10-02 13:36:13.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:36:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:13.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:36:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:14.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:15.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:16 np0005466031 nova_compute[235803]: 2025-10-02 13:36:16.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:16.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:36:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:17.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:36:18 np0005466031 nova_compute[235803]: 2025-10-02 13:36:18.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:18.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:19.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:20.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:21 np0005466031 nova_compute[235803]: 2025-10-02 13:36:21.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:21.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:22.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:23 np0005466031 nova_compute[235803]: 2025-10-02 13:36:23.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:23.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:24 np0005466031 nova_compute[235803]: 2025-10-02 13:36:24.146 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:24.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:25.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:36:25.905 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:36:25.905 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:36:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:36:25.905 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:36:26 np0005466031 nova_compute[235803]: 2025-10-02 13:36:26.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:26.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:27.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:28 np0005466031 nova_compute[235803]: 2025-10-02 13:36:28.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:28.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:29.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:30.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:31 np0005466031 nova_compute[235803]: 2025-10-02 13:36:31.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:31.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:32.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:33 np0005466031 nova_compute[235803]: 2025-10-02 13:36:33.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:36:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:33.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:36:33 np0005466031 podman[351422]: 2025-10-02 13:36:33.621313573 +0000 UTC m=+0.053243055 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct  2 09:36:33 np0005466031 podman[351423]: 2025-10-02 13:36:33.651623656 +0000 UTC m=+0.079326256 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:36:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:34.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:34 np0005466031 nova_compute[235803]: 2025-10-02 13:36:34.639 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:36:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:35.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:36:36 np0005466031 nova_compute[235803]: 2025-10-02 13:36:36.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:36.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:37.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:38 np0005466031 nova_compute[235803]: 2025-10-02 13:36:38.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:36:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:38.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:36:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:39.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:39 np0005466031 podman[351467]: 2025-10-02 13:36:39.626195243 +0000 UTC m=+0.056799768 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:36:39 np0005466031 podman[351468]: 2025-10-02 13:36:39.649588637 +0000 UTC m=+0.070448101 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 09:36:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:36:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:40.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:36:41 np0005466031 nova_compute[235803]: 2025-10-02 13:36:41.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:41.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:42.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:36:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 76K writes, 309K keys, 76K commit groups, 1.0 writes per commit group, ingest: 0.31 GB, 0.05 MB/s#012Cumulative WAL: 76K writes, 28K syncs, 2.72 writes per sync, written: 0.31 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1617 writes, 4926 keys, 1617 commit groups, 1.0 writes per commit group, ingest: 3.44 MB, 0.01 MB/s#012Interval WAL: 1617 writes, 737 syncs, 2.19 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:36:43 np0005466031 nova_compute[235803]: 2025-10-02 13:36:43.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:43.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:43 np0005466031 nova_compute[235803]: 2025-10-02 13:36:43.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:44.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:44 np0005466031 nova_compute[235803]: 2025-10-02 13:36:44.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:45.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:36:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:46.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:36:46 np0005466031 nova_compute[235803]: 2025-10-02 13:36:46.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:46 np0005466031 nova_compute[235803]: 2025-10-02 13:36:46.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:47.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:47 np0005466031 nova_compute[235803]: 2025-10-02 13:36:47.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:47 np0005466031 nova_compute[235803]: 2025-10-02 13:36:47.671 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:36:47 np0005466031 nova_compute[235803]: 2025-10-02 13:36:47.671 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:36:47 np0005466031 nova_compute[235803]: 2025-10-02 13:36:47.671 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:36:47 np0005466031 nova_compute[235803]: 2025-10-02 13:36:47.672 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:36:47 np0005466031 nova_compute[235803]: 2025-10-02 13:36:47.672 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:48 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:36:48 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1800985970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.112 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:36:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:36:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:48.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.263 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.264 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4102MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.264 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.265 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.517 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.517 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.540 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.708 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.709 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.725 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.773 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:36:48 np0005466031 nova_compute[235803]: 2025-10-02 13:36:48.798 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:36:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:36:49 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3282365555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:36:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:49.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:49 np0005466031 nova_compute[235803]: 2025-10-02 13:36:49.263 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:36:49 np0005466031 nova_compute[235803]: 2025-10-02 13:36:49.268 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:36:49 np0005466031 nova_compute[235803]: 2025-10-02 13:36:49.295 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:36:49 np0005466031 nova_compute[235803]: 2025-10-02 13:36:49.297 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:36:49 np0005466031 nova_compute[235803]: 2025-10-02 13:36:49.297 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:36:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:36:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:50.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:36:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:36:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:36:51 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:36:51 np0005466031 nova_compute[235803]: 2025-10-02 13:36:51.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:36:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:51.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:36:51 np0005466031 nova_compute[235803]: 2025-10-02 13:36:51.297 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:51 np0005466031 nova_compute[235803]: 2025-10-02 13:36:51.298 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:36:51 np0005466031 nova_compute[235803]: 2025-10-02 13:36:51.298 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:36:51 np0005466031 nova_compute[235803]: 2025-10-02 13:36:51.408 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:36:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:36:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:52.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:36:52 np0005466031 nova_compute[235803]: 2025-10-02 13:36:52.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:52 np0005466031 nova_compute[235803]: 2025-10-02 13:36:52.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:36:53 np0005466031 nova_compute[235803]: 2025-10-02 13:36:53.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:53.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:53 np0005466031 nova_compute[235803]: 2025-10-02 13:36:53.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:53 np0005466031 nova_compute[235803]: 2025-10-02 13:36:53.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:54.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:55.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:56 np0005466031 nova_compute[235803]: 2025-10-02 13:36:56.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:56.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:57.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:36:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:36:58 np0005466031 nova_compute[235803]: 2025-10-02 13:36:58.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:36:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:58.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:36:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:36:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:59.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:00.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:00 np0005466031 nova_compute[235803]: 2025-10-02 13:37:00.694 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:01 np0005466031 nova_compute[235803]: 2025-10-02 13:37:01.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:01.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:02.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:03 np0005466031 nova_compute[235803]: 2025-10-02 13:37:03.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:03.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:04.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:04 np0005466031 podman[351794]: 2025-10-02 13:37:04.67460573 +0000 UTC m=+0.095323908 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:37:04 np0005466031 podman[351795]: 2025-10-02 13:37:04.678827072 +0000 UTC m=+0.094477144 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:37:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:37:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:05.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:37:06 np0005466031 nova_compute[235803]: 2025-10-02 13:37:06.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:37:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:06.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:37:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:07.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:08 np0005466031 nova_compute[235803]: 2025-10-02 13:37:08.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:08.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:08.949127) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412228949192, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1910, "num_deletes": 256, "total_data_size": 4371435, "memory_usage": 4452480, "flush_reason": "Manual Compaction"}
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412228963960, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 2863530, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90163, "largest_seqno": 92068, "table_properties": {"data_size": 2855298, "index_size": 4917, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18571, "raw_average_key_size": 20, "raw_value_size": 2838286, "raw_average_value_size": 3174, "num_data_blocks": 216, "num_entries": 894, "num_filter_entries": 894, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412069, "oldest_key_time": 1759412069, "file_creation_time": 1759412228, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 14995 microseconds, and 7506 cpu microseconds.
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:08.964114) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 2863530 bytes OK
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:08.964165) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:08.966031) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:08.966046) EVENT_LOG_v1 {"time_micros": 1759412228966041, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:08.966073) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 4362489, prev total WAL file size 4362489, number of live WAL files 2.
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:08.967677) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353230' seq:72057594037927935, type:22 .. '6C6F676D0033373733' seq:0, type:0; will stop at (end)
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(2796KB)], [186(10MB)]
Oct  2 09:37:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412228967728, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 14232280, "oldest_snapshot_seqno": -1}
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 11349 keys, 14109693 bytes, temperature: kUnknown
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412229032913, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 14109693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14037073, "index_size": 43131, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28421, "raw_key_size": 300236, "raw_average_key_size": 26, "raw_value_size": 13839575, "raw_average_value_size": 1219, "num_data_blocks": 1641, "num_entries": 11349, "num_filter_entries": 11349, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759412228, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:09.033293) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 14109693 bytes
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:09.034664) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.9 rd, 216.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.8 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(9.9) write-amplify(4.9) OK, records in: 11874, records dropped: 525 output_compression: NoCompression
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:09.034687) EVENT_LOG_v1 {"time_micros": 1759412229034676, "job": 120, "event": "compaction_finished", "compaction_time_micros": 65313, "compaction_time_cpu_micros": 32711, "output_level": 6, "num_output_files": 1, "total_output_size": 14109693, "num_input_records": 11874, "num_output_records": 11349, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412229035328, "job": 120, "event": "table_file_deletion", "file_number": 188}
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412229037800, "job": 120, "event": "table_file_deletion", "file_number": 186}
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:08.967592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:09.037867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:09.037874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:09.037876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:09.037878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:37:09 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:37:09.037879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:37:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:09.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:10.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:10 np0005466031 podman[351891]: 2025-10-02 13:37:10.627730818 +0000 UTC m=+0.049098446 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:37:10 np0005466031 podman[351890]: 2025-10-02 13:37:10.62988699 +0000 UTC m=+0.052573216 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:37:11 np0005466031 nova_compute[235803]: 2025-10-02 13:37:11.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:11.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:12.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:12 np0005466031 nova_compute[235803]: 2025-10-02 13:37:12.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:13 np0005466031 nova_compute[235803]: 2025-10-02 13:37:13.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:13.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:14.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:37:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:15.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:37:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:16.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:16 np0005466031 nova_compute[235803]: 2025-10-02 13:37:16.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:37:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:17.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:37:17 np0005466031 radosgw[82465]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct  2 09:37:18 np0005466031 nova_compute[235803]: 2025-10-02 13:37:18.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:18.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:37:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:19.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:37:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:37:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:20.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:37:21 np0005466031 nova_compute[235803]: 2025-10-02 13:37:21.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:21.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:22.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:23 np0005466031 nova_compute[235803]: 2025-10-02 13:37:23.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:23.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:24.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:25.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:37:25.907 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:37:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:37:25.908 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:37:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:37:25.908 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:37:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:26.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:26 np0005466031 nova_compute[235803]: 2025-10-02 13:37:26.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:37:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:27.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:37:28 np0005466031 nova_compute[235803]: 2025-10-02 13:37:28.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:37:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:28.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:37:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:37:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:29.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:37:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:30.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:31 np0005466031 nova_compute[235803]: 2025-10-02 13:37:31.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:31.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:37:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:32.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:37:33 np0005466031 nova_compute[235803]: 2025-10-02 13:37:33.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:33.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:37:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:34.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:37:34 np0005466031 nova_compute[235803]: 2025-10-02 13:37:34.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:37:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:35.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:37:35 np0005466031 podman[351989]: 2025-10-02 13:37:35.631232438 +0000 UTC m=+0.057240290 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:37:35 np0005466031 nova_compute[235803]: 2025-10-02 13:37:35.638 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:35 np0005466031 nova_compute[235803]: 2025-10-02 13:37:35.638 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:37:35 np0005466031 podman[351990]: 2025-10-02 13:37:35.664785985 +0000 UTC m=+0.087440670 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:37:35 np0005466031 nova_compute[235803]: 2025-10-02 13:37:35.726 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:37:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:37:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:36.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:37:36 np0005466031 nova_compute[235803]: 2025-10-02 13:37:36.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:37.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:38 np0005466031 nova_compute[235803]: 2025-10-02 13:37:38.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:38.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:39.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:40.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:41 np0005466031 nova_compute[235803]: 2025-10-02 13:37:41.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:41.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:41 np0005466031 podman[352034]: 2025-10-02 13:37:41.641354461 +0000 UTC m=+0.061863364 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:37:41 np0005466031 podman[352033]: 2025-10-02 13:37:41.647369504 +0000 UTC m=+0.070917604 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct  2 09:37:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:42.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:43 np0005466031 nova_compute[235803]: 2025-10-02 13:37:43.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:37:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:43.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:37:43 np0005466031 nova_compute[235803]: 2025-10-02 13:37:43.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:43 np0005466031 nova_compute[235803]: 2025-10-02 13:37:43.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:37:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:44.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:44 np0005466031 nova_compute[235803]: 2025-10-02 13:37:44.644 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:45.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:45 np0005466031 nova_compute[235803]: 2025-10-02 13:37:45.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:45 np0005466031 auditd[703]: Audit daemon rotating log files
Oct  2 09:37:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:46.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:46 np0005466031 nova_compute[235803]: 2025-10-02 13:37:46.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:47.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:48 np0005466031 nova_compute[235803]: 2025-10-02 13:37:48.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:37:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:48.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:37:48 np0005466031 nova_compute[235803]: 2025-10-02 13:37:48.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:48 np0005466031 nova_compute[235803]: 2025-10-02 13:37:48.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:48 np0005466031 nova_compute[235803]: 2025-10-02 13:37:48.658 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:37:48 np0005466031 nova_compute[235803]: 2025-10-02 13:37:48.658 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:37:48 np0005466031 nova_compute[235803]: 2025-10-02 13:37:48.659 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:37:48 np0005466031 nova_compute[235803]: 2025-10-02 13:37:48.659 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:37:48 np0005466031 nova_compute[235803]: 2025-10-02 13:37:48.659 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:37:49 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:37:49 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3332069202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:37:49 np0005466031 nova_compute[235803]: 2025-10-02 13:37:49.108 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:37:49 np0005466031 nova_compute[235803]: 2025-10-02 13:37:49.259 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:37:49 np0005466031 nova_compute[235803]: 2025-10-02 13:37:49.260 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4104MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:37:49 np0005466031 nova_compute[235803]: 2025-10-02 13:37:49.260 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:37:49 np0005466031 nova_compute[235803]: 2025-10-02 13:37:49.261 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:37:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:49.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:49 np0005466031 nova_compute[235803]: 2025-10-02 13:37:49.747 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:37:49 np0005466031 nova_compute[235803]: 2025-10-02 13:37:49.748 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:37:49 np0005466031 nova_compute[235803]: 2025-10-02 13:37:49.771 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:37:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:37:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1459108784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:37:50 np0005466031 nova_compute[235803]: 2025-10-02 13:37:50.248 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:37:50 np0005466031 nova_compute[235803]: 2025-10-02 13:37:50.254 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:37:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:50.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:50 np0005466031 nova_compute[235803]: 2025-10-02 13:37:50.275 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:37:50 np0005466031 nova_compute[235803]: 2025-10-02 13:37:50.276 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:37:50 np0005466031 nova_compute[235803]: 2025-10-02 13:37:50.277 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:37:51 np0005466031 nova_compute[235803]: 2025-10-02 13:37:51.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:51 np0005466031 nova_compute[235803]: 2025-10-02 13:37:51.278 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:51 np0005466031 nova_compute[235803]: 2025-10-02 13:37:51.279 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:37:51 np0005466031 nova_compute[235803]: 2025-10-02 13:37:51.279 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:37:51 np0005466031 nova_compute[235803]: 2025-10-02 13:37:51.301 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:37:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:51.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:52.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:53 np0005466031 nova_compute[235803]: 2025-10-02 13:37:53.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:53.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:54.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:54 np0005466031 nova_compute[235803]: 2025-10-02 13:37:54.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:54 np0005466031 nova_compute[235803]: 2025-10-02 13:37:54.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:37:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:55.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:55 np0005466031 nova_compute[235803]: 2025-10-02 13:37:55.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:56.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:56 np0005466031 nova_compute[235803]: 2025-10-02 13:37:56.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:37:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:57.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:37:57 np0005466031 podman[352345]: 2025-10-02 13:37:57.7354569 +0000 UTC m=+0.085569046 container exec b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 09:37:57 np0005466031 podman[352345]: 2025-10-02 13:37:57.8649038 +0000 UTC m=+0.215015946 container exec_died b4dc2d85fe294aa629054d0996042f201c3913d3f5c6d73e12731d1731a41d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 09:37:58 np0005466031 nova_compute[235803]: 2025-10-02 13:37:58.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:58.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:58 np0005466031 podman[352481]: 2025-10-02 13:37:58.556049966 +0000 UTC m=+0.068396602 container exec f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 09:37:58 np0005466031 podman[352481]: 2025-10-02 13:37:58.566670972 +0000 UTC m=+0.079017498 container exec_died f1d8ad42c88366c8a996982a027ee8302cfc4877821bb53a767c1be4d188762d (image=quay.io/ceph/haproxy:2.3, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-haproxy-rgw-default-compute-2-zptkij)
Oct  2 09:37:58 np0005466031 podman[352547]: 2025-10-02 13:37:58.82343026 +0000 UTC m=+0.070167292 container exec ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct  2 09:37:58 np0005466031 podman[352547]: 2025-10-02 13:37:58.837195137 +0000 UTC m=+0.083932109 container exec_died ec9c79ee50b94fd25d29a8f5acd978085501448c1953f7428f03057005bd298f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-keepalived-rgw-default-compute-2-emwnjv, distribution-scope=public, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, release=1793, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph)
Oct  2 09:37:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:37:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:59.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:00 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:38:00 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:38:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:00.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:38:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:38:01 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:38:01 np0005466031 nova_compute[235803]: 2025-10-02 13:38:01.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:01.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:01 np0005466031 nova_compute[235803]: 2025-10-02 13:38:01.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:02.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:03 np0005466031 nova_compute[235803]: 2025-10-02 13:38:03.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000057s ======
Oct  2 09:38:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:03.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Oct  2 09:38:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:04.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:38:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/535136869' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:38:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:38:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/535136869' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:38:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:05.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:05 np0005466031 podman[352754]: 2025-10-02 13:38:05.906834469 +0000 UTC m=+0.075565388 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, 
org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 09:38:05 np0005466031 podman[352755]: 2025-10-02 13:38:05.93358584 +0000 UTC m=+0.106694075 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  2 09:38:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:06.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:06 np0005466031 nova_compute[235803]: 2025-10-02 13:38:06.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:07.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:08 np0005466031 nova_compute[235803]: 2025-10-02 13:38:08.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:08.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.581393) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288581495, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 817, "num_deletes": 251, "total_data_size": 1572384, "memory_usage": 1592120, "flush_reason": "Manual Compaction"}
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288590664, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 1037761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92073, "largest_seqno": 92885, "table_properties": {"data_size": 1033885, "index_size": 1655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8768, "raw_average_key_size": 19, "raw_value_size": 1026112, "raw_average_value_size": 2295, "num_data_blocks": 73, "num_entries": 447, "num_filter_entries": 447, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412229, "oldest_key_time": 1759412229, "file_creation_time": 1759412288, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 9319 microseconds, and 5053 cpu microseconds.
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.590731) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 1037761 bytes OK
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.590760) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.592287) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.592302) EVENT_LOG_v1 {"time_micros": 1759412288592298, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.592328) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1568154, prev total WAL file size 1568154, number of live WAL files 2.
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.593079) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(1013KB)], [189(13MB)]
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288593120, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 15147454, "oldest_snapshot_seqno": -1}
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 11280 keys, 13208148 bytes, temperature: kUnknown
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288685093, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 13208148, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13136827, "index_size": 42024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 299483, "raw_average_key_size": 26, "raw_value_size": 12941261, "raw_average_value_size": 1147, "num_data_blocks": 1589, "num_entries": 11280, "num_filter_entries": 11280, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759412288, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.685458) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 13208148 bytes
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.686748) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.5 rd, 143.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.5 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(27.3) write-amplify(12.7) OK, records in: 11796, records dropped: 516 output_compression: NoCompression
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.686766) EVENT_LOG_v1 {"time_micros": 1759412288686757, "job": 122, "event": "compaction_finished", "compaction_time_micros": 92075, "compaction_time_cpu_micros": 31882, "output_level": 6, "num_output_files": 1, "total_output_size": 13208148, "num_input_records": 11796, "num_output_records": 11280, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288687053, "job": 122, "event": "table_file_deletion", "file_number": 191}
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288689875, "job": 122, "event": "table_file_deletion", "file_number": 189}
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.592973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.689938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.689944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.689946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.689948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:38:08 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:38:08.689950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:38:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:09.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:09 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:38:09 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:38:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:10.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:11 np0005466031 nova_compute[235803]: 2025-10-02 13:38:11.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:11.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:12.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:12 np0005466031 podman[352881]: 2025-10-02 13:38:12.672569915 +0000 UTC m=+0.092659301 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 09:38:12 np0005466031 podman[352882]: 2025-10-02 13:38:12.675885181 +0000 UTC m=+0.095987627 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:38:13 np0005466031 nova_compute[235803]: 2025-10-02 13:38:13.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:13.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:14 np0005466031 nova_compute[235803]: 2025-10-02 13:38:14.094 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:14.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:15.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:16.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:16 np0005466031 nova_compute[235803]: 2025-10-02 13:38:16.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:17.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:18 np0005466031 nova_compute[235803]: 2025-10-02 13:38:18.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:18.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:19.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:20.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:21 np0005466031 nova_compute[235803]: 2025-10-02 13:38:21.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:21.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:22.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:23 np0005466031 nova_compute[235803]: 2025-10-02 13:38:23.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:23.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:24.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:25.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:38:25.908 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:38:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:38:25.909 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:38:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:38:25.909 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:38:26 np0005466031 nova_compute[235803]: 2025-10-02 13:38:26.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:26.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:27.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:28 np0005466031 nova_compute[235803]: 2025-10-02 13:38:28.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:28.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:29.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:30.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:31 np0005466031 nova_compute[235803]: 2025-10-02 13:38:31.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:31.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:32.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:33 np0005466031 nova_compute[235803]: 2025-10-02 13:38:33.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:33.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:34.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:34 np0005466031 nova_compute[235803]: 2025-10-02 13:38:34.955 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:35.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:36 np0005466031 nova_compute[235803]: 2025-10-02 13:38:36.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:36.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:36 np0005466031 podman[352982]: 2025-10-02 13:38:36.698764076 +0000 UTC m=+0.127036162 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:38:36 np0005466031 podman[352983]: 2025-10-02 13:38:36.717500926 +0000 UTC m=+0.145213276 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:38:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:37.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:38 np0005466031 nova_compute[235803]: 2025-10-02 13:38:38.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:38.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:39.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:40.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:41 np0005466031 nova_compute[235803]: 2025-10-02 13:38:41.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:41.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:42.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:42 np0005466031 nova_compute[235803]: 2025-10-02 13:38:42.830 2 DEBUG oslo_concurrency.processutils [None req-f37077f4-0dab-40b0-9f70-91a74b690f75 c004f5628e4845ada3addf46ef5dfd33 c3a6b94d2b4945a487dafe07f533efd6 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:38:42 np0005466031 nova_compute[235803]: 2025-10-02 13:38:42.883 2 DEBUG oslo_concurrency.processutils [None req-f37077f4-0dab-40b0-9f70-91a74b690f75 c004f5628e4845ada3addf46ef5dfd33 c3a6b94d2b4945a487dafe07f533efd6 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:38:43 np0005466031 nova_compute[235803]: 2025-10-02 13:38:43.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:43.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:43 np0005466031 podman[353026]: 2025-10-02 13:38:43.618473097 +0000 UTC m=+0.049788566 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:38:43 np0005466031 podman[353025]: 2025-10-02 13:38:43.624573232 +0000 UTC m=+0.059490695 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible)
Oct  2 09:38:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:44.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:45.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:45 np0005466031 nova_compute[235803]: 2025-10-02 13:38:45.632 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:45 np0005466031 nova_compute[235803]: 2025-10-02 13:38:45.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:46 np0005466031 nova_compute[235803]: 2025-10-02 13:38:46.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:38:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:46.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:38:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:47.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:48 np0005466031 nova_compute[235803]: 2025-10-02 13:38:48.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:48.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:49.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:49 np0005466031 nova_compute[235803]: 2025-10-02 13:38:49.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:49 np0005466031 nova_compute[235803]: 2025-10-02 13:38:49.683 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:38:49 np0005466031 nova_compute[235803]: 2025-10-02 13:38:49.683 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:38:49 np0005466031 nova_compute[235803]: 2025-10-02 13:38:49.684 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:38:49 np0005466031 nova_compute[235803]: 2025-10-02 13:38:49.684 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:38:49 np0005466031 nova_compute[235803]: 2025-10-02 13:38:49.684 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:38:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:38:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4086869222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:38:50 np0005466031 nova_compute[235803]: 2025-10-02 13:38:50.126 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:38:50 np0005466031 nova_compute[235803]: 2025-10-02 13:38:50.274 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:38:50 np0005466031 nova_compute[235803]: 2025-10-02 13:38:50.275 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4112MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:38:50 np0005466031 nova_compute[235803]: 2025-10-02 13:38:50.275 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:38:50 np0005466031 nova_compute[235803]: 2025-10-02 13:38:50.276 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:38:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:50.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:50 np0005466031 nova_compute[235803]: 2025-10-02 13:38:50.371 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:38:50 np0005466031 nova_compute[235803]: 2025-10-02 13:38:50.372 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:38:50 np0005466031 nova_compute[235803]: 2025-10-02 13:38:50.395 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:38:50 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:38:50 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2539287375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:38:50 np0005466031 nova_compute[235803]: 2025-10-02 13:38:50.849 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:38:50 np0005466031 nova_compute[235803]: 2025-10-02 13:38:50.854 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:38:50 np0005466031 nova_compute[235803]: 2025-10-02 13:38:50.880 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:38:50 np0005466031 nova_compute[235803]: 2025-10-02 13:38:50.882 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:38:50 np0005466031 nova_compute[235803]: 2025-10-02 13:38:50.882 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:38:51 np0005466031 nova_compute[235803]: 2025-10-02 13:38:51.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:51.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:38:51.509 141898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:38:51 np0005466031 nova_compute[235803]: 2025-10-02 13:38:51.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:51 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:38:51.513 141898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:38:51 np0005466031 nova_compute[235803]: 2025-10-02 13:38:51.882 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:51 np0005466031 nova_compute[235803]: 2025-10-02 13:38:51.882 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:38:51 np0005466031 nova_compute[235803]: 2025-10-02 13:38:51.882 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:38:51 np0005466031 nova_compute[235803]: 2025-10-02 13:38:51.908 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:38:51 np0005466031 nova_compute[235803]: 2025-10-02 13:38:51.908 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:52.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:52 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:38:52.516 141898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b9588630-ee40-495c-89d2-4219f6b0f0b5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:38:53 np0005466031 nova_compute[235803]: 2025-10-02 13:38:53.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:53.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:54.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:55.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:55 np0005466031 nova_compute[235803]: 2025-10-02 13:38:55.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:56 np0005466031 nova_compute[235803]: 2025-10-02 13:38:56.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:56.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:56 np0005466031 nova_compute[235803]: 2025-10-02 13:38:56.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:56 np0005466031 nova_compute[235803]: 2025-10-02 13:38:56.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:38:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:57.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:58 np0005466031 nova_compute[235803]: 2025-10-02 13:38:58.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:58.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:38:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:59.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:00.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:01 np0005466031 nova_compute[235803]: 2025-10-02 13:39:01.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:01.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:39:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:02.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:39:03 np0005466031 nova_compute[235803]: 2025-10-02 13:39:03.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:39:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:03.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:39:03 np0005466031 nova_compute[235803]: 2025-10-02 13:39:03.638 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:39:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:04.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:39:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:05.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:06 np0005466031 nova_compute[235803]: 2025-10-02 13:39:06.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:06.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:07.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:07 np0005466031 podman[353222]: 2025-10-02 13:39:07.649230159 +0000 UTC m=+0.074114926 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 09:39:07 np0005466031 podman[353223]: 2025-10-02 13:39:07.722696646 +0000 UTC m=+0.143805285 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:39:08 np0005466031 nova_compute[235803]: 2025-10-02 13:39:08.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:39:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:08.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:39:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:09.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:10 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:39:10 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:39:10 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:39:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:10.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:11 np0005466031 nova_compute[235803]: 2025-10-02 13:39:11.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:39:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:11.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:39:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:12.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:13 np0005466031 nova_compute[235803]: 2025-10-02 13:39:13.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:39:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:13.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:39:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:14.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:14 np0005466031 podman[353404]: 2025-10-02 13:39:14.627308182 +0000 UTC m=+0.053615816 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:39:14 np0005466031 podman[353403]: 2025-10-02 13:39:14.656086681 +0000 UTC m=+0.084550427 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:39:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:15.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:16 np0005466031 nova_compute[235803]: 2025-10-02 13:39:16.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:16.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:17.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:17 np0005466031 nova_compute[235803]: 2025-10-02 13:39:17.633 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:18 np0005466031 nova_compute[235803]: 2025-10-02 13:39:18.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:39:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:39:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:39:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:18.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:39:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:19.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:20.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:21 np0005466031 nova_compute[235803]: 2025-10-02 13:39:21.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:21.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:22.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:23 np0005466031 nova_compute[235803]: 2025-10-02 13:39:23.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:23.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:24.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:25.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:39:25.909 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:39:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:39:25.910 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:39:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:39:25.910 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:39:26 np0005466031 nova_compute[235803]: 2025-10-02 13:39:26.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:26.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:27.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:28 np0005466031 nova_compute[235803]: 2025-10-02 13:39:28.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:39:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:28.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:39:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:29.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:30.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:31 np0005466031 nova_compute[235803]: 2025-10-02 13:39:31.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:39:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:31.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:39:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:32.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:33 np0005466031 nova_compute[235803]: 2025-10-02 13:39:33.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:33.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:34.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:35.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:36 np0005466031 nova_compute[235803]: 2025-10-02 13:39:36.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:39:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:36.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:39:36 np0005466031 nova_compute[235803]: 2025-10-02 13:39:36.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:37.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:38 np0005466031 nova_compute[235803]: 2025-10-02 13:39:38.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:39:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:38.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:39:38 np0005466031 podman[353551]: 2025-10-02 13:39:38.667751616 +0000 UTC m=+0.096655727 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:39:38 np0005466031 podman[353552]: 2025-10-02 13:39:38.681474201 +0000 UTC m=+0.096446130 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:39:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:39.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:39:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:40.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:39:41 np0005466031 nova_compute[235803]: 2025-10-02 13:39:41.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:41.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:42.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:43 np0005466031 nova_compute[235803]: 2025-10-02 13:39:43.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:43.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:44.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:45.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:45 np0005466031 podman[353601]: 2025-10-02 13:39:45.622467775 +0000 UTC m=+0.048766486 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 09:39:45 np0005466031 podman[353600]: 2025-10-02 13:39:45.625284946 +0000 UTC m=+0.055059707 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:39:45 np0005466031 nova_compute[235803]: 2025-10-02 13:39:45.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:46 np0005466031 nova_compute[235803]: 2025-10-02 13:39:46.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:46.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:46 np0005466031 nova_compute[235803]: 2025-10-02 13:39:46.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:47.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:48 np0005466031 nova_compute[235803]: 2025-10-02 13:39:48.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:48.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:39:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:49.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:39:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:50.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:50 np0005466031 nova_compute[235803]: 2025-10-02 13:39:50.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:50 np0005466031 nova_compute[235803]: 2025-10-02 13:39:50.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:50 np0005466031 nova_compute[235803]: 2025-10-02 13:39:50.660 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:39:50 np0005466031 nova_compute[235803]: 2025-10-02 13:39:50.661 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:39:50 np0005466031 nova_compute[235803]: 2025-10-02 13:39:50.661 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:39:50 np0005466031 nova_compute[235803]: 2025-10-02 13:39:50.661 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:39:50 np0005466031 nova_compute[235803]: 2025-10-02 13:39:50.661 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:39:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:39:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3052700554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:39:51 np0005466031 nova_compute[235803]: 2025-10-02 13:39:51.108 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:39:51 np0005466031 nova_compute[235803]: 2025-10-02 13:39:51.260 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:39:51 np0005466031 nova_compute[235803]: 2025-10-02 13:39:51.261 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4103MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:39:51 np0005466031 nova_compute[235803]: 2025-10-02 13:39:51.262 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:39:51 np0005466031 nova_compute[235803]: 2025-10-02 13:39:51.262 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:39:51 np0005466031 nova_compute[235803]: 2025-10-02 13:39:51.331 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:39:51 np0005466031 nova_compute[235803]: 2025-10-02 13:39:51.331 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:39:51 np0005466031 nova_compute[235803]: 2025-10-02 13:39:51.349 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:39:51 np0005466031 nova_compute[235803]: 2025-10-02 13:39:51.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:51.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:51 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:39:51 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/557114497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:39:51 np0005466031 nova_compute[235803]: 2025-10-02 13:39:51.819 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:39:51 np0005466031 nova_compute[235803]: 2025-10-02 13:39:51.825 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:39:52 np0005466031 nova_compute[235803]: 2025-10-02 13:39:52.009 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:39:52 np0005466031 nova_compute[235803]: 2025-10-02 13:39:52.011 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:39:52 np0005466031 nova_compute[235803]: 2025-10-02 13:39:52.011 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:39:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:52 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:39:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:52.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:52 np0005466031 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:39:53 np0005466031 nova_compute[235803]: 2025-10-02 13:39:53.012 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:53 np0005466031 nova_compute[235803]: 2025-10-02 13:39:53.013 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:39:53 np0005466031 nova_compute[235803]: 2025-10-02 13:39:53.013 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:39:53 np0005466031 nova_compute[235803]: 2025-10-02 13:39:53.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:53 np0005466031 nova_compute[235803]: 2025-10-02 13:39:53.431 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:39:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:53.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:54.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:55.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:56.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:56 np0005466031 nova_compute[235803]: 2025-10-02 13:39:56.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:56 np0005466031 nova_compute[235803]: 2025-10-02 13:39:56.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:56 np0005466031 nova_compute[235803]: 2025-10-02 13:39:56.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:56 np0005466031 nova_compute[235803]: 2025-10-02 13:39:56.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:39:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:57.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:58 np0005466031 nova_compute[235803]: 2025-10-02 13:39:58.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:58.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:39:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:39:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:59.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:40:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:40:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:00.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:40:00 np0005466031 ceph-mon[76340]: overall HEALTH_OK
Oct  2 09:40:01 np0005466031 nova_compute[235803]: 2025-10-02 13:40:01.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:01.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:02.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:03 np0005466031 nova_compute[235803]: 2025-10-02 13:40:03.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:40:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:03.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:40:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:04.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:04 np0005466031 nova_compute[235803]: 2025-10-02 13:40:04.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:05.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:06.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:06 np0005466031 nova_compute[235803]: 2025-10-02 13:40:06.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:07.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:08 np0005466031 nova_compute[235803]: 2025-10-02 13:40:08.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:08.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:09.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:09 np0005466031 podman[353796]: 2025-10-02 13:40:09.639478953 +0000 UTC m=+0.070440521 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 09:40:09 np0005466031 podman[353797]: 2025-10-02 13:40:09.675512031 +0000 UTC m=+0.103897455 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:40:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:10.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:11 np0005466031 nova_compute[235803]: 2025-10-02 13:40:11.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:11.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:12.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:13 np0005466031 nova_compute[235803]: 2025-10-02 13:40:13.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:13.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:14.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:15.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:16.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:16 np0005466031 nova_compute[235803]: 2025-10-02 13:40:16.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:16 np0005466031 podman[353846]: 2025-10-02 13:40:16.643384189 +0000 UTC m=+0.063671506 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct  2 09:40:16 np0005466031 podman[353845]: 2025-10-02 13:40:16.646811598 +0000 UTC m=+0.072693136 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:40:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:17.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:18 np0005466031 nova_compute[235803]: 2025-10-02 13:40:18.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:18.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:40:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:40:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:40:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:19.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:20.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:21 np0005466031 nova_compute[235803]: 2025-10-02 13:40:21.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:21.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:22.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:23 np0005466031 nova_compute[235803]: 2025-10-02 13:40:23.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:23.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:24.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:25.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:40:25.910 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:40:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:40:25.911 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:40:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:40:25.911 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:40:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:26.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:26 np0005466031 nova_compute[235803]: 2025-10-02 13:40:26.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:27.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:28 np0005466031 nova_compute[235803]: 2025-10-02 13:40:28.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:28.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:29.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:40:29 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:40:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:40:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:30.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:40:31 np0005466031 nova_compute[235803]: 2025-10-02 13:40:31.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:31.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:32.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:33 np0005466031 nova_compute[235803]: 2025-10-02 13:40:33.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:33.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:40:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:34.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:40:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:35.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:40:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:36.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:40:36 np0005466031 nova_compute[235803]: 2025-10-02 13:40:36.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:37.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:38 np0005466031 nova_compute[235803]: 2025-10-02 13:40:38.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:40:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:38.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:40:38 np0005466031 nova_compute[235803]: 2025-10-02 13:40:38.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:39.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:40.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:40 np0005466031 podman[354129]: 2025-10-02 13:40:40.616712227 +0000 UTC m=+0.051797324 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:40:40 np0005466031 podman[354130]: 2025-10-02 13:40:40.671892777 +0000 UTC m=+0.105366517 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:40:41 np0005466031 nova_compute[235803]: 2025-10-02 13:40:41.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:41.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:42.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:43 np0005466031 nova_compute[235803]: 2025-10-02 13:40:43.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:40:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:43.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:40:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:44.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:45.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:46 np0005466031 nova_compute[235803]: 2025-10-02 13:40:46.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:46.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:47 np0005466031 podman[354197]: 2025-10-02 13:40:47.081296404 +0000 UTC m=+0.071366278 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:40:47 np0005466031 podman[354198]: 2025-10-02 13:40:47.106727596 +0000 UTC m=+0.081315114 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:40:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:47 np0005466031 nova_compute[235803]: 2025-10-02 13:40:47.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:47 np0005466031 nova_compute[235803]: 2025-10-02 13:40:47.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:47.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:48 np0005466031 nova_compute[235803]: 2025-10-02 13:40:48.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:40:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:48.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:40:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:40:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:49.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:40:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:50.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:50 np0005466031 nova_compute[235803]: 2025-10-02 13:40:50.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:51 np0005466031 nova_compute[235803]: 2025-10-02 13:40:51.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:51 np0005466031 nova_compute[235803]: 2025-10-02 13:40:51.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:51.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:51 np0005466031 nova_compute[235803]: 2025-10-02 13:40:51.686 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:40:51 np0005466031 nova_compute[235803]: 2025-10-02 13:40:51.687 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:40:51 np0005466031 nova_compute[235803]: 2025-10-02 13:40:51.687 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:40:51 np0005466031 nova_compute[235803]: 2025-10-02 13:40:51.687 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:40:51 np0005466031 nova_compute[235803]: 2025-10-02 13:40:51.688 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:40:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:40:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4087168575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:40:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:52 np0005466031 nova_compute[235803]: 2025-10-02 13:40:52.121 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:40:52 np0005466031 nova_compute[235803]: 2025-10-02 13:40:52.293 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:40:52 np0005466031 nova_compute[235803]: 2025-10-02 13:40:52.294 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4103MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:40:52 np0005466031 nova_compute[235803]: 2025-10-02 13:40:52.294 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:40:52 np0005466031 nova_compute[235803]: 2025-10-02 13:40:52.294 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:40:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:52.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:52 np0005466031 nova_compute[235803]: 2025-10-02 13:40:52.514 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:40:52 np0005466031 nova_compute[235803]: 2025-10-02 13:40:52.514 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:40:52 np0005466031 nova_compute[235803]: 2025-10-02 13:40:52.551 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:40:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:40:52 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/814707070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:40:52 np0005466031 nova_compute[235803]: 2025-10-02 13:40:52.962 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:40:52 np0005466031 nova_compute[235803]: 2025-10-02 13:40:52.967 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:40:53 np0005466031 nova_compute[235803]: 2025-10-02 13:40:53.016 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:40:53 np0005466031 nova_compute[235803]: 2025-10-02 13:40:53.018 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:40:53 np0005466031 nova_compute[235803]: 2025-10-02 13:40:53.018 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:40:53 np0005466031 nova_compute[235803]: 2025-10-02 13:40:53.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:53.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:54.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:55 np0005466031 nova_compute[235803]: 2025-10-02 13:40:55.018 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:55 np0005466031 nova_compute[235803]: 2025-10-02 13:40:55.019 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:40:55 np0005466031 nova_compute[235803]: 2025-10-02 13:40:55.019 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:40:55 np0005466031 nova_compute[235803]: 2025-10-02 13:40:55.039 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:40:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:55.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:56 np0005466031 nova_compute[235803]: 2025-10-02 13:40:56.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:56.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:57 np0005466031 nova_compute[235803]: 2025-10-02 13:40:57.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:57.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:58 np0005466031 nova_compute[235803]: 2025-10-02 13:40:58.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:40:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:58.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:40:58 np0005466031 nova_compute[235803]: 2025-10-02 13:40:58.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:58 np0005466031 nova_compute[235803]: 2025-10-02 13:40:58.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:40:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:40:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:40:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:59.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:41:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:00.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:01 np0005466031 nova_compute[235803]: 2025-10-02 13:41:01.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:01.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:02.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:03 np0005466031 nova_compute[235803]: 2025-10-02 13:41:03.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:03.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:41:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:04.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:41:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:05.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:06.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:06 np0005466031 nova_compute[235803]: 2025-10-02 13:41:06.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:06 np0005466031 nova_compute[235803]: 2025-10-02 13:41:06.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:07.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:08 np0005466031 nova_compute[235803]: 2025-10-02 13:41:08.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:08.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:09.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:41:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:10.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:41:11 np0005466031 nova_compute[235803]: 2025-10-02 13:41:11.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:11 np0005466031 podman[354366]: 2025-10-02 13:41:11.656394912 +0000 UTC m=+0.081603713 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:41:11 np0005466031 podman[354367]: 2025-10-02 13:41:11.658490672 +0000 UTC m=+0.081925101 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:41:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:11.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:41:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:12.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:41:13 np0005466031 nova_compute[235803]: 2025-10-02 13:41:13.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:13.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:14.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:15.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:16.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:16 np0005466031 nova_compute[235803]: 2025-10-02 13:41:16.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:17 np0005466031 podman[354415]: 2025-10-02 13:41:17.618704385 +0000 UTC m=+0.052908776 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 09:41:17 np0005466031 podman[354416]: 2025-10-02 13:41:17.618717625 +0000 UTC m=+0.048024005 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 09:41:17 np0005466031 nova_compute[235803]: 2025-10-02 13:41:17.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:17.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:18 np0005466031 nova_compute[235803]: 2025-10-02 13:41:18.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:18.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:19.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:41:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:20.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:41:21 np0005466031 nova_compute[235803]: 2025-10-02 13:41:21.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:21.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:22.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:23 np0005466031 nova_compute[235803]: 2025-10-02 13:41:23.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:23.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:41:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:24.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:41:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:41:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:25.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:41:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:41:25.912 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:41:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:41:25.913 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:41:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:41:25.913 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:41:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:26.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:26 np0005466031 nova_compute[235803]: 2025-10-02 13:41:26.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:41:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:27.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:41:28 np0005466031 nova_compute[235803]: 2025-10-02 13:41:28.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:41:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:28.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:41:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:29.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:30 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:41:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:30.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:41:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:41:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:31 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:41:31 np0005466031 nova_compute[235803]: 2025-10-02 13:41:31.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:31.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:32.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:33 np0005466031 nova_compute[235803]: 2025-10-02 13:41:33.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:33.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:34.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:41:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:35.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:41:36 np0005466031 nova_compute[235803]: 2025-10-02 13:41:36.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:36.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:37.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:37 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:37 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:38 np0005466031 nova_compute[235803]: 2025-10-02 13:41:38.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:38.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:38 np0005466031 nova_compute[235803]: 2025-10-02 13:41:38.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:39.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:40.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:41 np0005466031 nova_compute[235803]: 2025-10-02 13:41:41.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:41.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:41:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:42.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:41:42 np0005466031 podman[354699]: 2025-10-02 13:41:42.620620506 +0000 UTC m=+0.055774469 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:41:42 np0005466031 podman[354700]: 2025-10-02 13:41:42.666493957 +0000 UTC m=+0.093725301 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:41:43 np0005466031 nova_compute[235803]: 2025-10-02 13:41:43.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:43.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:44.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:45.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:46 np0005466031 nova_compute[235803]: 2025-10-02 13:41:46.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:46.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:47.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:48 np0005466031 nova_compute[235803]: 2025-10-02 13:41:48.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:48.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:48 np0005466031 podman[354795]: 2025-10-02 13:41:48.621383776 +0000 UTC m=+0.053367579 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 09:41:48 np0005466031 nova_compute[235803]: 2025-10-02 13:41:48.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:48 np0005466031 podman[354796]: 2025-10-02 13:41:48.654421468 +0000 UTC m=+0.081697285 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:41:49 np0005466031 nova_compute[235803]: 2025-10-02 13:41:49.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:41:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:49.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:41:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:50.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:50 np0005466031 nova_compute[235803]: 2025-10-02 13:41:50.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:51 np0005466031 nova_compute[235803]: 2025-10-02 13:41:51.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:41:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:51.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:41:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:52.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:53 np0005466031 nova_compute[235803]: 2025-10-02 13:41:53.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:53 np0005466031 nova_compute[235803]: 2025-10-02 13:41:53.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:53 np0005466031 nova_compute[235803]: 2025-10-02 13:41:53.668 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:41:53 np0005466031 nova_compute[235803]: 2025-10-02 13:41:53.669 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:41:53 np0005466031 nova_compute[235803]: 2025-10-02 13:41:53.669 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:41:53 np0005466031 nova_compute[235803]: 2025-10-02 13:41:53.670 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:41:53 np0005466031 nova_compute[235803]: 2025-10-02 13:41:53.670 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:41:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:41:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:53.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:41:54 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:41:54 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2271838928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:41:54 np0005466031 nova_compute[235803]: 2025-10-02 13:41:54.121 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:41:54 np0005466031 nova_compute[235803]: 2025-10-02 13:41:54.314 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:41:54 np0005466031 nova_compute[235803]: 2025-10-02 13:41:54.315 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4115MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:41:54 np0005466031 nova_compute[235803]: 2025-10-02 13:41:54.315 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:41:54 np0005466031 nova_compute[235803]: 2025-10-02 13:41:54.316 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:41:54 np0005466031 nova_compute[235803]: 2025-10-02 13:41:54.504 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:41:54 np0005466031 nova_compute[235803]: 2025-10-02 13:41:54.504 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:41:54 np0005466031 nova_compute[235803]: 2025-10-02 13:41:54.517 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:41:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:54.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:54 np0005466031 nova_compute[235803]: 2025-10-02 13:41:54.629 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:41:54 np0005466031 nova_compute[235803]: 2025-10-02 13:41:54.629 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:41:54 np0005466031 nova_compute[235803]: 2025-10-02 13:41:54.640 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:41:54 np0005466031 nova_compute[235803]: 2025-10-02 13:41:54.663 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:41:54 np0005466031 nova_compute[235803]: 2025-10-02 13:41:54.682 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/123264677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.089359) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515089397, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2342, "num_deletes": 251, "total_data_size": 5941691, "memory_usage": 6022632, "flush_reason": "Manual Compaction"}
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Oct  2 09:41:55 np0005466031 nova_compute[235803]: 2025-10-02 13:41:55.103 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:41:55 np0005466031 nova_compute[235803]: 2025-10-02 13:41:55.110 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515115498, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 3880001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92890, "largest_seqno": 95227, "table_properties": {"data_size": 3870469, "index_size": 6089, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18979, "raw_average_key_size": 20, "raw_value_size": 3851624, "raw_average_value_size": 4101, "num_data_blocks": 267, "num_entries": 939, "num_filter_entries": 939, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412288, "oldest_key_time": 1759412288, "file_creation_time": 1759412515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 26213 microseconds, and 14920 cpu microseconds.
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.115571) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 3880001 bytes OK
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.115592) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.116923) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.116939) EVENT_LOG_v1 {"time_micros": 1759412515116934, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.116957) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 5931478, prev total WAL file size 5931478, number of live WAL files 2.
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.118465) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(3789KB)], [192(12MB)]
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515118508, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 17088149, "oldest_snapshot_seqno": -1}
Oct  2 09:41:55 np0005466031 nova_compute[235803]: 2025-10-02 13:41:55.127 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:41:55 np0005466031 nova_compute[235803]: 2025-10-02 13:41:55.130 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:41:55 np0005466031 nova_compute[235803]: 2025-10-02 13:41:55.130 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 11702 keys, 15047503 bytes, temperature: kUnknown
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515193667, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 15047503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14971956, "index_size": 45162, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29317, "raw_key_size": 308892, "raw_average_key_size": 26, "raw_value_size": 14767533, "raw_average_value_size": 1261, "num_data_blocks": 1719, "num_entries": 11702, "num_filter_entries": 11702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759412515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.193954) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 15047503 bytes
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.195067) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 227.1 rd, 200.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.6 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(8.3) write-amplify(3.9) OK, records in: 12219, records dropped: 517 output_compression: NoCompression
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.195096) EVENT_LOG_v1 {"time_micros": 1759412515195082, "job": 124, "event": "compaction_finished", "compaction_time_micros": 75245, "compaction_time_cpu_micros": 49833, "output_level": 6, "num_output_files": 1, "total_output_size": 15047503, "num_input_records": 12219, "num_output_records": 11702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515196683, "job": 124, "event": "table_file_deletion", "file_number": 194}
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515201634, "job": 124, "event": "table_file_deletion", "file_number": 192}
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.118346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.201685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.201689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.201690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.201692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:41:55 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:41:55.201693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:41:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:55.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:56 np0005466031 nova_compute[235803]: 2025-10-02 13:41:56.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:56.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:56 np0005466031 nova_compute[235803]: 2025-10-02 13:41:56.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:56 np0005466031 nova_compute[235803]: 2025-10-02 13:41:56.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:41:56 np0005466031 nova_compute[235803]: 2025-10-02 13:41:56.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:41:56 np0005466031 nova_compute[235803]: 2025-10-02 13:41:56.649 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:41:56 np0005466031 nova_compute[235803]: 2025-10-02 13:41:56.649 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:57 np0005466031 nova_compute[235803]: 2025-10-02 13:41:57.663 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:57.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:58 np0005466031 nova_compute[235803]: 2025-10-02 13:41:58.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:41:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:58.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:41:58 np0005466031 nova_compute[235803]: 2025-10-02 13:41:58.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:58 np0005466031 nova_compute[235803]: 2025-10-02 13:41:58.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:41:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:41:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:59.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:42:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:00.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:42:01 np0005466031 nova_compute[235803]: 2025-10-02 13:42:01.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:01.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:02.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:03 np0005466031 nova_compute[235803]: 2025-10-02 13:42:03.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:42:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:03.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:42:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:04.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:05.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:06 np0005466031 nova_compute[235803]: 2025-10-02 13:42:06.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:06.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:06 np0005466031 nova_compute[235803]: 2025-10-02 13:42:06.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:07.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:08 np0005466031 nova_compute[235803]: 2025-10-02 13:42:08.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:42:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:08.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:42:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:09.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:42:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:10.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:42:11 np0005466031 nova_compute[235803]: 2025-10-02 13:42:11.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:42:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:11.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:42:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:42:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:12.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:42:13 np0005466031 nova_compute[235803]: 2025-10-02 13:42:13.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:13 np0005466031 podman[354942]: 2025-10-02 13:42:13.615215332 +0000 UTC m=+0.050309861 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  2 09:42:13 np0005466031 podman[354943]: 2025-10-02 13:42:13.643483066 +0000 UTC m=+0.075449895 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller)
Oct  2 09:42:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:13.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:14.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:15.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:16 np0005466031 nova_compute[235803]: 2025-10-02 13:42:16.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:16.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:17.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:18 np0005466031 nova_compute[235803]: 2025-10-02 13:42:18.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:18.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.107840) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539107878, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 459, "num_deletes": 250, "total_data_size": 613617, "memory_usage": 622880, "flush_reason": "Manual Compaction"}
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539118819, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 312887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95232, "largest_seqno": 95686, "table_properties": {"data_size": 310473, "index_size": 513, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6550, "raw_average_key_size": 20, "raw_value_size": 305594, "raw_average_value_size": 952, "num_data_blocks": 23, "num_entries": 321, "num_filter_entries": 321, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412516, "oldest_key_time": 1759412516, "file_creation_time": 1759412539, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 11023 microseconds, and 1796 cpu microseconds.
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.118864) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 312887 bytes OK
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.118884) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.133014) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.133045) EVENT_LOG_v1 {"time_micros": 1759412539133036, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.133070) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 610790, prev total WAL file size 610790, number of live WAL files 2.
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.133656) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323539' seq:72057594037927935, type:22 .. '6D6772737461740033353130' seq:0, type:0; will stop at (end)
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(305KB)], [195(14MB)]
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539133682, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 15360390, "oldest_snapshot_seqno": -1}
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 11520 keys, 11628515 bytes, temperature: kUnknown
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539194261, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 11628515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11558779, "index_size": 39863, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28805, "raw_key_size": 305318, "raw_average_key_size": 26, "raw_value_size": 11362022, "raw_average_value_size": 986, "num_data_blocks": 1497, "num_entries": 11520, "num_filter_entries": 11520, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759412539, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.194485) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 11628515 bytes
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.195833) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 253.3 rd, 191.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(86.3) write-amplify(37.2) OK, records in: 12023, records dropped: 503 output_compression: NoCompression
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.195848) EVENT_LOG_v1 {"time_micros": 1759412539195841, "job": 126, "event": "compaction_finished", "compaction_time_micros": 60639, "compaction_time_cpu_micros": 38534, "output_level": 6, "num_output_files": 1, "total_output_size": 11628515, "num_input_records": 12023, "num_output_records": 11520, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539195972, "job": 126, "event": "table_file_deletion", "file_number": 197}
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539198159, "job": 126, "event": "table_file_deletion", "file_number": 195}
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.133591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.198182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.198185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.198187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.198188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:42:19 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:42:19.198189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:42:19 np0005466031 podman[354987]: 2025-10-02 13:42:19.616404406 +0000 UTC m=+0.049825427 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 09:42:19 np0005466031 podman[354988]: 2025-10-02 13:42:19.616472968 +0000 UTC m=+0.045697148 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 09:42:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:19.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:42:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:20.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:42:21 np0005466031 nova_compute[235803]: 2025-10-02 13:42:21.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:21.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:22.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:23 np0005466031 nova_compute[235803]: 2025-10-02 13:42:23.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:42:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:23.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:42:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:24.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:42:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:25.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:42:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:42:25.914 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:42:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:42:25.914 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:42:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:42:25.914 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:42:26 np0005466031 nova_compute[235803]: 2025-10-02 13:42:26.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:26.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:27.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:28 np0005466031 nova_compute[235803]: 2025-10-02 13:42:28.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:28.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:29.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:30.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:31 np0005466031 nova_compute[235803]: 2025-10-02 13:42:31.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:42:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:31.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:42:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:32.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:33 np0005466031 nova_compute[235803]: 2025-10-02 13:42:33.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:42:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:33.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:42:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:42:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:34.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:42:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:35.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:36 np0005466031 nova_compute[235803]: 2025-10-02 13:42:36.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:36.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:37.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:38 np0005466031 nova_compute[235803]: 2025-10-02 13:42:38.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:38.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 09:42:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 09:42:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:42:39 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:42:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:39.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:42:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:40.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:42:40 np0005466031 nova_compute[235803]: 2025-10-02 13:42:40.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 09:42:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:42:41 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:42:41 np0005466031 nova_compute[235803]: 2025-10-02 13:42:41.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:41.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:42 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:42:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:42:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:42.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:42:43 np0005466031 nova_compute[235803]: 2025-10-02 13:42:43.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:43.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:44.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:44 np0005466031 podman[355344]: 2025-10-02 13:42:44.640994914 +0000 UTC m=+0.067030882 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:42:44 np0005466031 podman[355345]: 2025-10-02 13:42:44.70088162 +0000 UTC m=+0.114906632 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:42:45 np0005466031 nova_compute[235803]: 2025-10-02 13:42:45.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:45 np0005466031 nova_compute[235803]: 2025-10-02 13:42:45.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:42:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:45.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:46 np0005466031 nova_compute[235803]: 2025-10-02 13:42:46.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:46.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:47.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:48 np0005466031 nova_compute[235803]: 2025-10-02 13:42:48.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:42:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:48.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:42:48 np0005466031 nova_compute[235803]: 2025-10-02 13:42:48.791 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:48 np0005466031 nova_compute[235803]: 2025-10-02 13:42:48.791 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:48 np0005466031 nova_compute[235803]: 2025-10-02 13:42:48.791 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:42:48 np0005466031 nova_compute[235803]: 2025-10-02 13:42:48.811 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:42:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:49.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:50.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:50 np0005466031 podman[355444]: 2025-10-02 13:42:50.636739151 +0000 UTC m=+0.059373112 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:42:50 np0005466031 podman[355443]: 2025-10-02 13:42:50.644759182 +0000 UTC m=+0.071641806 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:42:51 np0005466031 nova_compute[235803]: 2025-10-02 13:42:51.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:51 np0005466031 nova_compute[235803]: 2025-10-02 13:42:51.651 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:51.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:52.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:52 np0005466031 nova_compute[235803]: 2025-10-02 13:42:52.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:53 np0005466031 nova_compute[235803]: 2025-10-02 13:42:53.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:53.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:54.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:54 np0005466031 nova_compute[235803]: 2025-10-02 13:42:54.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:54 np0005466031 nova_compute[235803]: 2025-10-02 13:42:54.735 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:42:54 np0005466031 nova_compute[235803]: 2025-10-02 13:42:54.735 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:42:54 np0005466031 nova_compute[235803]: 2025-10-02 13:42:54.736 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:42:54 np0005466031 nova_compute[235803]: 2025-10-02 13:42:54.736 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:42:54 np0005466031 nova_compute[235803]: 2025-10-02 13:42:54.737 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:42:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:42:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:42:55 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:42:55 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/281986870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:42:55 np0005466031 nova_compute[235803]: 2025-10-02 13:42:55.199 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:42:55 np0005466031 nova_compute[235803]: 2025-10-02 13:42:55.354 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:42:55 np0005466031 nova_compute[235803]: 2025-10-02 13:42:55.355 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4126MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:42:55 np0005466031 nova_compute[235803]: 2025-10-02 13:42:55.355 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:42:55 np0005466031 nova_compute[235803]: 2025-10-02 13:42:55.355 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:42:55 np0005466031 nova_compute[235803]: 2025-10-02 13:42:55.611 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:42:55 np0005466031 nova_compute[235803]: 2025-10-02 13:42:55.612 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:42:55 np0005466031 nova_compute[235803]: 2025-10-02 13:42:55.633 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:42:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:55.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:56 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:42:56 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/805743753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:42:56 np0005466031 nova_compute[235803]: 2025-10-02 13:42:56.108 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:42:56 np0005466031 nova_compute[235803]: 2025-10-02 13:42:56.115 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:42:56 np0005466031 nova_compute[235803]: 2025-10-02 13:42:56.172 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:42:56 np0005466031 nova_compute[235803]: 2025-10-02 13:42:56.175 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:42:56 np0005466031 nova_compute[235803]: 2025-10-02 13:42:56.176 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:42:56 np0005466031 nova_compute[235803]: 2025-10-02 13:42:56.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:56.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:42:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:57.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:42:58 np0005466031 nova_compute[235803]: 2025-10-02 13:42:58.177 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:58 np0005466031 nova_compute[235803]: 2025-10-02 13:42:58.177 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:42:58 np0005466031 nova_compute[235803]: 2025-10-02 13:42:58.178 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:42:58 np0005466031 nova_compute[235803]: 2025-10-02 13:42:58.203 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:42:58 np0005466031 nova_compute[235803]: 2025-10-02 13:42:58.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:58.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:58 np0005466031 nova_compute[235803]: 2025-10-02 13:42:58.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:58 np0005466031 nova_compute[235803]: 2025-10-02 13:42:58.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:42:59 np0005466031 nova_compute[235803]: 2025-10-02 13:42:59.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:42:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:59.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:00.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:01 np0005466031 nova_compute[235803]: 2025-10-02 13:43:01.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:01.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:02.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:03 np0005466031 nova_compute[235803]: 2025-10-02 13:43:03.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:03.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:04.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:05.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:06 np0005466031 nova_compute[235803]: 2025-10-02 13:43:06.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:06 np0005466031 nova_compute[235803]: 2025-10-02 13:43:06.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:06.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:07.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:08 np0005466031 nova_compute[235803]: 2025-10-02 13:43:08.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:08.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:09.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:10.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:11 np0005466031 nova_compute[235803]: 2025-10-02 13:43:11.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:11.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:12.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:13 np0005466031 nova_compute[235803]: 2025-10-02 13:43:13.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:13 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:13 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:13 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:13.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:43:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:14.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:43:15 np0005466031 podman[355640]: 2025-10-02 13:43:15.608148788 +0000 UTC m=+0.043413292 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:43:15 np0005466031 podman[355641]: 2025-10-02 13:43:15.665415728 +0000 UTC m=+0.092582179 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:43:15 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:15 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:15 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:15.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:16 np0005466031 nova_compute[235803]: 2025-10-02 13:43:16.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:43:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:16.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:43:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:17 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:17 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:17 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:17.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:18 np0005466031 nova_compute[235803]: 2025-10-02 13:43:18.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:18.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:19 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:19 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:19 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:19.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:20 np0005466031 nova_compute[235803]: 2025-10-02 13:43:20.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:43:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:20.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:43:21 np0005466031 podman[355690]: 2025-10-02 13:43:21.638053839 +0000 UTC m=+0.058569588 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 09:43:21 np0005466031 podman[355689]: 2025-10-02 13:43:21.637976277 +0000 UTC m=+0.064310754 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:43:21 np0005466031 nova_compute[235803]: 2025-10-02 13:43:21.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:21 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:21 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:21 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:21.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:43:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:22.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:43:23 np0005466031 nova_compute[235803]: 2025-10-02 13:43:23.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:23 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:23 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:23 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:23.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:24.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:25 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:25 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:25 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:25.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:43:25.915 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:43:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:43:25.915 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:43:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:43:25.915 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:43:26 np0005466031 nova_compute[235803]: 2025-10-02 13:43:26.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:26.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:27 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:27 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:27 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:27.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:28 np0005466031 nova_compute[235803]: 2025-10-02 13:43:28.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:28.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:29 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:29 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:29 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:29.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:30.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:31 np0005466031 nova_compute[235803]: 2025-10-02 13:43:31.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:31 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:31 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:31 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:31.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:32.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:33 np0005466031 nova_compute[235803]: 2025-10-02 13:43:33.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:33 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:33 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:33 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:33.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:43:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:34.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:43:35 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:35 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:35 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:35.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:36 np0005466031 nova_compute[235803]: 2025-10-02 13:43:36.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:43:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:36.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:43:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:37 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:37 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:37 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:37.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:38.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:38 np0005466031 nova_compute[235803]: 2025-10-02 13:43:38.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:39 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:39 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:39 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:39.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:40.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:41 np0005466031 nova_compute[235803]: 2025-10-02 13:43:41.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:41 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:41 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:41 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:41.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:42 np0005466031 nova_compute[235803]: 2025-10-02 13:43:42.638 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:42.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:43 np0005466031 nova_compute[235803]: 2025-10-02 13:43:43.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:43 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:43 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:43 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:43.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:44.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:45 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:45 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:45 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:45.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:46 np0005466031 nova_compute[235803]: 2025-10-02 13:43:46.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:46 np0005466031 podman[355795]: 2025-10-02 13:43:46.672123846 +0000 UTC m=+0.089482759 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:43:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:46.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:46 np0005466031 podman[355796]: 2025-10-02 13:43:46.705026414 +0000 UTC m=+0.105629064 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:43:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:47 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:47 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:47 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:47.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:48.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:48 np0005466031 nova_compute[235803]: 2025-10-02 13:43:48.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:49 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:49 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:49 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:49.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:50 np0005466031 nova_compute[235803]: 2025-10-02 13:43:50.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:50.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:51 np0005466031 nova_compute[235803]: 2025-10-02 13:43:51.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:51 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:51 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:51 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:51.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:52 np0005466031 podman[355892]: 2025-10-02 13:43:52.620514669 +0000 UTC m=+0.047921091 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:43:52 np0005466031 podman[355893]: 2025-10-02 13:43:52.623530076 +0000 UTC m=+0.049222929 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:43:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:52.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:53 np0005466031 nova_compute[235803]: 2025-10-02 13:43:53.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:53 np0005466031 nova_compute[235803]: 2025-10-02 13:43:53.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:53 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:53 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:43:53 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:53.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:43:54 np0005466031 nova_compute[235803]: 2025-10-02 13:43:54.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:54.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:55 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:43:55 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:55 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:55 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:55.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:56 np0005466031 nova_compute[235803]: 2025-10-02 13:43:56.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:56 np0005466031 nova_compute[235803]: 2025-10-02 13:43:56.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:43:56 np0005466031 nova_compute[235803]: 2025-10-02 13:43:56.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:43:56 np0005466031 nova_compute[235803]: 2025-10-02 13:43:56.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:56 np0005466031 nova_compute[235803]: 2025-10-02 13:43:56.676 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:43:56 np0005466031 nova_compute[235803]: 2025-10-02 13:43:56.677 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:56.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:56 np0005466031 nova_compute[235803]: 2025-10-02 13:43:56.719 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:43:56 np0005466031 nova_compute[235803]: 2025-10-02 13:43:56.720 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:43:56 np0005466031 nova_compute[235803]: 2025-10-02 13:43:56.720 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:43:56 np0005466031 nova_compute[235803]: 2025-10-02 13:43:56.721 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:43:56 np0005466031 nova_compute[235803]: 2025-10-02 13:43:56.721 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:43:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:43:57 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:43:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:43:57 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/86670586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:43:57 np0005466031 nova_compute[235803]: 2025-10-02 13:43:57.132 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:43:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:57 np0005466031 nova_compute[235803]: 2025-10-02 13:43:57.327 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:43:57 np0005466031 nova_compute[235803]: 2025-10-02 13:43:57.329 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4089MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:43:57 np0005466031 nova_compute[235803]: 2025-10-02 13:43:57.330 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:43:57 np0005466031 nova_compute[235803]: 2025-10-02 13:43:57.330 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:43:57 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:57 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:43:57 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:57.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:43:58 np0005466031 nova_compute[235803]: 2025-10-02 13:43:58.322 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:43:58 np0005466031 nova_compute[235803]: 2025-10-02 13:43:58.322 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:43:58 np0005466031 nova_compute[235803]: 2025-10-02 13:43:58.350 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:43:58 np0005466031 nova_compute[235803]: 2025-10-02 13:43:58.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:58.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:58 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:43:58 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2452559841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:43:58 np0005466031 nova_compute[235803]: 2025-10-02 13:43:58.805 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:43:58 np0005466031 nova_compute[235803]: 2025-10-02 13:43:58.812 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:43:58 np0005466031 nova_compute[235803]: 2025-10-02 13:43:58.838 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:43:58 np0005466031 nova_compute[235803]: 2025-10-02 13:43:58.841 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:43:58 np0005466031 nova_compute[235803]: 2025-10-02 13:43:58.842 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:43:59 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:43:59 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:59 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:59.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:00.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:01 np0005466031 nova_compute[235803]: 2025-10-02 13:44:01.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:01 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:01 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:01 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:01.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:02.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:02 np0005466031 nova_compute[235803]: 2025-10-02 13:44:02.802 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:02 np0005466031 nova_compute[235803]: 2025-10-02 13:44:02.802 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:02 np0005466031 nova_compute[235803]: 2025-10-02 13:44:02.803 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:44:03 np0005466031 nova_compute[235803]: 2025-10-02 13:44:03.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:03 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:03 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:03 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:03.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:04.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:05 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:05 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:44:05 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:05.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:44:06 np0005466031 nova_compute[235803]: 2025-10-02 13:44:06.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:06.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:07 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:07 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:07 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:07.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:44:08 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:44:08 np0005466031 nova_compute[235803]: 2025-10-02 13:44:08.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:08 np0005466031 nova_compute[235803]: 2025-10-02 13:44:08.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:08.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:09 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:09 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:09 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:09.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:44:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:10.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:44:11 np0005466031 nova_compute[235803]: 2025-10-02 13:44:11.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:11 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:11 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:11 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:11.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:12.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:13 np0005466031 nova_compute[235803]: 2025-10-02 13:44:13.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:44:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:13.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:44:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:44:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:14.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:44:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:16.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:16 np0005466031 nova_compute[235803]: 2025-10-02 13:44:16.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:44:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:16.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:44:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:17 np0005466031 podman[356218]: 2025-10-02 13:44:17.648432437 +0000 UTC m=+0.064450268 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:44:17 np0005466031 podman[356219]: 2025-10-02 13:44:17.671512232 +0000 UTC m=+0.090570301 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:44:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:18.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:18 np0005466031 nova_compute[235803]: 2025-10-02 13:44:18.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:44:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:18.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:44:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:20.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:44:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:20.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:44:21 np0005466031 nova_compute[235803]: 2025-10-02 13:44:21.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:22.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:44:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:22.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:44:23 np0005466031 podman[356266]: 2025-10-02 13:44:23.652990047 +0000 UTC m=+0.064773978 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:44:23 np0005466031 podman[356267]: 2025-10-02 13:44:23.653360157 +0000 UTC m=+0.067714552 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:44:23 np0005466031 nova_compute[235803]: 2025-10-02 13:44:23.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:24.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:44:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:24.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:44:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:44:25.916 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:44:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:44:25.917 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:44:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:44:25.917 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:44:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:26.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:26 np0005466031 nova_compute[235803]: 2025-10-02 13:44:26.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:26.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:28.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:28.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:28 np0005466031 nova_compute[235803]: 2025-10-02 13:44:28.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:30.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:30.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:31 np0005466031 nova_compute[235803]: 2025-10-02 13:44:31.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:32.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:32.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:33 np0005466031 nova_compute[235803]: 2025-10-02 13:44:33.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:34.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:34.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:36.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:36 np0005466031 nova_compute[235803]: 2025-10-02 13:44:36.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:36.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:38.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:38.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:38 np0005466031 nova_compute[235803]: 2025-10-02 13:44:38.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:40.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:44:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:40.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:44:41 np0005466031 nova_compute[235803]: 2025-10-02 13:44:41.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:42.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:42.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:43 np0005466031 nova_compute[235803]: 2025-10-02 13:44:43.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:44.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:44 np0005466031 nova_compute[235803]: 2025-10-02 13:44:44.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:44.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:46.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:46 np0005466031 nova_compute[235803]: 2025-10-02 13:44:46.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:44:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:46.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:44:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:48.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:48 np0005466031 podman[356367]: 2025-10-02 13:44:48.6383926 +0000 UTC m=+0.066551899 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:44:48 np0005466031 podman[356368]: 2025-10-02 13:44:48.708316205 +0000 UTC m=+0.132873050 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:44:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:48.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:48 np0005466031 nova_compute[235803]: 2025-10-02 13:44:48.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.158769) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689158805, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1642, "num_deletes": 255, "total_data_size": 3912432, "memory_usage": 3979184, "flush_reason": "Manual Compaction"}
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689174510, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 2572239, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95692, "largest_seqno": 97328, "table_properties": {"data_size": 2565284, "index_size": 4025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14215, "raw_average_key_size": 19, "raw_value_size": 2551404, "raw_average_value_size": 3543, "num_data_blocks": 177, "num_entries": 720, "num_filter_entries": 720, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412539, "oldest_key_time": 1759412539, "file_creation_time": 1759412689, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 15828 microseconds, and 7222 cpu microseconds.
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.174597) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 2572239 bytes OK
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.174617) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.177506) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.177519) EVENT_LOG_v1 {"time_micros": 1759412689177514, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.177534) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 3904943, prev total WAL file size 3904943, number of live WAL files 2.
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.178403) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373732' seq:72057594037927935, type:22 .. '6C6F676D0034303233' seq:0, type:0; will stop at (end)
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(2511KB)], [198(11MB)]
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689178445, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 14200754, "oldest_snapshot_seqno": -1}
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 11711 keys, 14066520 bytes, temperature: kUnknown
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689238464, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 14066520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13992795, "index_size": 43340, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29317, "raw_key_size": 310183, "raw_average_key_size": 26, "raw_value_size": 13790033, "raw_average_value_size": 1177, "num_data_blocks": 1645, "num_entries": 11711, "num_filter_entries": 11711, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759412689, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.238962) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 14066520 bytes
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.240380) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.8 rd, 233.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.1 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(11.0) write-amplify(5.5) OK, records in: 12240, records dropped: 529 output_compression: NoCompression
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.240419) EVENT_LOG_v1 {"time_micros": 1759412689240402, "job": 128, "event": "compaction_finished", "compaction_time_micros": 60222, "compaction_time_cpu_micros": 32728, "output_level": 6, "num_output_files": 1, "total_output_size": 14066520, "num_input_records": 12240, "num_output_records": 11711, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689241505, "job": 128, "event": "table_file_deletion", "file_number": 200}
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689246429, "job": 128, "event": "table_file_deletion", "file_number": 198}
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.178340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.246572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.246578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.246581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.246583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:44:49 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:44:49.246585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:44:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:50.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:44:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:50.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:44:51 np0005466031 nova_compute[235803]: 2025-10-02 13:44:51.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:52.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:52 np0005466031 nova_compute[235803]: 2025-10-02 13:44:52.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:44:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:52.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:44:53 np0005466031 nova_compute[235803]: 2025-10-02 13:44:53.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:54.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:54 np0005466031 nova_compute[235803]: 2025-10-02 13:44:54.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:54 np0005466031 podman[356464]: 2025-10-02 13:44:54.651175079 +0000 UTC m=+0.070751320 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:44:54 np0005466031 podman[356463]: 2025-10-02 13:44:54.682819711 +0000 UTC m=+0.096582924 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 09:44:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:54.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:55 np0005466031 nova_compute[235803]: 2025-10-02 13:44:55.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:56.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:56 np0005466031 nova_compute[235803]: 2025-10-02 13:44:56.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:56 np0005466031 nova_compute[235803]: 2025-10-02 13:44:56.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:44:56 np0005466031 nova_compute[235803]: 2025-10-02 13:44:56.637 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:44:56 np0005466031 nova_compute[235803]: 2025-10-02 13:44:56.662 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:44:56 np0005466031 nova_compute[235803]: 2025-10-02 13:44:56.662 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:56 np0005466031 nova_compute[235803]: 2025-10-02 13:44:56.694 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:44:56 np0005466031 nova_compute[235803]: 2025-10-02 13:44:56.694 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:44:56 np0005466031 nova_compute[235803]: 2025-10-02 13:44:56.694 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:44:56 np0005466031 nova_compute[235803]: 2025-10-02 13:44:56.694 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:44:56 np0005466031 nova_compute[235803]: 2025-10-02 13:44:56.695 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:44:56 np0005466031 nova_compute[235803]: 2025-10-02 13:44:56.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:56.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:44:57 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/290588798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:44:57 np0005466031 nova_compute[235803]: 2025-10-02 13:44:57.179 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:44:57 np0005466031 nova_compute[235803]: 2025-10-02 13:44:57.397 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:44:57 np0005466031 nova_compute[235803]: 2025-10-02 13:44:57.398 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4115MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:44:57 np0005466031 nova_compute[235803]: 2025-10-02 13:44:57.398 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:44:57 np0005466031 nova_compute[235803]: 2025-10-02 13:44:57.398 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:44:57 np0005466031 nova_compute[235803]: 2025-10-02 13:44:57.501 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:44:57 np0005466031 nova_compute[235803]: 2025-10-02 13:44:57.501 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:44:57 np0005466031 nova_compute[235803]: 2025-10-02 13:44:57.554 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:44:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:44:57 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3055510341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:44:57 np0005466031 nova_compute[235803]: 2025-10-02 13:44:57.974 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:44:57 np0005466031 nova_compute[235803]: 2025-10-02 13:44:57.981 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:44:58 np0005466031 nova_compute[235803]: 2025-10-02 13:44:58.019 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:44:58 np0005466031 nova_compute[235803]: 2025-10-02 13:44:58.022 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:44:58 np0005466031 nova_compute[235803]: 2025-10-02 13:44:58.023 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:44:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:58.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:44:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:58.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:58 np0005466031 nova_compute[235803]: 2025-10-02 13:44:58.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:45:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:00.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:45:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:45:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:00.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:45:01 np0005466031 nova_compute[235803]: 2025-10-02 13:45:01.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:02.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:45:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:02.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:45:02 np0005466031 nova_compute[235803]: 2025-10-02 13:45:02.998 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:02 np0005466031 nova_compute[235803]: 2025-10-02 13:45:02.998 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:02 np0005466031 nova_compute[235803]: 2025-10-02 13:45:02.998 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:45:03 np0005466031 nova_compute[235803]: 2025-10-02 13:45:03.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:04.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:45:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:04.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:45:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:06.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:06 np0005466031 nova_compute[235803]: 2025-10-02 13:45:06.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:06.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:08.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:08 np0005466031 nova_compute[235803]: 2025-10-02 13:45:08.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:08.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:08 np0005466031 nova_compute[235803]: 2025-10-02 13:45:08.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:10.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:10 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:10 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:10 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:10 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:10.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:11 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:45:11 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:11 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:45:11 np0005466031 nova_compute[235803]: 2025-10-02 13:45:11.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:12.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:12.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:13 np0005466031 nova_compute[235803]: 2025-10-02 13:45:13.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:45:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:14.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:45:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:14.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:16.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:16.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:16 np0005466031 nova_compute[235803]: 2025-10-02 13:45:16.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:45:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:18.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:45:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:18 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:18.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:18 np0005466031 nova_compute[235803]: 2025-10-02 13:45:18.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:19 np0005466031 podman[356789]: 2025-10-02 13:45:19.627396194 +0000 UTC m=+0.057583441 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:45:19 np0005466031 podman[356790]: 2025-10-02 13:45:19.687126475 +0000 UTC m=+0.108042504 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 09:45:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:45:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:20.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:45:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:20.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:21 np0005466031 nova_compute[235803]: 2025-10-02 13:45:21.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:22.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:22 np0005466031 nova_compute[235803]: 2025-10-02 13:45:22.631 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:22.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:23 np0005466031 nova_compute[235803]: 2025-10-02 13:45:23.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:24.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:45:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:24.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:45:25 np0005466031 podman[356836]: 2025-10-02 13:45:25.625489697 +0000 UTC m=+0.053119121 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:45:25 np0005466031 podman[356837]: 2025-10-02 13:45:25.627632019 +0000 UTC m=+0.050657321 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:45:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:45:25.917 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:45:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:45:25.917 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:45:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:45:25.917 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:45:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:26.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:26.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:26 np0005466031 nova_compute[235803]: 2025-10-02 13:45:26.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:28.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:28.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:28 np0005466031 nova_compute[235803]: 2025-10-02 13:45:28.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:30.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:45:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:30.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:45:31 np0005466031 nova_compute[235803]: 2025-10-02 13:45:31.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:32.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:32.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:33 np0005466031 nova_compute[235803]: 2025-10-02 13:45:33.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:34.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:34.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.485332) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735485374, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 753, "num_deletes": 251, "total_data_size": 1401101, "memory_usage": 1430120, "flush_reason": "Manual Compaction"}
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735492453, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 913877, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 97333, "largest_seqno": 98081, "table_properties": {"data_size": 910235, "index_size": 1485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8415, "raw_average_key_size": 19, "raw_value_size": 902881, "raw_average_value_size": 2104, "num_data_blocks": 65, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412690, "oldest_key_time": 1759412690, "file_creation_time": 1759412735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 7162 microseconds, and 3302 cpu microseconds.
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.492496) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 913877 bytes OK
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.492515) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.494051) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.494064) EVENT_LOG_v1 {"time_micros": 1759412735494059, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.494082) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1397103, prev total WAL file size 1397103, number of live WAL files 2.
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.494667) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(892KB)], [201(13MB)]
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735494712, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 14980397, "oldest_snapshot_seqno": -1}
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 11623 keys, 12971078 bytes, temperature: kUnknown
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735549222, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 12971078, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12899049, "index_size": 41836, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29125, "raw_key_size": 309028, "raw_average_key_size": 26, "raw_value_size": 12698848, "raw_average_value_size": 1092, "num_data_blocks": 1573, "num_entries": 11623, "num_filter_entries": 11623, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405568, "oldest_key_time": 0, "file_creation_time": 1759412735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93a4d4e4-8268-411e-81f1-7a8fce5e679b", "db_session_id": "KJCGTWCK9W49B5CVXGTD", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.549501) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 12971078 bytes
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.550654) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 274.4 rd, 237.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.4 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(30.6) write-amplify(14.2) OK, records in: 12140, records dropped: 517 output_compression: NoCompression
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.550675) EVENT_LOG_v1 {"time_micros": 1759412735550666, "job": 130, "event": "compaction_finished", "compaction_time_micros": 54593, "compaction_time_cpu_micros": 31271, "output_level": 6, "num_output_files": 1, "total_output_size": 12971078, "num_input_records": 12140, "num_output_records": 11623, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735550956, "job": 130, "event": "table_file_deletion", "file_number": 203}
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735553307, "job": 130, "event": "table_file_deletion", "file_number": 201}
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.494596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.553434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.553442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.553445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.553449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:45:35 np0005466031 ceph-mon[76340]: rocksdb: (Original Log Time 2025/10/02-13:45:35.553452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:45:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:36.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:36.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:36 np0005466031 nova_compute[235803]: 2025-10-02 13:45:36.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:38.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:45:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:38.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:45:38 np0005466031 nova_compute[235803]: 2025-10-02 13:45:38.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:40.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:40.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:41 np0005466031 nova_compute[235803]: 2025-10-02 13:45:41.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:42.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:42.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:43 np0005466031 nova_compute[235803]: 2025-10-02 13:45:43.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:44.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:44.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:46.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:46 np0005466031 nova_compute[235803]: 2025-10-02 13:45:46.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:46.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:46 np0005466031 nova_compute[235803]: 2025-10-02 13:45:46.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:45:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:48.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:45:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:45:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:48.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:45:48 np0005466031 nova_compute[235803]: 2025-10-02 13:45:48.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:50.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:50 np0005466031 podman[356991]: 2025-10-02 13:45:50.642530081 +0000 UTC m=+0.071284875 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:45:50 np0005466031 podman[356992]: 2025-10-02 13:45:50.657325057 +0000 UTC m=+0.080328666 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:45:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:50.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:51 np0005466031 nova_compute[235803]: 2025-10-02 13:45:51.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:52.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:52.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:54 np0005466031 nova_compute[235803]: 2025-10-02 13:45:54.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:54.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:54 np0005466031 nova_compute[235803]: 2025-10-02 13:45:54.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:54 np0005466031 nova_compute[235803]: 2025-10-02 13:45:54.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:45:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:54.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:45:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:45:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:56.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:45:56 np0005466031 podman[357039]: 2025-10-02 13:45:56.666430896 +0000 UTC m=+0.082321393 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:45:56 np0005466031 podman[357040]: 2025-10-02 13:45:56.666702334 +0000 UTC m=+0.081831569 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct  2 09:45:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:56.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:56 np0005466031 nova_compute[235803]: 2025-10-02 13:45:56.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:57 np0005466031 nova_compute[235803]: 2025-10-02 13:45:57.632 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:57 np0005466031 nova_compute[235803]: 2025-10-02 13:45:57.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:57 np0005466031 nova_compute[235803]: 2025-10-02 13:45:57.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:45:57 np0005466031 nova_compute[235803]: 2025-10-02 13:45:57.636 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:45:57 np0005466031 nova_compute[235803]: 2025-10-02 13:45:57.909 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:45:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:58.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:58 np0005466031 nova_compute[235803]: 2025-10-02 13:45:58.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:58 np0005466031 nova_compute[235803]: 2025-10-02 13:45:58.666 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:45:58 np0005466031 nova_compute[235803]: 2025-10-02 13:45:58.666 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:45:58 np0005466031 nova_compute[235803]: 2025-10-02 13:45:58.666 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:45:58 np0005466031 nova_compute[235803]: 2025-10-02 13:45:58.666 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:45:58 np0005466031 nova_compute[235803]: 2025-10-02 13:45:58.667 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:45:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:45:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:58.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:59 np0005466031 nova_compute[235803]: 2025-10-02 13:45:59.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:59 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:45:59 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4274614728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:45:59 np0005466031 nova_compute[235803]: 2025-10-02 13:45:59.157 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:45:59 np0005466031 nova_compute[235803]: 2025-10-02 13:45:59.398 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:45:59 np0005466031 nova_compute[235803]: 2025-10-02 13:45:59.401 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4097MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:45:59 np0005466031 nova_compute[235803]: 2025-10-02 13:45:59.402 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:45:59 np0005466031 nova_compute[235803]: 2025-10-02 13:45:59.403 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:45:59 np0005466031 nova_compute[235803]: 2025-10-02 13:45:59.660 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:45:59 np0005466031 nova_compute[235803]: 2025-10-02 13:45:59.661 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:45:59 np0005466031 nova_compute[235803]: 2025-10-02 13:45:59.809 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:46:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:46:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:00.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:46:00 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:46:00 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2203131589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:46:00 np0005466031 nova_compute[235803]: 2025-10-02 13:46:00.251 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:46:00 np0005466031 nova_compute[235803]: 2025-10-02 13:46:00.259 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:46:00 np0005466031 nova_compute[235803]: 2025-10-02 13:46:00.412 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:46:00 np0005466031 nova_compute[235803]: 2025-10-02 13:46:00.415 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:46:00 np0005466031 nova_compute[235803]: 2025-10-02 13:46:00.415 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:46:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:00.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:01 np0005466031 nova_compute[235803]: 2025-10-02 13:46:01.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:46:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:02.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:46:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:02.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:03 np0005466031 nova_compute[235803]: 2025-10-02 13:46:03.417 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:03 np0005466031 nova_compute[235803]: 2025-10-02 13:46:03.418 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:03 np0005466031 nova_compute[235803]: 2025-10-02 13:46:03.418 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:46:04 np0005466031 nova_compute[235803]: 2025-10-02 13:46:04.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:04.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:04.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:06.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:46:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:06.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:46:06 np0005466031 nova_compute[235803]: 2025-10-02 13:46:06.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:08.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:46:08 np0005466031 ceph-mon[76340]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.0 total, 600.0 interval#012Cumulative writes: 19K writes, 98K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s#012Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.20 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1440 writes, 7137 keys, 1440 commit groups, 1.0 writes per commit group, ingest: 15.02 MB, 0.03 MB/s#012Interval WAL: 1440 writes, 1440 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     65.8      1.87              0.36        65    0.029       0      0       0.0       0.0#012  L6      1/0   12.37 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.6    117.6    101.3      6.78              1.89        64    0.106    529K    34K       0.0       0.0#012 Sum      1/0   12.37 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.6     92.2     93.6      8.65              2.25       129    0.067    529K    34K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.0    176.2    179.3      0.49              0.26        12    0.041     72K   3107       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    117.6    101.3      6.78              1.89        64    0.106    529K    34K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     65.9      1.87              0.36        64    0.029       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7200.0 total, 600.0 interval#012Flush(GB): cumulative 0.120, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.79 GB write, 0.11 MB/s write, 0.78 GB read, 0.11 MB/s read, 8.7 seconds#012Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5570d4fad1f0#2 capacity: 304.00 MB usage: 87.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000685 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(5389,83.33 MB,27.4102%) FilterBlock(129,1.43 MB,0.469905%) IndexBlock(129,2.32 MB,0.763231%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 09:46:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:08.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:09 np0005466031 nova_compute[235803]: 2025-10-02 13:46:09.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:09 np0005466031 nova_compute[235803]: 2025-10-02 13:46:09.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:10.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:46:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:10.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:46:11 np0005466031 nova_compute[235803]: 2025-10-02 13:46:11.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:12.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:46:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:12.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:46:14 np0005466031 nova_compute[235803]: 2025-10-02 13:46:14.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:14.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:14.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:46:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:16.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:46:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:46:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:16.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:46:16 np0005466031 nova_compute[235803]: 2025-10-02 13:46:16.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:18.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:18.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:19 np0005466031 nova_compute[235803]: 2025-10-02 13:46:19.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:46:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:46:19 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:46:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:20.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:20.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:21 np0005466031 podman[357318]: 2025-10-02 13:46:21.646276107 +0000 UTC m=+0.068911957 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:46:21 np0005466031 podman[357319]: 2025-10-02 13:46:21.709447427 +0000 UTC m=+0.115456148 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 09:46:21 np0005466031 nova_compute[235803]: 2025-10-02 13:46:21.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:22.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:22.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:24 np0005466031 nova_compute[235803]: 2025-10-02 13:46:24.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:24.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:24 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:24 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:24 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:24.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:46:25.917 141898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:46:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:46:25.918 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:46:25 np0005466031 ovn_metadata_agent[141893]: 2025-10-02 13:46:25.919 141898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:46:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:46:26 np0005466031 ceph-mon[76340]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:46:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:26.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:26 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:26 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:46:26 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:26.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:46:26 np0005466031 nova_compute[235803]: 2025-10-02 13:46:26.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:27 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:27 np0005466031 podman[357416]: 2025-10-02 13:46:27.641299404 +0000 UTC m=+0.063467980 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct  2 09:46:27 np0005466031 podman[357415]: 2025-10-02 13:46:27.666221112 +0000 UTC m=+0.087209304 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:46:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:46:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:28.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:46:28 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:28 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:46:28 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:28.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:46:29 np0005466031 nova_compute[235803]: 2025-10-02 13:46:29.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:30.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:30 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:30 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:30 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:30.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:31 np0005466031 nova_compute[235803]: 2025-10-02 13:46:31.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:32 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:32.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:32 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:32 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:46:32 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:32.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:46:34 np0005466031 nova_compute[235803]: 2025-10-02 13:46:34.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:46:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:34.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:46:34 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:34 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:34 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:34.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:36.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:36 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:36 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:36 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:36.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:36 np0005466031 nova_compute[235803]: 2025-10-02 13:46:36.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:37 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:38.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:38 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:38 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:38 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:38.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:39 np0005466031 nova_compute[235803]: 2025-10-02 13:46:39.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:40.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:40 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:40 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:46:40 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:40.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:46:41 np0005466031 nova_compute[235803]: 2025-10-02 13:46:41.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:42 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:42.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:46:42 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 76K writes, 309K keys, 76K commit groups, 1.0 writes per commit group, ingest: 0.31 GB, 0.04 MB/s#012Cumulative WAL: 76K writes, 28K syncs, 2.71 writes per sync, written: 0.31 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 490 writes, 749 keys, 490 commit groups, 1.0 writes per commit group, ingest: 0.24 MB, 0.00 MB/s#012Interval WAL: 490 writes, 244 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:46:42 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:42 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:46:42 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:42.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:46:43 np0005466031 ceph-mgr[76697]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 09:46:44 np0005466031 nova_compute[235803]: 2025-10-02 13:46:44.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:44.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:44 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:44 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:46:44 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:44.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:46:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:46.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:46 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:46 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:46 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:46 np0005466031 nova_compute[235803]: 2025-10-02 13:46:46.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:47 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:47 np0005466031 nova_compute[235803]: 2025-10-02 13:46:47.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:48.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:48 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:48 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:46:48 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:48.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:46:49 np0005466031 nova_compute[235803]: 2025-10-02 13:46:49.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:50.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:50 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:50 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:50 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:50.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:52 np0005466031 nova_compute[235803]: 2025-10-02 13:46:52.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:52 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:52.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:52 np0005466031 podman[357567]: 2025-10-02 13:46:52.669281173 +0000 UTC m=+0.087051660 container health_status c5ee4e13899d02c28d37ca7b6cc0b73dc95ed0c759e855bee6977b1e6b24a019 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 09:46:52 np0005466031 podman[357568]: 2025-10-02 13:46:52.723332 +0000 UTC m=+0.143431944 container health_status f4536cee1edc590d4febe1d1f7b891f8274afb72e48c666b42f28e6cbc619097 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:46:52 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:52 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:52 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:52.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:54.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:54 np0005466031 nova_compute[235803]: 2025-10-02 13:46:54.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:54 np0005466031 nova_compute[235803]: 2025-10-02 13:46:54.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:54 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:54 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:54 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:54.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:46:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:56.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:46:56 np0005466031 nova_compute[235803]: 2025-10-02 13:46:56.637 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:56 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:56 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:56 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:56.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:57 np0005466031 nova_compute[235803]: 2025-10-02 13:46:57.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:57 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:57 np0005466031 nova_compute[235803]: 2025-10-02 13:46:57.633 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:57 np0005466031 nova_compute[235803]: 2025-10-02 13:46:57.634 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:57 np0005466031 nova_compute[235803]: 2025-10-02 13:46:57.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:46:57 np0005466031 nova_compute[235803]: 2025-10-02 13:46:57.635 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:46:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:58.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:58 np0005466031 podman[357616]: 2025-10-02 13:46:58.630004841 +0000 UTC m=+0.060839374 container health_status d404706841b45f82ae678f9789f0e0cb9a06bb55d5e9a2cd213842eb0336b06a (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:46:58 np0005466031 podman[357615]: 2025-10-02 13:46:58.670466137 +0000 UTC m=+0.095925526 container health_status 552249b37a7fceb15c754898c88dda6d0f5e306998e74393a54a782be728c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001)
Oct  2 09:46:58 np0005466031 nova_compute[235803]: 2025-10-02 13:46:58.767 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:46:58 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:46:58 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:58 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:58.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:58 np0005466031 systemd-logind[786]: New session 74 of user zuul.
Oct  2 09:46:58 np0005466031 systemd[1]: Started Session 74 of User zuul.
Oct  2 09:46:59 np0005466031 nova_compute[235803]: 2025-10-02 13:46:59.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:00.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:00 np0005466031 nova_compute[235803]: 2025-10-02 13:47:00.635 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:00 np0005466031 nova_compute[235803]: 2025-10-02 13:47:00.770 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:47:00 np0005466031 nova_compute[235803]: 2025-10-02 13:47:00.770 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:47:00 np0005466031 nova_compute[235803]: 2025-10-02 13:47:00.770 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:47:00 np0005466031 nova_compute[235803]: 2025-10-02 13:47:00.770 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:47:00 np0005466031 nova_compute[235803]: 2025-10-02 13:47:00.771 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:47:00 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:00 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:00 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:00.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:01 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:47:01 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/811443912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:47:01 np0005466031 nova_compute[235803]: 2025-10-02 13:47:01.191 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:47:01 np0005466031 nova_compute[235803]: 2025-10-02 13:47:01.346 2 WARNING nova.virt.libvirt.driver [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:47:01 np0005466031 nova_compute[235803]: 2025-10-02 13:47:01.347 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4051MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:47:01 np0005466031 nova_compute[235803]: 2025-10-02 13:47:01.347 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:47:01 np0005466031 nova_compute[235803]: 2025-10-02 13:47:01.348 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:47:01 np0005466031 nova_compute[235803]: 2025-10-02 13:47:01.540 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:47:01 np0005466031 nova_compute[235803]: 2025-10-02 13:47:01.540 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:47:01 np0005466031 nova_compute[235803]: 2025-10-02 13:47:01.560 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing inventories for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:47:01 np0005466031 nova_compute[235803]: 2025-10-02 13:47:01.657 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating ProviderTree inventory for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:47:01 np0005466031 nova_compute[235803]: 2025-10-02 13:47:01.658 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Updating inventory in ProviderTree for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:47:01 np0005466031 nova_compute[235803]: 2025-10-02 13:47:01.680 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing aggregate associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:47:01 np0005466031 nova_compute[235803]: 2025-10-02 13:47:01.732 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Refreshing trait associations for resource provider f694d536-1dcd-4bb3-8516-534a40cdf6d7, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:47:01 np0005466031 nova_compute[235803]: 2025-10-02 13:47:01.762 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:47:02 np0005466031 nova_compute[235803]: 2025-10-02 13:47:02.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:47:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:47:02 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/853717363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:47:02 np0005466031 nova_compute[235803]: 2025-10-02 13:47:02.210 2 DEBUG oslo_concurrency.processutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:47:02 np0005466031 nova_compute[235803]: 2025-10-02 13:47:02.216 2 DEBUG nova.compute.provider_tree [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed in ProviderTree for provider: f694d536-1dcd-4bb3-8516-534a40cdf6d7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:47:02 np0005466031 nova_compute[235803]: 2025-10-02 13:47:02.285 2 DEBUG nova.scheduler.client.report [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Inventory has not changed for provider f694d536-1dcd-4bb3-8516-534a40cdf6d7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:47:02 np0005466031 nova_compute[235803]: 2025-10-02 13:47:02.287 2 DEBUG nova.compute.resource_tracker [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:47:02 np0005466031 nova_compute[235803]: 2025-10-02 13:47:02.288 2 DEBUG oslo_concurrency.lockutils [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:47:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:47:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:02.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:47:02 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  2 09:47:02 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3447872947' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  2 09:47:02 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:02 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:02 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:02.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:04 np0005466031 nova_compute[235803]: 2025-10-02 13:47:04.289 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:04 np0005466031 nova_compute[235803]: 2025-10-02 13:47:04.289 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:04 np0005466031 nova_compute[235803]: 2025-10-02 13:47:04.289 2 DEBUG nova.compute.manager [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:47:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:04.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:04 np0005466031 nova_compute[235803]: 2025-10-02 13:47:04.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:04 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:04 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:04 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:04.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:47:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2758744299' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:47:05 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:47:05 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2758744299' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:47:05 np0005466031 ovs-vsctl[357990]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  2 09:47:06 np0005466031 virtqemud[235323]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  2 09:47:06 np0005466031 virtqemud[235323]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  2 09:47:06 np0005466031 virtqemud[235323]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  2 09:47:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:06.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:06 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: cache status {prefix=cache status} (starting...)
Oct  2 09:47:06 np0005466031 lvm[358307]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 09:47:06 np0005466031 lvm[358307]: VG ceph_vg0 finished
Oct  2 09:47:06 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: client ls {prefix=client ls} (starting...)
Oct  2 09:47:06 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:06 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:47:06 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:06.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:47:07 np0005466031 nova_compute[235803]: 2025-10-02 13:47:07.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:47:07 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: damage ls {prefix=damage ls} (starting...)
Oct  2 09:47:07 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: dump loads {prefix=dump loads} (starting...)
Oct  2 09:47:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct  2 09:47:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3618812975' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  2 09:47:07 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  2 09:47:07 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  2 09:47:07 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 09:47:07 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1511379920' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 09:47:08 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  2 09:47:08 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  2 09:47:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:08.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  2 09:47:08 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4261183795' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  2 09:47:08 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  2 09:47:08 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  2 09:47:08 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: ops {prefix=ops} (starting...)
Oct  2 09:47:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct  2 09:47:08 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2575496155' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  2 09:47:08 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:08 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:08 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:08.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:08 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct  2 09:47:08 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2997921120' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  2 09:47:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  2 09:47:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3094434344' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  2 09:47:09 np0005466031 nova_compute[235803]: 2025-10-02 13:47:09.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 09:47:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3431451301' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 09:47:09 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: session ls {prefix=session ls} (starting...)
Oct  2 09:47:09 np0005466031 ceph-mds[84762]: mds.cephfs.compute-2.dtavud asok_command: status {prefix=status} (starting...)
Oct  2 09:47:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 09:47:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/356698630' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 09:47:09 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct  2 09:47:09 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/165376371' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  2 09:47:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 09:47:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/649391731' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 09:47:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:47:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:10.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:47:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct  2 09:47:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2648762803' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  2 09:47:10 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 09:47:10 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3411214287' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 09:47:10 np0005466031 nova_compute[235803]: 2025-10-02 13:47:10.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:10 np0005466031 nova_compute[235803]: 2025-10-02 13:47:10.636 2 DEBUG oslo_service.periodic_task [None req-1c07da4c-ae97-44e8-abb4-1942aa467ea6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:10 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:10 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:47:10 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:10.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:47:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  2 09:47:11 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3505966985' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  2 09:47:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct  2 09:47:11 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3993059270' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  2 09:47:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 09:47:11 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1295786025' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 09:47:11 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct  2 09:47:11 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/809149563' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  2 09:47:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 09:47:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2425987190' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 09:47:12 np0005466031 nova_compute[235803]: 2025-10-02 13:47:12.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:47:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:47:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:12.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:47:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 09:47:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/402183147' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4963969 data_alloc: 218103808 data_used: 4780032
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e33dd3c00 session 0x559e321f0b40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4963969 data_alloc: 218103808 data_used: 4780032
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4963969 data_alloc: 218103808 data_used: 4780032
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.883484840s of 14.927380562s, submitted: 17
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e34a1ac00 session 0x559e321c9860
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525836288 unmapped: 72114176 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5030487 data_alloc: 218103808 data_used: 4780032
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525844480 unmapped: 72105984 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525844480 unmapped: 72105984 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e363d4b40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e363d52c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525844480 unmapped: 72105984 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e32973400 session 0x559e363f4960
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525713408 unmapped: 72237056 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e33dd3c00 session 0x559e355a4960
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525713408 unmapped: 72237056 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5031236 data_alloc: 218103808 data_used: 4780032
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5092036 data_alloc: 218103808 data_used: 13369344
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5092036 data_alloc: 218103808 data_used: 13369344
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 525672448 unmapped: 72278016 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.852407455s of 18.952495575s, submitted: 18
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19ad95000/0x0/0x1bfc00000, data 0x1931831/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [0,0,0,0,1,16])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529211392 unmapped: 68739072 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529440768 unmapped: 68509696 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529440768 unmapped: 68509696 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529440768 unmapped: 68509696 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5204464 data_alloc: 234881024 data_used: 14237696
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529440768 unmapped: 68509696 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a195000/0x0/0x1bfc00000, data 0x2531831/0x2739000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529440768 unmapped: 68509696 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529440768 unmapped: 68509696 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e379e4800 session 0x559e31c0fc20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5198752 data_alloc: 234881024 data_used: 14245888
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a175000/0x0/0x1bfc00000, data 0x2551831/0x2759000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.727799416s of 13.010139465s, submitted: 88
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a175000/0x0/0x1bfc00000, data 0x2551831/0x2759000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e321daf00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5199008 data_alloc: 234881024 data_used: 14245888
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e36517400 session 0x559e355a4000
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e35293000 session 0x559e355a50e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 68378624 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524328960 unmapped: 73621504 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e345630e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4975703 data_alloc: 218103808 data_used: 4780032
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4975703 data_alloc: 218103808 data_used: 4780032
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4975703 data_alloc: 218103808 data_used: 4780032
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524337152 unmapped: 73613312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e32973400 session 0x559e363f5a40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e32973400 session 0x559e3732c1e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e321d5680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e321c9a40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.897541046s of 17.429567337s, submitted: 30
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e35293000 session 0x559e31c11e00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e36517400 session 0x559e3487a000
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e36517400 session 0x559e34644000
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e32d4d2c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e363d45a0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19aff9000/0x0/0x1bfc00000, data 0x16cc841/0x18d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5020167 data_alloc: 218103808 data_used: 4780032
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e32973400 session 0x559e34420b40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e35293000 session 0x559e34696000
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e35293000 session 0x559e3487a5a0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e321fa000
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19aff9000/0x0/0x1bfc00000, data 0x16cc841/0x18d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 72K writes, 294K keys, 72K commit groups, 1.0 writes per commit group, ingest: 0.30 GB, 0.06 MB/s#012Cumulative WAL: 72K writes, 26K syncs, 2.73 writes per sync, written: 0.30 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7949 writes, 30K keys, 7949 commit groups, 1.0 writes per commit group, ingest: 32.65 MB, 0.05 MB/s#012Interval WAL: 7949 writes, 3033 syncs, 2.62 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524345344 unmapped: 73605120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5035458 data_alloc: 218103808 data_used: 6471680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19aff8000/0x0/0x1bfc00000, data 0x16cc851/0x18d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19aff8000/0x0/0x1bfc00000, data 0x16cc851/0x18d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5067938 data_alloc: 218103808 data_used: 11075584
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: mgrc ms_handle_reset ms_handle_reset con 0x559e3493cc00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: mgrc handle_mgr_configure stats_period=5
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19aff8000/0x0/0x1bfc00000, data 0x16cc851/0x18d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19aff8000/0x0/0x1bfc00000, data 0x16cc851/0x18d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 73588736 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5067938 data_alloc: 218103808 data_used: 11075584
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.283088684s of 19.358438492s, submitted: 14
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 526761984 unmapped: 71188480 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a6b1000/0x0/0x1bfc00000, data 0x2013851/0x221d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [0,1])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527450112 unmapped: 70500352 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527450112 unmapped: 70500352 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a689000/0x0/0x1bfc00000, data 0x203b851/0x2245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527450112 unmapped: 70500352 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527450112 unmapped: 70500352 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5145622 data_alloc: 218103808 data_used: 11075584
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527450112 unmapped: 70500352 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527450112 unmapped: 70500352 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a689000/0x0/0x1bfc00000, data 0x203b851/0x2245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5143902 data_alloc: 218103808 data_used: 11075584
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e36517400 session 0x559e31fcb680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a686000/0x0/0x1bfc00000, data 0x203e851/0x2248000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527409152 unmapped: 70541312 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19a686000/0x0/0x1bfc00000, data 0x203e851/0x2248000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527417344 unmapped: 70533120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5143902 data_alloc: 218103808 data_used: 11075584
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e33dd3c00 session 0x559e380b4d20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527417344 unmapped: 70533120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e363d5a40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.799705505s of 15.938135147s, submitted: 58
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e32973400 session 0x559e3487ab40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 527417344 unmapped: 70533120 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522027008 unmapped: 75923456 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e355a5680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522027008 unmapped: 75923456 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522027008 unmapped: 75923456 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4986411 data_alloc: 218103808 data_used: 4780032
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522027008 unmapped: 75923456 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522027008 unmapped: 75923456 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522027008 unmapped: 75923456 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e345630e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e33dd3c00 session 0x559e355a50e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 522043392 unmapped: 75907072 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [0,0,0,0,0,1,1,1])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524189696 unmapped: 73760768 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e35293000 session 0x559e380b5a40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4986555 data_alloc: 218103808 data_used: 4780032
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e31fe4400 session 0x559e31c0fa40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524214272 unmapped: 73736192 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.477128983s of 10.101735115s, submitted: 272
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524222464 unmapped: 73728000 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e321a7000 session 0x559e31c0fc20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e32973400 session 0x559e321f2960
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4985842 data_alloc: 218103808 data_used: 4780032
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5fa000/0x0/0x1bfc00000, data 0x10cc831/0x12d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4985842 data_alloc: 218103808 data_used: 4780032
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 ms_handle_reset con 0x559e33dd3c00 session 0x559e3732d860
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 heartbeat osd_stat(store_statfs(0x19b5f9000/0x0/0x1bfc00000, data 0x10cc841/0x12d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524238848 unmapped: 73711616 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.119676590s of 11.755991936s, submitted: 17
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 396 ms_handle_reset con 0x559e36517400 session 0x559e31faf0e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524124160 unmapped: 73826304 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 396 ms_handle_reset con 0x559e36517400 session 0x559e346b2d20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524132352 unmapped: 73818112 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4991828 data_alloc: 218103808 data_used: 4792320
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 397 ms_handle_reset con 0x559e31fe4400 session 0x559e3732cf00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 397 heartbeat osd_stat(store_statfs(0x19b5f2000/0x0/0x1bfc00000, data 0x10d0147/0x12db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 397 ms_handle_reset con 0x559e321a7000 session 0x559e32e44d20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 397 ms_handle_reset con 0x559e32973400 session 0x559e345632c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4994082 data_alloc: 218103808 data_used: 4792320
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 397 heartbeat osd_stat(store_statfs(0x19b5f3000/0x0/0x1bfc00000, data 0x10d0137/0x12da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 397 heartbeat osd_stat(store_statfs(0x19b5f3000/0x0/0x1bfc00000, data 0x10d0137/0x12da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524140544 unmapped: 73809920 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4997056 data_alloc: 218103808 data_used: 4792320
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 398 heartbeat osd_stat(store_statfs(0x19b5f0000/0x0/0x1bfc00000, data 0x10d1c76/0x12dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 398 heartbeat osd_stat(store_statfs(0x19b5f0000/0x0/0x1bfc00000, data 0x10d1c76/0x12dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524165120 unmapped: 73785344 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4997056 data_alloc: 218103808 data_used: 4792320
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 398 heartbeat osd_stat(store_statfs(0x19b5f0000/0x0/0x1bfc00000, data 0x10d1c76/0x12dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4997056 data_alloc: 218103808 data_used: 4792320
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 398 heartbeat osd_stat(store_statfs(0x19b5f0000/0x0/0x1bfc00000, data 0x10d1c76/0x12dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524173312 unmapped: 73777152 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524181504 unmapped: 73768960 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524181504 unmapped: 73768960 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4997056 data_alloc: 218103808 data_used: 4792320
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 398 heartbeat osd_stat(store_statfs(0x19b5f0000/0x0/0x1bfc00000, data 0x10d1c76/0x12dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.285722733s of 27.344110489s, submitted: 24
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524181504 unmapped: 73768960 heap: 597950464 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e33dd3c00 session 0x559e321db860
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524189696 unmapped: 82157568 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524189696 unmapped: 82157568 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524189696 unmapped: 82157568 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19adec000/0x0/0x1bfc00000, data 0x18d38df/0x1ae1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524189696 unmapped: 82157568 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5057012 data_alloc: 218103808 data_used: 4800512
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e33dd3c00 session 0x559e31c0ed20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e31fe4400 session 0x559e32e150e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e321a7000 session 0x559e35ba9680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 524189696 unmapped: 82157568 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e32973400 session 0x559e321d5860
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 76775424 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e36517400 session 0x559e32d823c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 76775424 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19adec000/0x0/0x1bfc00000, data 0x18d38df/0x1ae1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e36517400 session 0x559e31c0e000
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529571840 unmapped: 76775424 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e31fe4400 session 0x559e348ea960
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e321a7000 session 0x559e355a4960
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 75587584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5165625 data_alloc: 218103808 data_used: 11612160
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 75587584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 75587584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a36e000/0x0/0x1bfc00000, data 0x2351941/0x2560000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 75587584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.993896484s of 13.169622421s, submitted: 39
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e32973400 session 0x559e355a5e00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 75587584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a36e000/0x0/0x1bfc00000, data 0x2351941/0x2560000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 75579392 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a36e000/0x0/0x1bfc00000, data 0x2351941/0x2560000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5165625 data_alloc: 218103808 data_used: 11612160
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a36e000/0x0/0x1bfc00000, data 0x2351941/0x2560000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530857984 unmapped: 75489280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a36e000/0x0/0x1bfc00000, data 0x2351941/0x2560000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5241145 data_alloc: 234881024 data_used: 22200320
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 ms_handle_reset con 0x559e3e6d7400 session 0x559e3732d0e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a36d000/0x0/0x1bfc00000, data 0x2351951/0x2561000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5241277 data_alloc: 234881024 data_used: 22200320
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.166830063s of 12.190934181s, submitted: 2
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532889600 unmapped: 73457664 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 400 ms_handle_reset con 0x559e31fe4400 session 0x559e363d41e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536436736 unmapped: 69910528 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 400 heartbeat osd_stat(store_statfs(0x1992ef000/0x0/0x1bfc00000, data 0x33ce5aa/0x35df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5376871 data_alloc: 234881024 data_used: 23175168
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 400 heartbeat osd_stat(store_statfs(0x1992ce000/0x0/0x1bfc00000, data 0x33ef5aa/0x3600000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5376655 data_alloc: 234881024 data_used: 23179264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 400 heartbeat osd_stat(store_statfs(0x1992ce000/0x0/0x1bfc00000, data 0x33ef5aa/0x3600000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 400 heartbeat osd_stat(store_statfs(0x1992ce000/0x0/0x1bfc00000, data 0x33ef5aa/0x3600000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537468928 unmapped: 68878336 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.866960526s of 13.409070015s, submitted: 159
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537550848 unmapped: 68796416 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5376603 data_alloc: 234881024 data_used: 23179264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537550848 unmapped: 68796416 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 400 heartbeat osd_stat(store_statfs(0x1992c3000/0x0/0x1bfc00000, data 0x33fa5aa/0x360b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537550848 unmapped: 68796416 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537550848 unmapped: 68796416 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 400 ms_handle_reset con 0x559e321a7000 session 0x559e346970e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537567232 unmapped: 68780032 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 401 ms_handle_reset con 0x559e32973400 session 0x559e346b2f00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5385877 data_alloc: 234881024 data_used: 23183360
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 401 heartbeat osd_stat(store_statfs(0x1992ab000/0x0/0x1bfc00000, data 0x340f257/0x3621000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.273681641s of 10.380677223s, submitted: 41
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 401 ms_handle_reset con 0x559e33dd3c00 session 0x559e363f50e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 401 heartbeat osd_stat(store_statfs(0x1992a9000/0x0/0x1bfc00000, data 0x3413257/0x3625000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 401 ms_handle_reset con 0x559e3e6d7400 session 0x559e31f13e00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5381037 data_alloc: 234881024 data_used: 23183360
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537575424 unmapped: 68771840 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 401 heartbeat osd_stat(store_statfs(0x1992a9000/0x0/0x1bfc00000, data 0x3413257/0x3625000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 401 handle_osd_map epochs [402,402], i have 402, src has [1,402]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 401 handle_osd_map epochs [402,402], i have 402, src has [1,402]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a5000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5387419 data_alloc: 234881024 data_used: 23449600
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a5000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5387419 data_alloc: 234881024 data_used: 23449600
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537583616 unmapped: 68763648 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a5000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537591808 unmapped: 68755456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.241802216s of 14.524361610s, submitted: 12
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5408483 data_alloc: 234881024 data_used: 24846336
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a4000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5406307 data_alloc: 234881024 data_used: 24854528
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a6000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a6000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5417507 data_alloc: 234881024 data_used: 27004928
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a6000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.198542595s of 13.242837906s, submitted: 25
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a6000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5416979 data_alloc: 234881024 data_used: 27004928
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a6000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a6000/0x0/0x1bfc00000, data 0x3414d96/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 ms_handle_reset con 0x559e3e6d7400 session 0x559e355a5c20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 ms_handle_reset con 0x559e321a7000 session 0x559e346b2000
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 ms_handle_reset con 0x559e32973400 session 0x559e37f710e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5416099 data_alloc: 234881024 data_used: 27000832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 ms_handle_reset con 0x559e31fe4400 session 0x559e34420780
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 ms_handle_reset con 0x559e33dd3c00 session 0x559e34562f00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x1992a7000/0x0/0x1bfc00000, data 0x3414d86/0x3627000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5102671 data_alloc: 218103808 data_used: 11649024
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x19aa39000/0x0/0x1bfc00000, data 0x18d8d24/0x1aea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 heartbeat osd_stat(store_statfs(0x19aa39000/0x0/0x1bfc00000, data 0x18d8d24/0x1aea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.055377960s of 15.151789665s, submitted: 31
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 403 ms_handle_reset con 0x559e33dd3c00 session 0x559e31f8fe00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533700608 unmapped: 72646656 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 403 heartbeat osd_stat(store_statfs(0x19b5e1000/0x0/0x1bfc00000, data 0x10da9c1/0x12ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 404 ms_handle_reset con 0x559e31fe4400 session 0x559e31c110e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5091919 data_alloc: 218103808 data_used: 4849664
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 404 heartbeat osd_stat(store_statfs(0x19addd000/0x0/0x1bfc00000, data 0x18dc659/0x1af0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 404 heartbeat osd_stat(store_statfs(0x19addd000/0x0/0x1bfc00000, data 0x18dc659/0x1af0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5092079 data_alloc: 218103808 data_used: 4853760
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533716992 unmapped: 72630272 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533725184 unmapped: 72622080 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e321a7000 session 0x559e321c9860
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e32973400 session 0x559e380b43c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3e6d7400 session 0x559e34697c20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533725184 unmapped: 72622080 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19adda000/0x0/0x1bfc00000, data 0x18de198/0x1af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 72613888 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3e6d7400 session 0x559e321f0b40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533733376 unmapped: 72613888 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5069037 data_alloc: 218103808 data_used: 10473472
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e31fe4400 session 0x559e355a4f00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536092672 unmapped: 70254592 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536092672 unmapped: 70254592 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19adda000/0x0/0x1bfc00000, data 0x18de198/0x1af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19adda000/0x0/0x1bfc00000, data 0x18de198/0x1af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536092672 unmapped: 70254592 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3689e400 session 0x559e34563e00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.949297905s of 16.216978073s, submitted: 36
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536092672 unmapped: 70254592 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493e400 session 0x559e380b45a0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19acc6000/0x0/0x1bfc00000, data 0x19f21a8/0x1c08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493f800 session 0x559e34562b40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493f800 session 0x559e34696000
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e31fe4400 session 0x559e37f701e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5185125 data_alloc: 218103808 data_used: 10477568
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493e400 session 0x559e34562d20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3689e400 session 0x559e3732da40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3e6d7400 session 0x559e321f3680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537608192 unmapped: 68739072 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199f81000/0x0/0x1bfc00000, data 0x27371a8/0x294d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e31fe4400 session 0x559e32ee4000
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493e400 session 0x559e345621e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537706496 unmapped: 68640768 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199f5b000/0x0/0x1bfc00000, data 0x275b1db/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537706496 unmapped: 68640768 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537714688 unmapped: 68632576 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5221372 data_alloc: 234881024 data_used: 14692352
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199f5b000/0x0/0x1bfc00000, data 0x275b1db/0x2973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.183856964s of 11.327140808s, submitted: 31
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e36012000 session 0x559e344205a0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:12 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:12 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:12.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5290644 data_alloc: 234881024 data_used: 24027136
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199f5a000/0x0/0x1bfc00000, data 0x275b23d/0x2974000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540180480 unmapped: 66166784 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 541229056 unmapped: 65118208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 541229056 unmapped: 65118208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5290776 data_alloc: 234881024 data_used: 24027136
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199f5a000/0x0/0x1bfc00000, data 0x275b23d/0x2974000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 541229056 unmapped: 65118208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543694848 unmapped: 62652416 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199799000/0x0/0x1bfc00000, data 0x2f1423d/0x312d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,6])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542760960 unmapped: 63586304 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x1993f7000/0x0/0x1bfc00000, data 0x32be23d/0x34d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542760960 unmapped: 63586304 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 543932416 unmapped: 62414848 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5387926 data_alloc: 234881024 data_used: 24592384
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544071680 unmapped: 62275584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.682239532s of 11.880360603s, submitted: 96
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544071680 unmapped: 62275584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544071680 unmapped: 62275584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x1980fc000/0x0/0x1bfc00000, data 0x341823d/0x3631000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544071680 unmapped: 62275584 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x1980fa000/0x0/0x1bfc00000, data 0x341b23d/0x3634000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544292864 unmapped: 62054400 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5404446 data_alloc: 234881024 data_used: 25628672
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544292864 unmapped: 62054400 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5413246 data_alloc: 234881024 data_used: 25882624
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x1980be000/0x0/0x1bfc00000, data 0x345723d/0x3670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.788679123s of 10.885962486s, submitted: 27
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x1980a3000/0x0/0x1bfc00000, data 0x347223d/0x368b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5413850 data_alloc: 234881024 data_used: 25890816
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544301056 unmapped: 62046208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x198092000/0x0/0x1bfc00000, data 0x348223d/0x369b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5415726 data_alloc: 234881024 data_used: 25903104
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x198092000/0x0/0x1bfc00000, data 0x348223d/0x369b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5415726 data_alloc: 234881024 data_used: 25903104
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x198092000/0x0/0x1bfc00000, data 0x348223d/0x369b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.196617126s of 16.837764740s, submitted: 12
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5416902 data_alloc: 234881024 data_used: 25935872
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807b000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493f800 session 0x559e3732d680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3689e400 session 0x559e31c10780
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3691d000 session 0x559e35ba9680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807b000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544309248 unmapped: 62038016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5417846 data_alloc: 234881024 data_used: 25980928
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807b000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5408086 data_alloc: 234881024 data_used: 26005504
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807b000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807b000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 61964288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5408086 data_alloc: 234881024 data_used: 26005504
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.805000305s of 16.312940598s, submitted: 7
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544391168 unmapped: 61956096 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544391168 unmapped: 61956096 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544391168 unmapped: 61956096 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544391168 unmapped: 61956096 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807b000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5421270 data_alloc: 234881024 data_used: 26669056
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807d000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5421398 data_alloc: 234881024 data_used: 26673152
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.173282623s of 11.213999748s, submitted: 11
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807d000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5420870 data_alloc: 234881024 data_used: 26673152
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e31fe4400 session 0x559e34421a40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x19807d000/0x0/0x1bfc00000, data 0x349823d/0x36b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 61947904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493f800 session 0x559e321d81e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x198075000/0x0/0x1bfc00000, data 0x349d23d/0x36b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [0,0,0,2])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e36012000 session 0x559e363f50e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 61939712 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3689e400 session 0x559e329d41e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3493e400 session 0x559e34697680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 61939712 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 ms_handle_reset con 0x559e3689e400 session 0x559e321db680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535543808 unmapped: 70803456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5086099 data_alloc: 218103808 data_used: 10477568
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535543808 unmapped: 70803456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199c3a000/0x0/0x1bfc00000, data 0x18de198/0x1af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535543808 unmapped: 70803456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535543808 unmapped: 70803456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535543808 unmapped: 70803456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.649071693s of 12.372345924s, submitted: 56
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 heartbeat osd_stat(store_statfs(0x199c3a000/0x0/0x1bfc00000, data 0x18de198/0x1af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 406 handle_osd_map epochs [406,406], i have 406, src has [1,406]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535543808 unmapped: 70803456 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5090113 data_alloc: 218103808 data_used: 10420224
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 406 ms_handle_reset con 0x559e31fe4400 session 0x559e363f4d20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 406 heartbeat osd_stat(store_statfs(0x19a437000/0x0/0x1bfc00000, data 0x10dfe22/0x12f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5015361 data_alloc: 218103808 data_used: 3670016
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 406 heartbeat osd_stat(store_statfs(0x19a437000/0x0/0x1bfc00000, data 0x10dfe22/0x12f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528809984 unmapped: 77537280 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528818176 unmapped: 77529088 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528818176 unmapped: 77529088 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528818176 unmapped: 77529088 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528826368 unmapped: 77520896 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528826368 unmapped: 77520896 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528826368 unmapped: 77520896 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528826368 unmapped: 77520896 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528826368 unmapped: 77520896 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528834560 unmapped: 77512704 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528834560 unmapped: 77512704 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528834560 unmapped: 77512704 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528834560 unmapped: 77512704 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528842752 unmapped: 77504512 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528842752 unmapped: 77504512 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528850944 unmapped: 77496320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528850944 unmapped: 77496320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018303 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493f800 session 0x559e321f23c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e36012000 session 0x559e355a41e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e31fe4400 session 0x559e32d823c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493e400 session 0x559e31c0f0e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 50.496902466s of 51.781383514s, submitted: 33
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 528850944 unmapped: 77496320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493f800 session 0x559e32e67c20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3689e400 session 0x559e348ea3c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3691d000 session 0x559e329d41e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3691d000 session 0x559e321d81e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e31fe4400 session 0x559e345621e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529965056 unmapped: 76382208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a434000/0x0/0x1bfc00000, data 0x10e199a/0x12fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529965056 unmapped: 76382208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199d11000/0x0/0x1bfc00000, data 0x18049d3/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529965056 unmapped: 76382208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529965056 unmapped: 76382208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5085455 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199d11000/0x0/0x1bfc00000, data 0x18049d3/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529965056 unmapped: 76382208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493e400 session 0x559e321f3680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493f800 session 0x559e3732da40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 529965056 unmapped: 76382208 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199d11000/0x0/0x1bfc00000, data 0x18049d3/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3689e400 session 0x559e34562d20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530006016 unmapped: 76341248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3689e400 session 0x559e37f701e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199d11000/0x0/0x1bfc00000, data 0x18049d3/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530006016 unmapped: 76341248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530006016 unmapped: 76341248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5122818 data_alloc: 218103808 data_used: 8413184
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199ce6000/0x0/0x1bfc00000, data 0x182e9f6/0x1a48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5134178 data_alloc: 218103808 data_used: 10027008
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199ce6000/0x0/0x1bfc00000, data 0x182e9f6/0x1a48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530202624 unmapped: 76144640 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199ce6000/0x0/0x1bfc00000, data 0x182e9f6/0x1a48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530210816 unmapped: 76136448 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.313760757s of 19.464603424s, submitted: 52
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5168926 data_alloc: 218103808 data_used: 10100736
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532914176 unmapped: 73433088 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533962752 unmapped: 72384512 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533962752 unmapped: 72384512 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533962752 unmapped: 72384512 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533962752 unmapped: 72384512 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5252884 data_alloc: 218103808 data_used: 11071488
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x198fa2000/0x0/0x1bfc00000, data 0x25719f6/0x278b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x198fa2000/0x0/0x1bfc00000, data 0x25719f6/0x278b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5252884 data_alloc: 218103808 data_used: 11071488
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x198fa2000/0x0/0x1bfc00000, data 0x25719f6/0x278b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.516111374s of 14.813647270s, submitted: 97
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 533970944 unmapped: 72376320 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5252720 data_alloc: 218103808 data_used: 11071488
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e31fe4400 session 0x559e380b45a0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493e400 session 0x559e321f0b40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x198fa3000/0x0/0x1bfc00000, data 0x25719f6/0x278b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [0,0,0,0,1])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530997248 unmapped: 75350016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493f800 session 0x559e380b52c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530997248 unmapped: 75350016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530997248 unmapped: 75350016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530997248 unmapped: 75350016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530997248 unmapped: 75350016 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531005440 unmapped: 75341824 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531005440 unmapped: 75341824 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531005440 unmapped: 75341824 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531005440 unmapped: 75341824 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531005440 unmapped: 75341824 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 75333632 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531021824 unmapped: 75325440 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531021824 unmapped: 75325440 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531021824 unmapped: 75325440 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531030016 unmapped: 75317248 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 75309056 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 75309056 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 75309056 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 75309056 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 75309056 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 75309056 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531054592 unmapped: 75292672 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531054592 unmapped: 75292672 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531054592 unmapped: 75292672 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 75284480 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531070976 unmapped: 75276288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531070976 unmapped: 75276288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531070976 unmapped: 75276288 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19a435000/0x0/0x1bfc00000, data 0x10e1961/0x12f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531079168 unmapped: 75268096 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5034980 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531079168 unmapped: 75268096 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 75259904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 75259904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.737773895s of 59.510211945s, submitted: 52
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 75259904 heap: 606347264 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3691d000 session 0x559e355a4f00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531283968 unmapped: 82935808 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5141627 data_alloc: 218103808 data_used: 3674112
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3691d000 session 0x559e3732cd20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x199730000/0x0/0x1bfc00000, data 0x1de698a/0x1ffe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531283968 unmapped: 82935808 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e31fe4400 session 0x559e321c9680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531283968 unmapped: 82935808 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493e400 session 0x559e321c8d20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531283968 unmapped: 82935808 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3493f800 session 0x559e31fcb680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 ms_handle_reset con 0x559e3689e400 session 0x559e346452c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532488192 unmapped: 81731584 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19970b000/0x0/0x1bfc00000, data 0x1e0a9d3/0x2023000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532496384 unmapped: 81723392 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5144493 data_alloc: 218103808 data_used: 3678208
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532340736 unmapped: 81879040 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534683648 unmapped: 79536128 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534683648 unmapped: 79536128 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19970b000/0x0/0x1bfc00000, data 0x1e0a9d3/0x2023000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5232653 data_alloc: 234881024 data_used: 14741504
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19970b000/0x0/0x1bfc00000, data 0x1e0a9d3/0x2023000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19970b000/0x0/0x1bfc00000, data 0x1e0a9d3/0x2023000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5232653 data_alloc: 234881024 data_used: 14741504
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 79527936 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.774681091s of 17.730520248s, submitted: 41
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 79314944 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x19945e000/0x0/0x1bfc00000, data 0x20b79d3/0x22d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [0,0,0,0,0,3,2,2])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535044096 unmapped: 79175680 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x1992aa000/0x0/0x1bfc00000, data 0x225d9d3/0x2476000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535052288 unmapped: 79167488 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 79839232 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5276397 data_alloc: 234881024 data_used: 15114240
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 79839232 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x197cc7000/0x0/0x1bfc00000, data 0x22849d3/0x249d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5291905 data_alloc: 234881024 data_used: 15024128
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.657398224s of 10.153116226s, submitted: 88
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536518656 unmapped: 77701120 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 heartbeat osd_stat(store_statfs(0x197cdf000/0x0/0x1bfc00000, data 0x22859d3/0x249e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 407 handle_osd_map epochs [408,408], i have 408, src has [1,408]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536510464 unmapped: 77709312 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5289251 data_alloc: 234881024 data_used: 15032320
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 408 ms_handle_reset con 0x559e3691d000 session 0x559e3487af00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 408 ms_handle_reset con 0x559e3493f800 session 0x559e321c9a40
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536969216 unmapped: 77250560 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544342016 unmapped: 69877760 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 408 ms_handle_reset con 0x559e32232400 session 0x559e32d834a0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544366592 unmapped: 69853184 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 409 ms_handle_reset con 0x559e3dcf6400 session 0x559e344201e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544382976 unmapped: 69836800 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 409 heartbeat osd_stat(store_statfs(0x1976c8000/0x0/0x1bfc00000, data 0x289a2d9/0x2ab5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 409 handle_osd_map epochs [410,410], i have 410, src has [1,410]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 410 ms_handle_reset con 0x559e3691c400 session 0x559e3732d2c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544399360 unmapped: 69820416 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5382451 data_alloc: 234881024 data_used: 21319680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 410 ms_handle_reset con 0x559e32232400 session 0x559e363f4f00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 410 ms_handle_reset con 0x559e3493f800 session 0x559e347ff4a0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 410 ms_handle_reset con 0x559e3691d000 session 0x559e355a50e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 69812224 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 69812224 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 69812224 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 410 heartbeat osd_stat(store_statfs(0x1976c4000/0x0/0x1bfc00000, data 0x289bf4e/0x2ab8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 69812224 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 544407552 unmapped: 69812224 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5382451 data_alloc: 234881024 data_used: 21319680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.358633995s of 14.245987892s, submitted: 41
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 76177408 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3dcf6400 session 0x559e32e66d20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3e6c5400 session 0x559e34697860
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e32232400 session 0x559e34421680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3493f800 session 0x559e346cb4a0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3dcf6400 session 0x559e346b3860
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3691d000 session 0x559e346b30e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3296b800 session 0x559e321354a0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 76177408 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e32232400 session 0x559e363d43c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3493f800 session 0x559e346b2f00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538042368 unmapped: 76177408 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1976c2000/0x0/0x1bfc00000, data 0x289da8d/0x2abb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538050560 unmapped: 76169216 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538050560 unmapped: 76169216 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5367523 data_alloc: 234881024 data_used: 21327872
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538050560 unmapped: 76169216 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538050560 unmapped: 76169216 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538050560 unmapped: 76169216 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e3691d000 session 0x559e321d52c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538058752 unmapped: 76161024 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1976c3000/0x0/0x1bfc00000, data 0x289da8d/0x2abb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538066944 unmapped: 76152832 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373524 data_alloc: 234881024 data_used: 21962752
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1976c3000/0x0/0x1bfc00000, data 0x289da8d/0x2abb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1976c3000/0x0/0x1bfc00000, data 0x289da8d/0x2abb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5373524 data_alloc: 234881024 data_used: 21962752
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.111186981s of 16.436338425s, submitted: 30
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5374036 data_alloc: 234881024 data_used: 21954560
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1976c3000/0x0/0x1bfc00000, data 0x289da8d/0x2abb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 76136448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539541504 unmapped: 74678272 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539541504 unmapped: 74678272 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539541504 unmapped: 74678272 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539541504 unmapped: 74678272 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5414037 data_alloc: 234881024 data_used: 22958080
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1973b3000/0x0/0x1bfc00000, data 0x2bada8d/0x2dcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539541504 unmapped: 74678272 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1973b3000/0x0/0x1bfc00000, data 0x2bada8d/0x2dcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539541504 unmapped: 74678272 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1973b3000/0x0/0x1bfc00000, data 0x2bada8d/0x2dcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.186029434s of 10.228796005s, submitted: 5
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x1973b3000/0x0/0x1bfc00000, data 0x2bada8d/0x2dcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x197345000/0x0/0x1bfc00000, data 0x2c1ba8d/0x2e39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5414683 data_alloc: 234881024 data_used: 22958080
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x197345000/0x0/0x1bfc00000, data 0x2c1ba8d/0x2e39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5414683 data_alloc: 234881024 data_used: 22958080
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536764416 unmapped: 77455360 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x197345000/0x0/0x1bfc00000, data 0x2c1ba8d/0x2e39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5444555 data_alloc: 234881024 data_used: 24694784
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x197110000/0x0/0x1bfc00000, data 0x2e4da8d/0x306b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542171136 unmapped: 72048640 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542179328 unmapped: 72040448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5446475 data_alloc: 234881024 data_used: 24965120
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 542179328 unmapped: 72040448 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 ms_handle_reset con 0x559e321a6800 session 0x559e363d4f00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.819005966s of 18.885263443s, submitted: 14
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x197110000/0x0/0x1bfc00000, data 0x2e4da8d/0x306b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 heartbeat osd_stat(store_statfs(0x197113000/0x0/0x1bfc00000, data 0x2e4da8d/0x306b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539893760 unmapped: 74326016 heap: 614219776 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 412 handle_osd_map epochs [412,412], i have 412, src has [1,412]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 412 ms_handle_reset con 0x559e36435400 session 0x559e32135680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 412 ms_handle_reset con 0x559e32232400 session 0x559e346963c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 412 ms_handle_reset con 0x559e321a6800 session 0x559e34696960
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 412 ms_handle_reset con 0x559e3493f800 session 0x559e321c81e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 550871040 unmapped: 67551232 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 412 handle_osd_map epochs [413,413], i have 413, src has [1,413]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 413 ms_handle_reset con 0x559e3691d000 session 0x559e363f43c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545439744 unmapped: 72982528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 414 ms_handle_reset con 0x559e3689f000 session 0x559e321d43c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545447936 unmapped: 72974336 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5641245 data_alloc: 251658240 data_used: 32403456
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 414 ms_handle_reset con 0x559e3689f000 session 0x559e329d43c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 414 ms_handle_reset con 0x559e321a6800 session 0x559e380b43c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 414 ms_handle_reset con 0x559e32232400 session 0x559e321fa1e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 414 heartbeat osd_stat(store_statfs(0x195acb000/0x0/0x1bfc00000, data 0x4490008/0x46b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545456128 unmapped: 72966144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545456128 unmapped: 72966144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545472512 unmapped: 72949760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 415 heartbeat osd_stat(store_statfs(0x195ac9000/0x0/0x1bfc00000, data 0x4491cd1/0x46b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 415 ms_handle_reset con 0x559e3493f800 session 0x559e321db0e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545472512 unmapped: 72949760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 415 ms_handle_reset con 0x559e3dcf6400 session 0x559e31f8f680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 415 ms_handle_reset con 0x559e32f09400 session 0x559e35ba8780
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 415 heartbeat osd_stat(store_statfs(0x197107000/0x0/0x1bfc00000, data 0x2e54cd1/0x3077000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545480704 unmapped: 72941568 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5485319 data_alloc: 251658240 data_used: 32403456
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545488896 unmapped: 72933376 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.492953300s of 10.028572083s, submitted: 121
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 416 ms_handle_reset con 0x559e321a6800 session 0x559e31fca3c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 72925184 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 72925184 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 72925184 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 72925184 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5423905 data_alloc: 234881024 data_used: 26447872
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Oct  2 09:47:12 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 417 heartbeat osd_stat(store_statfs(0x1976b3000/0x0/0x1bfc00000, data 0x28a682c/0x2aca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 417 handle_osd_map epochs [418,418], i have 418, src has [1,418]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545505280 unmapped: 72916992 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 418 ms_handle_reset con 0x559e32232400 session 0x559e32135c20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545521664 unmapped: 72900608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 418 ms_handle_reset con 0x559e31fe4400 session 0x559e363d50e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 418 ms_handle_reset con 0x559e3493e400 session 0x559e348eb2c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 545529856 unmapped: 72892416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534167552 unmapped: 84254720 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1967728647' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 418 ms_handle_reset con 0x559e3493e400 session 0x559e346b23c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534167552 unmapped: 84254720 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5107630 data_alloc: 218103808 data_used: 3715072
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534167552 unmapped: 84254720 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 418 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f4fc2/0x1319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 418 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f4fc2/0x1319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 418 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f4fc2/0x1319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5107630 data_alloc: 218103808 data_used: 3715072
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.582468033s of 14.561615944s, submitted: 68
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534175744 unmapped: 84246528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534183936 unmapped: 84238336 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534192128 unmapped: 84230144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e31fe4400 session 0x559e32ddb4a0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534200320 unmapped: 84221952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111628 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534208512 unmapped: 84213760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e321a6800 session 0x559e321f3e00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534208512 unmapped: 84213760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e32232400 session 0x559e355a5860
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534208512 unmapped: 84213760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e32f09400 session 0x559e321c8000
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 48.362262726s of 48.373008728s, submitted: 11
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534241280 unmapped: 84180992 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e31fe4400 session 0x559e363f4780
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 84172800 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5115500 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 84172800 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 84172800 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e3d000/0x0/0x1bfc00000, data 0x111ab10/0x1341000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 84172800 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534257664 unmapped: 84164608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e321a6800 session 0x559e363f4780
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e32232400 session 0x559e355a5860
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e3493e400 session 0x559e32ddb4a0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534257664 unmapped: 84164608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111667 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534257664 unmapped: 84164608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534257664 unmapped: 84164608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e3493f800 session 0x559e363d50e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534257664 unmapped: 84164608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 84156416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 84156416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111667 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 84156416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e3493f800 session 0x559e32135c20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 84156416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534265856 unmapped: 84156416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e62000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e31fe4400 session 0x559e35ba8780
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e321a6800 session 0x559e321db0e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5111667 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.037994385s of 17.063156128s, submitted: 6
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e32232400 session 0x559e321fa1e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 74K writes, 304K keys, 74K commit groups, 1.0 writes per commit group, ingest: 0.31 GB, 0.05 MB/s#012Cumulative WAL: 74K writes, 27K syncs, 2.73 writes per sync, written: 0.31 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2624 writes, 10K keys, 2624 commit groups, 1.0 writes per commit group, ingest: 10.09 MB, 0.02 MB/s#012Interval WAL: 2624 writes, 1007 syncs, 2.61 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559e305a74b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559e305a74b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b11/0x131d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b11/0x131d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e3493e400 session 0x559e329d43c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534274048 unmapped: 84148224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5112727 data_alloc: 218103808 data_used: 3723264
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e3493e400 session 0x559e363f43c0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198e61000/0x0/0x1bfc00000, data 0x10f6b01/0x131c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534282240 unmapped: 84140032 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e31fe4400 session 0x559e34696960
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534601728 unmapped: 83820544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 ms_handle_reset con 0x559e321a6800 session 0x559e346b30e0
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534601728 unmapped: 83820544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 heartbeat osd_stat(store_statfs(0x198ab1000/0x0/0x1bfc00000, data 0x14a6b11/0x16cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534609920 unmapped: 83812352 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e32232400 session 0x559e34697860
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534568960 unmapped: 83853312 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5153874 data_alloc: 218103808 data_used: 3735552
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534568960 unmapped: 83853312 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534568960 unmapped: 83853312 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 420 heartbeat osd_stat(store_statfs(0x198aae000/0x0/0x1bfc00000, data 0x14a876a/0x16d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 83845120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 83845120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 83845120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5153874 data_alloc: 218103808 data_used: 3735552
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 83845120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 420 heartbeat osd_stat(store_statfs(0x198aae000/0x0/0x1bfc00000, data 0x14a876a/0x16d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.734244347s of 15.983036041s, submitted: 33
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534577152 unmapped: 83845120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e3689f000 session 0x559e32ddbe00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e3493f800 session 0x559e321f3860
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534585344 unmapped: 83836928 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534585344 unmapped: 83836928 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534593536 unmapped: 83828736 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5157176 data_alloc: 218103808 data_used: 3735552
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 420 heartbeat osd_stat(store_statfs(0x198aad000/0x0/0x1bfc00000, data 0x14a877a/0x16d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534593536 unmapped: 83828736 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e3689f000 session 0x559e321fbe00
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 420 heartbeat osd_stat(store_statfs(0x198aad000/0x0/0x1bfc00000, data 0x14a877a/0x16d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534593536 unmapped: 83828736 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e31fe4400 session 0x559e31c0fc20
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e321a6800 session 0x559e363f5680
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534593536 unmapped: 83828736 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:12 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e32232400 session 0x559e31f134a0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534593536 unmapped: 83828736 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534601728 unmapped: 83820544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5184599 data_alloc: 218103808 data_used: 7045120
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 420 heartbeat osd_stat(store_statfs(0x198aac000/0x0/0x1bfc00000, data 0x14a878a/0x16d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534601728 unmapped: 83820544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e32232400 session 0x559e346961e0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e31fe4400 session 0x559e346cbc20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534601728 unmapped: 83820544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.108129501s of 10.257454872s, submitted: 18
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e321a6800 session 0x559e34420960
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 420 heartbeat osd_stat(store_statfs(0x198aad000/0x0/0x1bfc00000, data 0x14a877a/0x16d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534609920 unmapped: 83812352 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e3493f800 session 0x559e32d832c0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e3689f000 session 0x559e346b2000
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535658496 unmapped: 82763776 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 420 ms_handle_reset con 0x559e31fe4400 session 0x559e34562f00
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535666688 unmapped: 82755584 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5183696 data_alloc: 218103808 data_used: 7307264
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 421 ms_handle_reset con 0x559e321a6800 session 0x559e321d43c0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535674880 unmapped: 82747392 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 421 ms_handle_reset con 0x559e32232400 session 0x559e363d4780
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535691264 unmapped: 82731008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 421 ms_handle_reset con 0x559e3493f800 session 0x559e321f10e0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 421 heartbeat osd_stat(store_statfs(0x198e5b000/0x0/0x1bfc00000, data 0x10fa407/0x1322000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535691264 unmapped: 82731008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535691264 unmapped: 82731008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535642112 unmapped: 82780160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5128019 data_alloc: 218103808 data_used: 3743744
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535707648 unmapped: 82714624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535724032 unmapped: 82698240 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.888588905s of 10.895521164s, submitted: 223
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 421 ms_handle_reset con 0x559e3493e400 session 0x559e31c10000
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535724032 unmapped: 82698240 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 421 heartbeat osd_stat(store_statfs(0x198e5c000/0x0/0x1bfc00000, data 0x10fa407/0x1322000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [0,1])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 421 ms_handle_reset con 0x559e31fe4400 session 0x559e31f13a40
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535773184 unmapped: 82649088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535773184 unmapped: 82649088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5127947 data_alloc: 218103808 data_used: 3743744
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198e58000/0x0/0x1bfc00000, data 0x10fbf46/0x1325000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e321a6800 session 0x559e321c9c20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198e58000/0x0/0x1bfc00000, data 0x10fbf46/0x1325000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e32232400 session 0x559e3138f680
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5132121 data_alloc: 218103808 data_used: 3751936
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198e58000/0x0/0x1bfc00000, data 0x10fbf46/0x1325000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e3493f800 session 0x559e3732c3c0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.618832588s of 10.136927605s, submitted: 109
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535789568 unmapped: 82632704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e3691d000 session 0x559e3487ba40
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535732224 unmapped: 82690048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e3691d000 session 0x559e31f8fe00
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535732224 unmapped: 82690048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5220801 data_alloc: 218103808 data_used: 3751936
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198327000/0x0/0x1bfc00000, data 0x1c2df46/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198327000/0x0/0x1bfc00000, data 0x1c2df46/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5220801 data_alloc: 218103808 data_used: 3751936
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198327000/0x0/0x1bfc00000, data 0x1c2df46/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5220801 data_alloc: 218103808 data_used: 3751936
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198327000/0x0/0x1bfc00000, data 0x1c2df46/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5220801 data_alloc: 218103808 data_used: 3751936
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e31fe4400 session 0x559e32d82b40
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x198327000/0x0/0x1bfc00000, data 0x1c2df46/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e321a6800 session 0x559e346445a0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535740416 unmapped: 82681856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e32232400 session 0x559e32134000
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.890636444s of 19.963485718s, submitted: 15
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 ms_handle_reset con 0x559e3493f800 session 0x559e31c10780
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536092672 unmapped: 82329600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536092672 unmapped: 82329600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536109056 unmapped: 82313216 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5265161 data_alloc: 218103808 data_used: 9170944
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x1982fc000/0x0/0x1bfc00000, data 0x1c57f56/0x1e82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5307721 data_alloc: 234881024 data_used: 15204352
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x1982fc000/0x0/0x1bfc00000, data 0x1c57f56/0x1e82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x1982fc000/0x0/0x1bfc00000, data 0x1c57f56/0x1e82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537264128 unmapped: 81158144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5307721 data_alloc: 234881024 data_used: 15204352
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.728299141s of 12.746973038s, submitted: 3
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540131328 unmapped: 78290944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539566080 unmapped: 78856192 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 539566080 unmapped: 78856192 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x197b97000/0x0/0x1bfc00000, data 0x23bcf56/0x25e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5377333 data_alloc: 234881024 data_used: 16367616
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x197b97000/0x0/0x1bfc00000, data 0x23bcf56/0x25e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x197b97000/0x0/0x1bfc00000, data 0x23bcf56/0x25e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5378173 data_alloc: 234881024 data_used: 16367616
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540000256 unmapped: 78422016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.133055687s of 11.368740082s, submitted: 81
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540008448 unmapped: 78413824 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 heartbeat osd_stat(store_statfs(0x197b75000/0x0/0x1bfc00000, data 0x23def56/0x2609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e32232400 session 0x559e34697680
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e321a6800 session 0x559e346b3a40
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b71000/0x0/0x1bfc00000, data 0x23e0d01/0x260c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5383844 data_alloc: 234881024 data_used: 16375808
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b6d000/0x0/0x1bfc00000, data 0x259ad01/0x2611000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b6d000/0x0/0x1bfc00000, data 0x259ad01/0x2611000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5396572 data_alloc: 234881024 data_used: 16375808
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b6d000/0x0/0x1bfc00000, data 0x259ad01/0x2611000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.175535202s of 11.225886345s, submitted: 12
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b6a000/0x0/0x1bfc00000, data 0x259dd01/0x2614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3ac5a400 session 0x559e3487ad20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3691d000 session 0x559e31c101e0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540016640 unmapped: 78405632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b69000/0x0/0x1bfc00000, data 0x259dd63/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540024832 unmapped: 78397440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5399792 data_alloc: 234881024 data_used: 16375808
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540024832 unmapped: 78397440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540024832 unmapped: 78397440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e32f09c00 session 0x559e34696b40
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b69000/0x0/0x1bfc00000, data 0x259dd63/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540033024 unmapped: 78389248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e321a6800 session 0x559e355a4f00
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540033024 unmapped: 78389248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e32232400 session 0x559e355a4780
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3691d000 session 0x559e355a5a40
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540041216 unmapped: 78381056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5401113 data_alloc: 234881024 data_used: 16375808
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540041216 unmapped: 78381056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b69000/0x0/0x1bfc00000, data 0x259dd63/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540041216 unmapped: 78381056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540049408 unmapped: 78372864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540049408 unmapped: 78372864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.440385818s of 11.490693092s, submitted: 14
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b69000/0x0/0x1bfc00000, data 0x259dd63/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5406917 data_alloc: 234881024 data_used: 16429056
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5406917 data_alloc: 234881024 data_used: 16429056
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5416031 data_alloc: 234881024 data_used: 18796544
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.040010452s of 15.073230743s, submitted: 10
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5415935 data_alloc: 234881024 data_used: 18796544
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5415935 data_alloc: 234881024 data_used: 18796544
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5415935 data_alloc: 234881024 data_used: 18796544
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540057600 unmapped: 78364672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 78356480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 78356480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 78356480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5415935 data_alloc: 234881024 data_used: 18796544
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 78356480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 78356480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.186006546s of 18.193222046s, submitted: 2
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3ac5a400 session 0x559e32e67c20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3a5dd000 session 0x559e380b43c0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540065792 unmapped: 78356480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540073984 unmapped: 78348288 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e321a6800 session 0x559e32e45680
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540090368 unmapped: 78331904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5414851 data_alloc: 234881024 data_used: 18800640
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540090368 unmapped: 78331904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d63/0x2619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540090368 unmapped: 78331904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540090368 unmapped: 78331904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e32232400 session 0x559e348ebc20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540098560 unmapped: 78323712 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3691d000 session 0x559e31fcb680
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540114944 unmapped: 78307328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5413563 data_alloc: 234881024 data_used: 18796544
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d01/0x2618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540114944 unmapped: 78307328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x25a2d01/0x2618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540114944 unmapped: 78307328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 ms_handle_reset con 0x559e3a5dd000 session 0x559e345630e0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540114944 unmapped: 78307328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540114944 unmapped: 78307328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.828241348s of 11.966302872s, submitted: 44
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540123136 unmapped: 78299136 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5409117 data_alloc: 234881024 data_used: 18804736
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 424 ms_handle_reset con 0x559e3ac5a400 session 0x559e346b30e0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540123136 unmapped: 78299136 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 424 ms_handle_reset con 0x559e3493f800 session 0x559e321d4f00
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 424 ms_handle_reset con 0x559e31fe4400 session 0x559e3138f680
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540123136 unmapped: 78299136 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 424 heartbeat osd_stat(store_statfs(0x197b65000/0x0/0x1bfc00000, data 0x23eb9ae/0x2618000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540131328 unmapped: 78290944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 424 heartbeat osd_stat(store_statfs(0x197b67000/0x0/0x1bfc00000, data 0x23eb99e/0x2617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540139520 unmapped: 78282752 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 424 ms_handle_reset con 0x559e321a6800 session 0x559e346b25a0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540139520 unmapped: 78282752 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5399798 data_alloc: 234881024 data_used: 18669568
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 424 heartbeat osd_stat(store_statfs(0x197b91000/0x0/0x1bfc00000, data 0x23c199e/0x25ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540155904 unmapped: 78266368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540155904 unmapped: 78266368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540155904 unmapped: 78266368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 425 ms_handle_reset con 0x559e32232400 session 0x559e31c10000
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540155904 unmapped: 78266368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540155904 unmapped: 78266368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5403972 data_alloc: 234881024 data_used: 18677760
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 425 heartbeat osd_stat(store_statfs(0x197b8d000/0x0/0x1bfc00000, data 0x23c34dd/0x25f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 540155904 unmapped: 78266368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.849381447s of 11.910986900s, submitted: 28
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 425 heartbeat osd_stat(store_statfs(0x198e50000/0x0/0x1bfc00000, data 0x11014dd/0x132e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 425 ms_handle_reset con 0x559e3691d000 session 0x559e321c9c20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 425 heartbeat osd_stat(store_statfs(0x198e50000/0x0/0x1bfc00000, data 0x11014dd/0x132e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5159680 data_alloc: 218103808 data_used: 3776512
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 425 heartbeat osd_stat(store_statfs(0x198e50000/0x0/0x1bfc00000, data 0x11014dd/0x132e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534380544 unmapped: 84041728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 426 heartbeat osd_stat(store_statfs(0x198e4c000/0x0/0x1bfc00000, data 0x110318a/0x1331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5163854 data_alloc: 218103808 data_used: 3784704
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 426 ms_handle_reset con 0x559e31fe4400 session 0x559e380b54a0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 426 heartbeat osd_stat(store_statfs(0x198e4d000/0x0/0x1bfc00000, data 0x110318a/0x1331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5162542 data_alloc: 218103808 data_used: 3788800
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 426 heartbeat osd_stat(store_statfs(0x198e4d000/0x0/0x1bfc00000, data 0x110318a/0x1331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.663026810s of 14.793736458s, submitted: 42
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198e49000/0x0/0x1bfc00000, data 0x1104cc9/0x1334000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5166716 data_alloc: 218103808 data_used: 3796992
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198e49000/0x0/0x1bfc00000, data 0x1104cc9/0x1334000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e321a6800 session 0x559e3732c960
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e32232400 session 0x559e321c8d20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3493f800 session 0x559e32ee5860
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5169092 data_alloc: 218103808 data_used: 3796992
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534396928 unmapped: 84025344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198e49000/0x0/0x1bfc00000, data 0x1104d2b/0x1335000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534405120 unmapped: 84017152 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198e49000/0x0/0x1bfc00000, data 0x1104d2b/0x1335000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.876494408s of 10.890866280s, submitted: 12
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3a5dd000 session 0x559e3732da40
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534413312 unmapped: 84008960 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198e49000/0x0/0x1bfc00000, data 0x1104d2b/0x1335000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e31fe4400 session 0x559e34562f00
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534421504 unmapped: 84000768 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e321a6800 session 0x559e31c10000
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534503424 unmapped: 83918848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e32232400 session 0x559e346b30e0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5199550 data_alloc: 218103808 data_used: 3796992
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534503424 unmapped: 83918848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534503424 unmapped: 83918848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534503424 unmapped: 83918848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534503424 unmapped: 83918848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b5f000/0x0/0x1bfc00000, data 0x13efcc9/0x161f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5199550 data_alloc: 218103808 data_used: 3796992
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b5f000/0x0/0x1bfc00000, data 0x13efcc9/0x161f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b5f000/0x0/0x1bfc00000, data 0x13efcc9/0x161f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5199550 data_alloc: 218103808 data_used: 3796992
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534511616 unmapped: 83910656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b5f000/0x0/0x1bfc00000, data 0x13efcc9/0x161f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5199550 data_alloc: 218103808 data_used: 3796992
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b5f000/0x0/0x1bfc00000, data 0x13efcc9/0x161f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3493f800 session 0x559e31fcb680
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3dcf6000 session 0x559e348ebc20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3dcf6000 session 0x559e380b43c0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.670154572s of 21.787570953s, submitted: 36
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534519808 unmapped: 83902464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e31fe4400 session 0x559e32e67c20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5204921 data_alloc: 218103808 data_used: 3796992
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534528000 unmapped: 83894272 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b3a000/0x0/0x1bfc00000, data 0x1413cd9/0x1644000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5219613 data_alloc: 218103808 data_used: 5652480
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b3a000/0x0/0x1bfc00000, data 0x1413cd9/0x1644000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b3a000/0x0/0x1bfc00000, data 0x1413cd9/0x1644000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5219613 data_alloc: 218103808 data_used: 5652480
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x198b3a000/0x0/0x1bfc00000, data 0x1413cd9/0x1644000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534544384 unmapped: 83877888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.476041794s of 14.511431694s, submitted: 9
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537731072 unmapped: 80691200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5290733 data_alloc: 218103808 data_used: 6332416
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x1982a1000/0x0/0x1bfc00000, data 0x1cabcd9/0x1edc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5294823 data_alloc: 218103808 data_used: 6479872
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537755648 unmapped: 80666624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829f000/0x0/0x1bfc00000, data 0x1caecd9/0x1edf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e321a6800 session 0x559e31c101e0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e32232400 session 0x559e34697680
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537763840 unmapped: 80658432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.910843849s of 10.110754967s, submitted: 90
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3493f800 session 0x559e329d45a0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5283446 data_alloc: 218103808 data_used: 6369280
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x1982c3000/0x0/0x1bfc00000, data 0x1c8bcc9/0x1ebb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5283446 data_alloc: 218103808 data_used: 6369280
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537780224 unmapped: 80642048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x1982c3000/0x0/0x1bfc00000, data 0x1c8bcc9/0x1ebb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5283446 data_alloc: 218103808 data_used: 6369280
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e31fe4400 session 0x559e348eb4a0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x1982c3000/0x0/0x1bfc00000, data 0x1c8bcc9/0x1ebb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e321a6800 session 0x559e348eb680
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537788416 unmapped: 80633856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e32232400 session 0x559e32e15a40
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.753578186s of 15.813142776s, submitted: 17
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 80478208 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 ms_handle_reset con 0x559e3dcf6000 session 0x559e321c85a0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5288358 data_alloc: 218103808 data_used: 6373376
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 537944064 unmapped: 80478208 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536797184 unmapped: 81625088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536797184 unmapped: 81625088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536797184 unmapped: 81625088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536797184 unmapped: 81625088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5289318 data_alloc: 218103808 data_used: 6451200
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5289318 data_alloc: 218103808 data_used: 6451200
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.299868584s of 12.453051567s, submitted: 3
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5306294 data_alloc: 218103808 data_used: 8187904
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5306294 data_alloc: 218103808 data_used: 8187904
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5309654 data_alloc: 218103808 data_used: 8790016
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 heartbeat osd_stat(store_statfs(0x19829e000/0x0/0x1bfc00000, data 0x1cafcd8/0x1ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536772608 unmapped: 81649664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.314167023s of 14.153972626s, submitted: 2
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 handle_osd_map epochs [428,428], i have 428, src has [1,428]
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e34c1c400 session 0x559e3487ad20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536797184 unmapped: 81625088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536797184 unmapped: 81625088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e321a6800 session 0x559e346b21e0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e31fe4400 session 0x559e32134960
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5356888 data_alloc: 218103808 data_used: 8798208
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536805376 unmapped: 81616896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198294000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536813568 unmapped: 81608704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536813568 unmapped: 81608704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536821760 unmapped: 81600512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198294000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536821760 unmapped: 81600512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5356888 data_alloc: 218103808 data_used: 8798208
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e32232400 session 0x559e32a00f00
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e3dcf6000 session 0x559e32e66780
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198294000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e35292800 session 0x559e321c8960
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.009097099s of 11.039314270s, submitted: 5
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e31fe4400 session 0x559e32e672c0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198294000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198295000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5357093 data_alloc: 218103808 data_used: 8802304
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536829952 unmapped: 81592320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5357093 data_alloc: 218103808 data_used: 8802304
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536838144 unmapped: 81584128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198295000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536846336 unmapped: 81575936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536846336 unmapped: 81575936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536846336 unmapped: 81575936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x198295000/0x0/0x1bfc00000, data 0x21e8993/0x1ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 536854528 unmapped: 81567744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.068496704s of 13.099369049s, submitted: 7
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5387847 data_alloc: 218103808 data_used: 9965568
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801c000/0x0/0x1bfc00000, data 0x2461993/0x2162000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801c000/0x0/0x1bfc00000, data 0x2461993/0x2162000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801c000/0x0/0x1bfc00000, data 0x2461993/0x2162000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5388487 data_alloc: 218103808 data_used: 9981952
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534904832 unmapped: 83517440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801c000/0x0/0x1bfc00000, data 0x2461993/0x2162000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534913024 unmapped: 83509248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.394811630s of 10.438727379s, submitted: 7
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5389671 data_alloc: 218103808 data_used: 9981952
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534913024 unmapped: 83509248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534913024 unmapped: 83509248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534913024 unmapped: 83509248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534921216 unmapped: 83501056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x19801b000/0x0/0x1bfc00000, data 0x2461993/0x2163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534921216 unmapped: 83501056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5398279 data_alloc: 218103808 data_used: 9981952
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535044096 unmapped: 83378176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535044096 unmapped: 83378176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535044096 unmapped: 83378176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535101440 unmapped: 83320832 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x197fad000/0x0/0x1bfc00000, data 0x24cf993/0x21d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 535101440 unmapped: 83320832 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x197fad000/0x0/0x1bfc00000, data 0x24cf993/0x21d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5396103 data_alloc: 218103808 data_used: 9981952
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534642688 unmapped: 83779584 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534642688 unmapped: 83779584 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534650880 unmapped: 83771392 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x197fad000/0x0/0x1bfc00000, data 0x24cf993/0x21d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534650880 unmapped: 83771392 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534650880 unmapped: 83771392 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5397863 data_alloc: 218103808 data_used: 10248192
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534650880 unmapped: 83771392 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.697258949s of 15.726952553s, submitted: 6
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e321a6800 session 0x559e363d4960
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e32232400 session 0x559e34562d20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e3dcf6000 session 0x559e3732cd20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534650880 unmapped: 83771392 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534659072 unmapped: 83763200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534659072 unmapped: 83763200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 heartbeat osd_stat(store_statfs(0x197fad000/0x0/0x1bfc00000, data 0x24cf993/0x21d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534659072 unmapped: 83763200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e379c5000 session 0x559e3487b680
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5396787 data_alloc: 218103808 data_used: 10248192
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e31fe4400 session 0x559e380b52c0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534659072 unmapped: 83763200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 ms_handle_reset con 0x559e321a6800 session 0x559e346b3860
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534659072 unmapped: 83763200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 429 ms_handle_reset con 0x559e32232400 session 0x559e363f41e0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534667264 unmapped: 83755008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534667264 unmapped: 83755008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 429 heartbeat osd_stat(store_statfs(0x198298000/0x0/0x1bfc00000, data 0x1cb35de/0x1ee6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534667264 unmapped: 83755008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5346385 data_alloc: 218103808 data_used: 9920512
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534667264 unmapped: 83755008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 429 ms_handle_reset con 0x559e36898400 session 0x559e321c92c0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 429 ms_handle_reset con 0x559e3ac5a000 session 0x559e346452c0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534667264 unmapped: 83755008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.201784134s of 11.308134079s, submitted: 33
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534675456 unmapped: 83746816 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 429 heartbeat osd_stat(store_statfs(0x1982bd000/0x0/0x1bfc00000, data 0x1c8f5cf/0x1ec1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 429 ms_handle_reset con 0x559e3ac5a000 session 0x559e344210e0
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534675456 unmapped: 83746816 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534675456 unmapped: 83746816 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5339505 data_alloc: 218103808 data_used: 9814016
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534675456 unmapped: 83746816 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x1982b9000/0x0/0x1bfc00000, data 0x1c9110e/0x1ec4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 83730432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 ms_handle_reset con 0x559e31fe4400 session 0x559e3732dc20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 534691840 unmapped: 83730432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 ms_handle_reset con 0x559e321a6800 session 0x559e3732de00
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531636224 unmapped: 86786048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531636224 unmapped: 86786048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531644416 unmapped: 86777856 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531652608 unmapped: 86769664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531726336 unmapped: 86695936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531726336 unmapped: 86695936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531726336 unmapped: 86695936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531726336 unmapped: 86695936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531734528 unmapped: 86687744 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531791872 unmapped: 86630400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'config diff' '{prefix=config diff}'
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'config show' '{prefix=config show}'
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'counter dump' '{prefix=counter dump}'
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'counter schema' '{prefix=counter schema}'
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531070976 unmapped: 87351296 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530735104 unmapped: 87687168 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'log dump' '{prefix=log dump}'
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 541810688 unmapped: 76611584 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'perf dump' '{prefix=perf dump}'
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'perf schema' '{prefix=perf schema}'
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 87662592 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 87662592 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 87654400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 87654400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 87654400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 87654400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 87654400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 87654400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 87654400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530776064 unmapped: 87646208 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530776064 unmapped: 87646208 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530776064 unmapped: 87646208 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530784256 unmapped: 87638016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530784256 unmapped: 87638016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 87621632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 87621632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 87621632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 87621632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 87621632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 87621632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 87621632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 87621632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530808832 unmapped: 87613440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530808832 unmapped: 87613440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530808832 unmapped: 87613440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530808832 unmapped: 87613440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530817024 unmapped: 87605248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530825216 unmapped: 87597056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530825216 unmapped: 87597056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530825216 unmapped: 87597056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530825216 unmapped: 87597056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530825216 unmapped: 87597056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530825216 unmapped: 87597056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 87588864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 87588864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 87588864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530841600 unmapped: 87580672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530841600 unmapped: 87580672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530849792 unmapped: 87572480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530849792 unmapped: 87572480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530849792 unmapped: 87572480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530849792 unmapped: 87572480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530857984 unmapped: 87564288 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530857984 unmapped: 87564288 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530857984 unmapped: 87564288 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530866176 unmapped: 87556096 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530866176 unmapped: 87556096 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 87547904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 87547904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 87547904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 87547904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 87547904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 87547904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 87547904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 87547904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530882560 unmapped: 87539712 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530882560 unmapped: 87539712 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530882560 unmapped: 87539712 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530890752 unmapped: 87531520 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530890752 unmapped: 87531520 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530890752 unmapped: 87531520 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530890752 unmapped: 87531520 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530898944 unmapped: 87523328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530898944 unmapped: 87523328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530898944 unmapped: 87523328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530898944 unmapped: 87523328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530915328 unmapped: 87506944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530915328 unmapped: 87506944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530915328 unmapped: 87506944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530915328 unmapped: 87506944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530915328 unmapped: 87506944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530915328 unmapped: 87506944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530915328 unmapped: 87506944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530915328 unmapped: 87506944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530915328 unmapped: 87506944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530915328 unmapped: 87506944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530915328 unmapped: 87506944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530915328 unmapped: 87506944 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530923520 unmapped: 87498752 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530923520 unmapped: 87498752 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530923520 unmapped: 87498752 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530931712 unmapped: 87490560 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530931712 unmapped: 87490560 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530931712 unmapped: 87490560 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530931712 unmapped: 87490560 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530931712 unmapped: 87490560 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530939904 unmapped: 87482368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530948096 unmapped: 87474176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530948096 unmapped: 87474176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530948096 unmapped: 87474176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530948096 unmapped: 87474176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530948096 unmapped: 87474176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530948096 unmapped: 87474176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530956288 unmapped: 87465984 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530964480 unmapped: 87457792 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530964480 unmapped: 87457792 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530964480 unmapped: 87457792 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530972672 unmapped: 87449600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530972672 unmapped: 87449600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530972672 unmapped: 87449600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530972672 unmapped: 87449600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530972672 unmapped: 87449600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530972672 unmapped: 87449600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530980864 unmapped: 87441408 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530980864 unmapped: 87441408 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530989056 unmapped: 87433216 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530989056 unmapped: 87433216 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530989056 unmapped: 87433216 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530989056 unmapped: 87433216 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530989056 unmapped: 87433216 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 530989056 unmapped: 87433216 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531005440 unmapped: 87416832 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531005440 unmapped: 87416832 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 87408640 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 87408640 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 87408640 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 87408640 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 87408640 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531021824 unmapped: 87400448 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531021824 unmapped: 87400448 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531021824 unmapped: 87400448 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 87384064 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 87384064 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 87384064 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 87384064 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 87384064 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 87384064 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 87384064 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531038208 unmapped: 87384064 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531054592 unmapped: 87367680 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531054592 unmapped: 87367680 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531054592 unmapped: 87367680 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531054592 unmapped: 87367680 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531054592 unmapped: 87367680 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 87359488 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 87359488 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 87359488 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 87359488 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 87359488 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531062784 unmapped: 87359488 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531079168 unmapped: 87343104 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531079168 unmapped: 87343104 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.1 total, 600.0 interval
Cumulative writes: 76K writes, 309K keys, 76K commit groups, 1.0 writes per commit group, ingest: 0.31 GB, 0.05 MB/s
Cumulative WAL: 76K writes, 28K syncs, 2.72 writes per sync, written: 0.31 GB, 0.05 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1617 writes, 4926 keys, 1617 commit groups, 1.0 writes per commit group, ingest: 3.44 MB, 0.01 MB/s
Interval WAL: 1617 writes, 737 syncs, 2.19 writes per sync, written: 0.00 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531079168 unmapped: 87343104 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531079168 unmapped: 87343104 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531079168 unmapped: 87343104 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 87334912 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 87334912 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 87334912 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 87334912 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 87334912 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 87334912 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 87334912 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531087360 unmapped: 87334912 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531103744 unmapped: 87318528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531103744 unmapped: 87318528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531103744 unmapped: 87318528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531111936 unmapped: 87310336 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531111936 unmapped: 87310336 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531111936 unmapped: 87310336 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531111936 unmapped: 87310336 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531111936 unmapped: 87310336 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531120128 unmapped: 87302144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531120128 unmapped: 87302144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531120128 unmapped: 87302144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531120128 unmapped: 87302144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531128320 unmapped: 87293952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531136512 unmapped: 87285760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531136512 unmapped: 87285760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531136512 unmapped: 87285760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531136512 unmapped: 87285760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531136512 unmapped: 87285760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531136512 unmapped: 87285760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531136512 unmapped: 87285760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531136512 unmapped: 87285760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531136512 unmapped: 87285760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198e41000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25a7f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531152896 unmapped: 87269376 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 287.072692871s of 287.114166260s, submitted: 20
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531161088 unmapped: 87261184 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531177472 unmapped: 87244800 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531193856 unmapped: 87228416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531226624 unmapped: 87195648 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531234816 unmapped: 87187456 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531243008 unmapped: 87179264 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531243008 unmapped: 87179264 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531275776 unmapped: 87146496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531283968 unmapped: 87138304 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531292160 unmapped: 87130112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531300352 unmapped: 87121920 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531300352 unmapped: 87121920 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531300352 unmapped: 87121920 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531300352 unmapped: 87121920 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531300352 unmapped: 87121920 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531300352 unmapped: 87121920 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531300352 unmapped: 87121920 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531300352 unmapped: 87121920 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531308544 unmapped: 87113728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531308544 unmapped: 87113728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531308544 unmapped: 87113728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531308544 unmapped: 87113728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531308544 unmapped: 87113728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531308544 unmapped: 87113728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531308544 unmapped: 87113728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531308544 unmapped: 87113728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531316736 unmapped: 87105536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531316736 unmapped: 87105536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531316736 unmapped: 87105536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531316736 unmapped: 87105536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531316736 unmapped: 87105536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531324928 unmapped: 87097344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531324928 unmapped: 87097344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531324928 unmapped: 87097344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531333120 unmapped: 87089152 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531333120 unmapped: 87089152 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531333120 unmapped: 87089152 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531333120 unmapped: 87089152 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531333120 unmapped: 87089152 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531333120 unmapped: 87089152 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531333120 unmapped: 87089152 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531341312 unmapped: 87080960 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531341312 unmapped: 87080960 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531341312 unmapped: 87080960 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531341312 unmapped: 87080960 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531341312 unmapped: 87080960 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531341312 unmapped: 87080960 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531341312 unmapped: 87080960 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531341312 unmapped: 87080960 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531349504 unmapped: 87072768 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531349504 unmapped: 87072768 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531349504 unmapped: 87072768 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531357696 unmapped: 87064576 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531357696 unmapped: 87064576 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531357696 unmapped: 87064576 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531357696 unmapped: 87064576 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531357696 unmapped: 87064576 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531365888 unmapped: 87056384 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531365888 unmapped: 87056384 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531374080 unmapped: 87048192 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531374080 unmapped: 87048192 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531374080 unmapped: 87048192 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531382272 unmapped: 87040000 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531382272 unmapped: 87040000 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531382272 unmapped: 87040000 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531382272 unmapped: 87040000 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531390464 unmapped: 87031808 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531390464 unmapped: 87031808 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531390464 unmapped: 87031808 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531390464 unmapped: 87031808 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531398656 unmapped: 87023616 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531398656 unmapped: 87023616 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531398656 unmapped: 87023616 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531398656 unmapped: 87023616 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531406848 unmapped: 87015424 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531406848 unmapped: 87015424 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531406848 unmapped: 87015424 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531406848 unmapped: 87015424 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531406848 unmapped: 87015424 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531406848 unmapped: 87015424 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531406848 unmapped: 87015424 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531406848 unmapped: 87015424 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531415040 unmapped: 87007232 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531415040 unmapped: 87007232 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531423232 unmapped: 86999040 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531431424 unmapped: 86990848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531431424 unmapped: 86990848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531431424 unmapped: 86990848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531431424 unmapped: 86990848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531439616 unmapped: 86982656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531439616 unmapped: 86982656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531439616 unmapped: 86982656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531439616 unmapped: 86982656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531439616 unmapped: 86982656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531439616 unmapped: 86982656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531447808 unmapped: 86974464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531447808 unmapped: 86974464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531447808 unmapped: 86974464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531447808 unmapped: 86974464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531447808 unmapped: 86974464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531447808 unmapped: 86974464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531447808 unmapped: 86974464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531464192 unmapped: 86958080 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531464192 unmapped: 86958080 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531464192 unmapped: 86958080 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531464192 unmapped: 86958080 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531472384 unmapped: 86949888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531472384 unmapped: 86949888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531472384 unmapped: 86949888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531472384 unmapped: 86949888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531472384 unmapped: 86949888 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531480576 unmapped: 86941696 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531480576 unmapped: 86941696 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531480576 unmapped: 86941696 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531488768 unmapped: 86933504 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531488768 unmapped: 86933504 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531488768 unmapped: 86933504 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531488768 unmapped: 86933504 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531488768 unmapped: 86933504 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531488768 unmapped: 86933504 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531488768 unmapped: 86933504 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531505152 unmapped: 86917120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531505152 unmapped: 86917120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531505152 unmapped: 86917120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531505152 unmapped: 86917120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531505152 unmapped: 86917120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531505152 unmapped: 86917120 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531513344 unmapped: 86908928 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531513344 unmapped: 86908928 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531513344 unmapped: 86908928 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531513344 unmapped: 86908928 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531513344 unmapped: 86908928 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531513344 unmapped: 86908928 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531513344 unmapped: 86908928 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531521536 unmapped: 86900736 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531521536 unmapped: 86900736 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531529728 unmapped: 86892544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531529728 unmapped: 86892544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531546112 unmapped: 86876160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531546112 unmapped: 86876160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531546112 unmapped: 86876160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531546112 unmapped: 86876160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531546112 unmapped: 86876160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531546112 unmapped: 86876160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531546112 unmapped: 86876160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531546112 unmapped: 86876160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531554304 unmapped: 86867968 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531554304 unmapped: 86867968 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531554304 unmapped: 86867968 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531554304 unmapped: 86867968 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531554304 unmapped: 86867968 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531554304 unmapped: 86867968 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531554304 unmapped: 86867968 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531570688 unmapped: 86851584 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531570688 unmapped: 86851584 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531570688 unmapped: 86851584 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531570688 unmapped: 86851584 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531570688 unmapped: 86851584 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531570688 unmapped: 86851584 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531578880 unmapped: 86843392 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531587072 unmapped: 86835200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531587072 unmapped: 86835200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531587072 unmapped: 86835200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531587072 unmapped: 86835200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531587072 unmapped: 86835200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531587072 unmapped: 86835200 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531595264 unmapped: 86827008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531595264 unmapped: 86827008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531595264 unmapped: 86827008 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531603456 unmapped: 86818816 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531603456 unmapped: 86818816 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531603456 unmapped: 86818816 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531611648 unmapped: 86810624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531611648 unmapped: 86810624 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531619840 unmapped: 86802432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531619840 unmapped: 86802432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531619840 unmapped: 86802432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531619840 unmapped: 86802432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531619840 unmapped: 86802432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531619840 unmapped: 86802432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531619840 unmapped: 86802432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531619840 unmapped: 86802432 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531628032 unmapped: 86794240 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531628032 unmapped: 86794240 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531628032 unmapped: 86794240 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531628032 unmapped: 86794240 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531628032 unmapped: 86794240 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531628032 unmapped: 86794240 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531636224 unmapped: 86786048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531636224 unmapped: 86786048 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531652608 unmapped: 86769664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531652608 unmapped: 86769664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531652608 unmapped: 86769664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531652608 unmapped: 86769664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531652608 unmapped: 86769664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531652608 unmapped: 86769664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531652608 unmapped: 86769664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531652608 unmapped: 86769664 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531660800 unmapped: 86761472 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531668992 unmapped: 86753280 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531677184 unmapped: 86745088 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531685376 unmapped: 86736896 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531693568 unmapped: 86728704 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531701760 unmapped: 86720512 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531709952 unmapped: 86712320 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531718144 unmapped: 86704128 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531726336 unmapped: 86695936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531726336 unmapped: 86695936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531726336 unmapped: 86695936 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531742720 unmapped: 86679552 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531750912 unmapped: 86671360 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531759104 unmapped: 86663168 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531759104 unmapped: 86663168 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531759104 unmapped: 86663168 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531759104 unmapped: 86663168 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531759104 unmapped: 86663168 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531759104 unmapped: 86663168 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531759104 unmapped: 86663168 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531759104 unmapped: 86663168 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531775488 unmapped: 86646784 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531775488 unmapped: 86646784 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531775488 unmapped: 86646784 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531775488 unmapped: 86646784 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531775488 unmapped: 86646784 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531775488 unmapped: 86646784 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531775488 unmapped: 86646784 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531775488 unmapped: 86646784 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531791872 unmapped: 86630400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531791872 unmapped: 86630400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531791872 unmapped: 86630400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531791872 unmapped: 86630400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531791872 unmapped: 86630400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531791872 unmapped: 86630400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531791872 unmapped: 86630400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531791872 unmapped: 86630400 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531808256 unmapped: 86614016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531808256 unmapped: 86614016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531808256 unmapped: 86614016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531808256 unmapped: 86614016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531808256 unmapped: 86614016 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531816448 unmapped: 86605824 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531816448 unmapped: 86605824 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531816448 unmapped: 86605824 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531824640 unmapped: 86597632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531824640 unmapped: 86597632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531824640 unmapped: 86597632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531824640 unmapped: 86597632 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531832832 unmapped: 86589440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531832832 unmapped: 86589440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531832832 unmapped: 86589440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531832832 unmapped: 86589440 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531841024 unmapped: 86581248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531841024 unmapped: 86581248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531841024 unmapped: 86581248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531841024 unmapped: 86581248 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531849216 unmapped: 86573056 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531857408 unmapped: 86564864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531857408 unmapped: 86564864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531857408 unmapped: 86564864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531857408 unmapped: 86564864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531857408 unmapped: 86564864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531857408 unmapped: 86564864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531857408 unmapped: 86564864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531857408 unmapped: 86564864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531857408 unmapped: 86564864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531857408 unmapped: 86564864 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531865600 unmapped: 86556672 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531873792 unmapped: 86548480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531873792 unmapped: 86548480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531873792 unmapped: 86548480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531873792 unmapped: 86548480 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531890176 unmapped: 86532096 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531890176 unmapped: 86532096 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531890176 unmapped: 86532096 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531890176 unmapped: 86532096 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531890176 unmapped: 86532096 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531890176 unmapped: 86532096 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531890176 unmapped: 86532096 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531890176 unmapped: 86532096 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531898368 unmapped: 86523904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531898368 unmapped: 86523904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531898368 unmapped: 86523904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531898368 unmapped: 86523904 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531914752 unmapped: 86507520 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531914752 unmapped: 86507520 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531914752 unmapped: 86507520 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531914752 unmapped: 86507520 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531914752 unmapped: 86507520 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531914752 unmapped: 86507520 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531914752 unmapped: 86507520 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531914752 unmapped: 86507520 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531922944 unmapped: 86499328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531922944 unmapped: 86499328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531922944 unmapped: 86499328 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531931136 unmapped: 86491136 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531931136 unmapped: 86491136 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531931136 unmapped: 86491136 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531931136 unmapped: 86491136 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531931136 unmapped: 86491136 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531955712 unmapped: 86466560 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531955712 unmapped: 86466560 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531955712 unmapped: 86466560 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531955712 unmapped: 86466560 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531955712 unmapped: 86466560 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531955712 unmapped: 86466560 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531955712 unmapped: 86466560 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531955712 unmapped: 86466560 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531963904 unmapped: 86458368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531963904 unmapped: 86458368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531963904 unmapped: 86458368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531963904 unmapped: 86458368 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531972096 unmapped: 86450176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531972096 unmapped: 86450176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531972096 unmapped: 86450176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531972096 unmapped: 86450176 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531980288 unmapped: 86441984 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531980288 unmapped: 86441984 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531980288 unmapped: 86441984 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531980288 unmapped: 86441984 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531980288 unmapped: 86441984 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531980288 unmapped: 86441984 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531980288 unmapped: 86441984 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531996672 unmapped: 86425600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531996672 unmapped: 86425600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531996672 unmapped: 86425600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531996672 unmapped: 86425600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 531996672 unmapped: 86425600 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532004864 unmapped: 86417408 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532004864 unmapped: 86417408 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532004864 unmapped: 86417408 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532013056 unmapped: 86409216 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532021248 unmapped: 86401024 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532021248 unmapped: 86401024 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532021248 unmapped: 86401024 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532021248 unmapped: 86401024 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532029440 unmapped: 86392832 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532029440 unmapped: 86392832 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532029440 unmapped: 86392832 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532029440 unmapped: 86392832 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532037632 unmapped: 86384640 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532045824 unmapped: 86376448 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532045824 unmapped: 86376448 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532045824 unmapped: 86376448 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532045824 unmapped: 86376448 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532045824 unmapped: 86376448 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532045824 unmapped: 86376448 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532054016 unmapped: 86368256 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532054016 unmapped: 86368256 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532054016 unmapped: 86368256 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532054016 unmapped: 86368256 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532062208 unmapped: 86360064 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532070400 unmapped: 86351872 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532070400 unmapped: 86351872 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532070400 unmapped: 86351872 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532078592 unmapped: 86343680 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532078592 unmapped: 86343680 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532078592 unmapped: 86343680 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532078592 unmapped: 86343680 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532078592 unmapped: 86343680 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532078592 unmapped: 86343680 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532078592 unmapped: 86343680 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532078592 unmapped: 86343680 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532086784 unmapped: 86335488 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532094976 unmapped: 86327296 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532094976 unmapped: 86327296 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532094976 unmapped: 86327296 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532094976 unmapped: 86327296 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532094976 unmapped: 86327296 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532094976 unmapped: 86327296 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532094976 unmapped: 86327296 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532119552 unmapped: 86302720 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532119552 unmapped: 86302720 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532119552 unmapped: 86302720 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532119552 unmapped: 86302720 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532119552 unmapped: 86302720 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532119552 unmapped: 86302720 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532119552 unmapped: 86302720 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 86294528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532127744 unmapped: 86294528 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532144128 unmapped: 86278144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532144128 unmapped: 86278144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532144128 unmapped: 86278144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532144128 unmapped: 86278144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532144128 unmapped: 86278144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532144128 unmapped: 86278144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532144128 unmapped: 86278144 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532152320 unmapped: 86269952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532152320 unmapped: 86269952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532152320 unmapped: 86269952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532152320 unmapped: 86269952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532152320 unmapped: 86269952 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532160512 unmapped: 86261760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532160512 unmapped: 86261760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532160512 unmapped: 86261760 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532168704 unmapped: 86253568 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532168704 unmapped: 86253568 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532168704 unmapped: 86253568 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532168704 unmapped: 86253568 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532168704 unmapped: 86253568 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532168704 unmapped: 86253568 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532185088 unmapped: 86237184 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532185088 unmapped: 86237184 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532193280 unmapped: 86228992 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532193280 unmapped: 86228992 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532193280 unmapped: 86228992 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532193280 unmapped: 86228992 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532193280 unmapped: 86228992 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532193280 unmapped: 86228992 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532193280 unmapped: 86228992 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532201472 unmapped: 86220800 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532209664 unmapped: 86212608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532209664 unmapped: 86212608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532209664 unmapped: 86212608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532209664 unmapped: 86212608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532209664 unmapped: 86212608 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532217856 unmapped: 86204416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532217856 unmapped: 86204416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532217856 unmapped: 86204416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532217856 unmapped: 86204416 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532226048 unmapped: 86196224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532226048 unmapped: 86196224 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532234240 unmapped: 86188032 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532234240 unmapped: 86188032 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532242432 unmapped: 86179840 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532242432 unmapped: 86179840 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532242432 unmapped: 86179840 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532242432 unmapped: 86179840 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532242432 unmapped: 86179840 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532242432 unmapped: 86179840 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532242432 unmapped: 86179840 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532242432 unmapped: 86179840 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532250624 unmapped: 86171648 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532250624 unmapped: 86171648 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532250624 unmapped: 86171648 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532267008 unmapped: 86155264 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532267008 unmapped: 86155264 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532267008 unmapped: 86155264 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532267008 unmapped: 86155264 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532267008 unmapped: 86155264 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532275200 unmapped: 86147072 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532275200 unmapped: 86147072 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532283392 unmapped: 86138880 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532291584 unmapped: 86130688 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532291584 unmapped: 86130688 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532299776 unmapped: 86122496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532299776 unmapped: 86122496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532299776 unmapped: 86122496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532299776 unmapped: 86122496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532299776 unmapped: 86122496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532299776 unmapped: 86122496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532299776 unmapped: 86122496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532299776 unmapped: 86122496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532299776 unmapped: 86122496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532299776 unmapped: 86122496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532299776 unmapped: 86122496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532299776 unmapped: 86122496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532299776 unmapped: 86122496 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532307968 unmapped: 86114304 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532316160 unmapped: 86106112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532316160 unmapped: 86106112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532316160 unmapped: 86106112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532316160 unmapped: 86106112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532316160 unmapped: 86106112 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532324352 unmapped: 86097920 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532324352 unmapped: 86097920 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532332544 unmapped: 86089728 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532340736 unmapped: 86081536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532340736 unmapped: 86081536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532340736 unmapped: 86081536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532340736 unmapped: 86081536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532340736 unmapped: 86081536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532340736 unmapped: 86081536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532340736 unmapped: 86081536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532340736 unmapped: 86081536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532340736 unmapped: 86081536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532340736 unmapped: 86081536 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532348928 unmapped: 86073344 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532357120 unmapped: 86065152 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532365312 unmapped: 86056960 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532373504 unmapped: 86048768 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532373504 unmapped: 86048768 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532373504 unmapped: 86048768 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532381696 unmapped: 86040576 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532381696 unmapped: 86040576 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532381696 unmapped: 86040576 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532381696 unmapped: 86040576 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532381696 unmapped: 86040576 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532381696 unmapped: 86040576 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532389888 unmapped: 86032384 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532398080 unmapped: 86024192 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532406272 unmapped: 86016000 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532406272 unmapped: 86016000 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532406272 unmapped: 86016000 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532406272 unmapped: 86016000 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532406272 unmapped: 86016000 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532414464 unmapped: 86007808 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532414464 unmapped: 86007808 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532414464 unmapped: 86007808 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532414464 unmapped: 86007808 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532414464 unmapped: 86007808 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532414464 unmapped: 86007808 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532414464 unmapped: 86007808 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532430848 unmapped: 85991424 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532430848 unmapped: 85991424 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532430848 unmapped: 85991424 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532439040 unmapped: 85983232 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532447232 unmapped: 85975040 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532447232 unmapped: 85975040 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532455424 unmapped: 85966848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532455424 unmapped: 85966848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532455424 unmapped: 85966848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532455424 unmapped: 85966848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532455424 unmapped: 85966848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532455424 unmapped: 85966848 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532463616 unmapped: 85958656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532463616 unmapped: 85958656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532463616 unmapped: 85958656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532463616 unmapped: 85958656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 76K writes, 309K keys, 76K commit groups, 1.0 writes per commit group, ingest: 0.31 GB, 0.04 MB/s#012Cumulative WAL: 76K writes, 28K syncs, 2.71 writes per sync, written: 0.31 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 490 writes, 749 keys, 490 commit groups, 1.0 writes per commit group, ingest: 0.24 MB, 0.00 MB/s#012Interval WAL: 490 writes, 244 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532463616 unmapped: 85958656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532463616 unmapped: 85958656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532463616 unmapped: 85958656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532463616 unmapped: 85958656 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532471808 unmapped: 85950464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532471808 unmapped: 85950464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532471808 unmapped: 85950464 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: mgrc ms_handle_reset ms_handle_reset con 0x559e379e4800
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: mgrc handle_mgr_configure stats_period=5
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532553728 unmapped: 85868544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532553728 unmapped: 85868544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532553728 unmapped: 85868544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532553728 unmapped: 85868544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532553728 unmapped: 85868544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532553728 unmapped: 85868544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532553728 unmapped: 85868544 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532561920 unmapped: 85860352 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532561920 unmapped: 85860352 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532570112 unmapped: 85852160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532570112 unmapped: 85852160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532570112 unmapped: 85852160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532570112 unmapped: 85852160 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532578304 unmapped: 85843968 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532578304 unmapped: 85843968 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532578304 unmapped: 85843968 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532578304 unmapped: 85843968 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532586496 unmapped: 85835776 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532586496 unmapped: 85835776 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532586496 unmapped: 85835776 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532586496 unmapped: 85835776 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'config diff' '{prefix=config diff}'
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'config show' '{prefix=config show}'
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'counter dump' '{prefix=counter dump}'
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'counter schema' '{prefix=counter schema}'
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532537344 unmapped: 85884928 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: bluestore.MempoolThread(0x559e30685b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195789 data_alloc: 218103808 data_used: 3825664
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: prioritycache tune_memory target: 4294967296 mapped: 532201472 unmapped: 86220800 heap: 618422272 old mem: 2845415832 new mem: 2845415832
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: osd.2 430 heartbeat osd_stat(store_statfs(0x198a31000/0x0/0x1bfc00000, data 0x110a10e/0x133d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Oct  2 09:47:13 np0005466031 ceph-osd[79023]: do_command 'log dump' '{prefix=log dump}'
Oct  2 09:47:13 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 09:47:13 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/325351810' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 09:47:14 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct  2 09:47:14 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/531916238' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  2 09:47:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:47:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:14.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:47:14 np0005466031 nova_compute[235803]: 2025-10-02 13:47:14.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:14 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:14 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 09:47:14 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:14.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 09:47:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct  2 09:47:15 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2664765077' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  2 09:47:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct  2 09:47:15 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1775043447' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  2 09:47:15 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct  2 09:47:15 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2071889290' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  2 09:47:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct  2 09:47:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4037444600' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  2 09:47:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct  2 09:47:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1088956816' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  2 09:47:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:16.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct  2 09:47:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3474037624' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  2 09:47:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct  2 09:47:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3164670167' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  2 09:47:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct  2 09:47:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2775357721' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  2 09:47:16 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:16 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:16 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:16.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:16 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct  2 09:47:16 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1460811360' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  2 09:47:17 np0005466031 nova_compute[235803]: 2025-10-02 13:47:17.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:47:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct  2 09:47:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2540075283' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  2 09:47:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct  2 09:47:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3512761745' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct  2 09:47:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct  2 09:47:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2370018337' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  2 09:47:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct  2 09:47:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1780767086' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct  2 09:47:17 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  2 09:47:17 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3501681092' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  2 09:47:18 np0005466031 systemd[1]: Starting Hostname Service...
Oct  2 09:47:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct  2 09:47:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4160799232' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct  2 09:47:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct  2 09:47:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3500533200' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct  2 09:47:18 np0005466031 systemd[1]: Started Hostname Service.
Oct  2 09:47:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:18.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:18 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct  2 09:47:18 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1571223080' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct  2 09:47:18 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:18 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:18 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:18.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:19 np0005466031 nova_compute[235803]: 2025-10-02 13:47:19.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:19 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct  2 09:47:19 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3856099125' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct  2 09:47:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct  2 09:47:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2042272111' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct  2 09:47:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.002000057s ======
Oct  2 09:47:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:20.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Oct  2 09:47:20 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct  2 09:47:20 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3100741757' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  2 09:47:20 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:20 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:20 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:20.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct  2 09:47:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2931492052' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct  2 09:47:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:47:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:47:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:47:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:47:21 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct  2 09:47:21 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/889082410' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct  2 09:47:22 np0005466031 nova_compute[235803]: 2025-10-02 13:47:22.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:47:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:22.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:22 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct  2 09:47:22 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4290154286' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct  2 09:47:22 np0005466031 radosgw[82465]: ====== starting new request req=0x7f1c27c606f0 =====
Oct  2 09:47:22 np0005466031 radosgw[82465]: ====== req done req=0x7f1c27c606f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:22 np0005466031 radosgw[82465]: beast: 0x7f1c27c606f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:22.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:23 np0005466031 ceph-mon[76340]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Oct  2 09:47:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/803408113' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct  2 09:47:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:47:23 np0005466031 ceph-mon[76340]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
